Data Processing with Optimus

Supercharge big data preparation tasks for analytics and machine learning with Optimus using Dask and PySpark

Dr. Argenis Leon, Luis Aguirre Contreras
4.8 (4 Ratings) · eBook · Sep 2021 · 300 pages · 1st Edition


Chapter 1: Hi Optimus!

Optimus is a Python library that loads, transforms, and saves data, with a focus on wrangling tabular data. It provides functions designed especially to make this job easier for you, and it can use multiple engines as backends, such as pandas, cuDF, Spark, and Dask, so that you can process both small and big data efficiently.

Optimus is not a DataFrame technology: it is not a new way to organize data in memory, such as Apache Arrow, or a way to handle data on GPUs, such as cuDF. Instead, Optimus relies on these technologies to load, process, explore, and save data.

That said, this book is for anyone, especially data and machine learning engineers, who wants to simplify writing code for data processing tasks. It doesn't matter whether you want to process small or big data, on your laptop or on a remote cluster, or whether you want to load data from a database or from remote storage: Optimus provides all the tools you will need to make your data processing tasks easier.

In this chapter, we will learn how Optimus was born and about all the DataFrame technologies you can use as backends to process data. Then, we will learn which features set Optimus apart from the various DataFrame technologies. After that, we will install Optimus and JupyterLab so that we are prepared to code in Chapter 2, Data Loading, Saving, and File Formats.

Finally, we will analyze some of Optimus's internal functions to understand how it works and how you can take advantage of some of the more advanced features.

A key point: this book will not try to explain how every DataFrame technology works. There are plenty of resources on the internet that explain the internals and the day-to-day use of these technologies. Optimus is the result of an attempt to create an expressive and easy-to-use data API and give users most of the tools they need to complete the data preparation process as easily as possible.

The topics we will be covering in this chapter are as follows:

  • Introducing Optimus
  • Installing everything you need to run Optimus
  • Using Optimus
  • Discovering Optimus internals

Technical requirements

To take full advantage of this chapter, please ensure you implement everything specified in this section.

Optimus can work with multiple backend technologies to process data, including GPUs. For GPUs, Optimus uses RAPIDS, which needs an NVIDIA card. For more information about the requirements, please go to the GPU configuration section.

To use RAPIDS on Windows 10, you will need the following:

  • Windows 10 version 2004 (OS build 20201.1000 or later)
  • NVIDIA driver version 455.41 with CUDA SDK v11.1

You can find all the code for this chapter at https://github.com/PacktPublishing/Data-Processing-with-Optimus.

Introducing Optimus

Development of Optimus began as part of another project. In 2016, Alberto Bonsanto, Hugo Reyes, and I had an ongoing big data project for a national retail business in Venezuela. We learned how to use PySpark and Trifacta to prepare and clean data and to find buying patterns.

But problems soon arose with both technologies: the data had different category/product names over the years and a 10-level categorization tree, and it came from different sources, including CSV files, Excel files, and databases, which added extra steps to our workflow and could not be easily wrangled. Trifacta, on the other hand, required us to learn its unique syntax. It also lacked some features we needed, such as the ability to remove a single character from every column in the dataset. On top of that, the tool was closed source.

We thought we could do better. We wanted to write an open source, user-friendly library in Python that would let even inexperienced users apply functions to clean, prepare, and plot big data using PySpark.

From this, Optimus was born.

After that, we integrated other technologies. The first one we wanted to include was cuDF, which can process data up to 20x faster; soon after that, we also integrated Dask, Dask-cuDF, and Ibis. You may be wondering, why so many DataFrame technologies? To answer that, we need to understand a little bit more about how each one works.

Exploring the DataFrame technologies

There are many well-known DataFrame technologies available today. Optimus can process data using one or more of them, including pandas, Dask, cuDF, Dask-cuDF, Spark, Vaex, and Ibis.

Let's look at some of the ones that work with Optimus:

  • pandas is, without a doubt, one of the most popular DataFrame technologies. If you work with data in Python, you probably use pandas a lot, but it has an important caveat: pandas cannot use multiple cores. This means that you cannot harness all the power that modern CPUs can give you without finding a hacky workaround. Also, you cannot process data volumes greater than the available RAM, so you need to write code to process your data in chunks.
  • Dask came out to help parallelize Python data processing. In Dask, we have the Dask DataFrame, an equivalent of the pandas DataFrame that can be processed in parallel using multiple cores, as well as across nodes in a cluster. This gives the user the power to scale out data processing to hundreds of machines in a cluster: you can start 100 machines, process your data, and shut them down, quickly, easily, and cheaply. It also supports out-of-core processing, which means that it can process data volumes greater than the available RAM (see the sketch after this list).
  • At the user level, cuDF and Dask-cuDF work in almost the same way as pandas and Dask, but up to 20x faster for most operations when using GPUs. Although GPUs are expensive, they give you more value for money compared to CPUs because they can process data much faster.
  • Vaex is growing in relevance in the DataFrame landscape. It can process data out of core, is easier to use than Dask and PySpark, and is optimized to process strings and statistics in parallel thanks to its underlying C++ implementation.
  • Ibis is gaining traction too. The amazing thing about Ibis is that it can use multiple engines (like Optimus, but focused on SQL engines), letting you write Python code that is translated into SQL and run on Impala, MySQL, PostgreSQL, and so on.
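
To make the out-of-core idea concrete, here is a minimal sketch using plain Dask (not Optimus); the file name sales.csv and the price column are placeholders:

import dask.dataframe as dd

# Lazily read a CSV that may be larger than RAM; Dask splits it into
# partitions that can be processed in parallel across cores or nodes
df = dd.read_csv("sales.csv", blocksize="64MB")

# Build the computation graph (nothing runs yet)...
mean_price = df["price"].mean()

# ...and only materialize the result when .compute() is called
print(mean_price.compute())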

The following table provides a quick-glance comparison of several of these technologies:

Figure 1.1 – DataFrame technologies and capabilities available in Optimus

(*) Depends on the engine that's been configured

There are some clear guidelines regarding when to use each engine:

  • Use pandas if the DataFrame fits comfortably in memory, or cuDF if you have GPUs and the data fits in memory. This is almost always faster than using distributed DataFrame technologies under the same circumstances. This works best for real-time or near-real-time data processing.
  • Use Dask if you need to process data greater than memory, and Dask-cuDF if you have data larger than memory and a multi-core and/or multi-node GPU infrastructure.
  • Use Vaex if you have a single machine and data larger than memory, or Spark if you need to process data at the terabyte scale; note that Spark is slow for small datasets that fit in memory.

Now that you understand this, you can unleash Optimus's magic and start preparing data using the same Optimus API in any of the engines available.

Examining Optimus design principles

A key point about Optimus is that we are not trying to create a new DataFrame technology. As we've already seen, there are actually many amazing options that cover almost any use case. The purpose of Optimus is to simplify how users handle data and give that power to people who may not have any technical expertise. For that reason, Optimus follows three principles:

  • One API to rule them all.
  • Knowing the technology is optional.
  • Data types should be as rich as possible.

What do these mean? Let's look at each in detail.

One API to rule them all

Almost all DataFrame technologies try to mimic the pandas API. However, there are subtle differences in what the same function does depending on the technology you run it on; with Optimus, we want to abstract all of this away.

We'll go into more detail about this later, but here's a quick example: you can calculate the square root of a column using the .cols accessor, like so:

from optimus import Optimus

op = Optimus("dask")  # start Optimus with the Dask engine
df = op.create.dataframe({"A": [0, -1, 2, 3, 4, 5]})
df = df.cols.sqrt("A")  # the square root of -1 yields a missing value

If you want to switch from Dask to any other engine, you can use any of the following values. Each one will instantiate a different Optimus DataFrame class:

  • "pandas" to use Pandas. This will instantiate a pandas DataFrame.
  • "dask" to use Dask. This will instantiate a DaskDataFrame.
  • "cudf" to use cuDF. This will instantiate a CUDFDataFrame.
  • "dask_cudf" to use Dask-cuDF. This will instantiate a DaskCUDFDataFrame.
  • "spark" to use PySpark. This will instantiate a SparkDataFrame.
  • "vaex" to use Vaex. This will instantiate a VaexDataFrame.
  • "ibis" to use Ibis. This will instantiate an IbisDataFrame.

An amazing thing about this flexibility is that you can process a sample of the data on your laptop using pandas, and then send a job to Dask-cuDF or a Spark cluster to process it using the faster engine.
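
Here is a minimal sketch of that workflow, assuming only the engine strings listed above (cluster setup details are omitted):

from optimus import Optimus

# Prototype locally on a small sample with the pandas engine
op = Optimus("pandas")
df = op.create.dataframe({"A": [0, -1, 2, 3, 4, 5]})
df = df.cols.sqrt("A")

# Once the logic works, only the engine string needs to change
# (assuming the corresponding engine and cluster are available)
op = Optimus("dask_cudf")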

Knowing the technical details is optional

pandas is complex. Users need to handle technical details such as rows, indexes, series, and masks, and they need to go low level with NumPy/Numba to get all the power from the CPU/GPU.

With Numba, users can gain serious speed improvements when processing numerical data. It translates Python functions into optimized machine code at runtime. This simply means that we can write faster functions on CPU or GPU. For example, when we request a histogram using Optimus, the minimum and maximum values of a column are calculated in a single pass.
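
As a rough illustration of what Numba does (this is plain Numba, not Optimus internals, and min_max is our own function name):

import numpy as np
from numba import njit

@njit
def min_max(values):
    # A single pass over the data computes both statistics at once;
    # Numba compiles this loop to optimized machine code at runtime
    lo, hi = values[0], values[0]
    for v in values:
        if v < lo:
            lo = v
        elif v > hi:
            hi = v
    return lo, hi

data = np.random.rand(10_000_000)
print(min_max(data))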

In Optimus, we try to take the fastest approach for every operation, without requiring extensive knowledge of the technical details, so that you can take full advantage of the technology. That is Optimus's job.

Some other DataFrame features that are abstracted away in Optimus include indexes, series, and masks (the exception is PySpark). In Optimus, you only have columns and rows; the intention is to use concepts familiar from spreadsheets so that you have a soft landing when you start using Optimus.

In Optimus, you have two main accessors, .cols and .rows, that provide most of the transformation methods that are available. For example, you can use df.cols.lower to transform all the values of a column into lowercase, while you can use df.rows.drop_duplicates to drop duplicated rows in the entire dataset. Examples of these will be addressed later in this book.
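
As a minimal preview (the column name and values here are placeholders, and the exact signatures may vary):

from optimus import Optimus

op = Optimus("pandas")
df = op.create.dataframe({"name": ["OPTIMUS", "Bumblebee", "OPTIMUS"]})

# Column accessor: lowercase every value in the "name" column
df = df.cols.lower("name")

# Row accessor: drop duplicated rows from the entire dataset
df = df.rows.drop_duplicates()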

Data types should be as rich as possible

All DataFrame technologies have data types to represent integers, decimals, times, and dates. In pandas and Dask, you can use NumPy data types to assign different integer types, such as int8, int16, int32, and int64, or different decimal types, such as float32, float64, and so on.

This gives the user a lot of control over how the data is stored and reduces the total size of the data in memory and on disk. For example, if you have 1 million rows with values between 1 and 10, you can save the data as uint8 instead of int64 to reduce its size.
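
A quick check with plain pandas (not Optimus) shows the size difference:

import numpy as np
import pandas as pd

# One million values between 1 and 10, stored two different ways
s64 = pd.Series(np.random.randint(1, 11, size=1_000_000), dtype="int64")
s8 = s64.astype("uint8")

print(s64.memory_usage(deep=True))  # roughly 8 MB
print(s8.memory_usage(deep=True))   # roughly 1 MB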

Besides this internal data representation, Optimus can infer and detect a richer set of data types so that you can understand what data in a column matches a specific data type (URL, email, date, and so on) and then apply specific functions to handle it.

In Optimus, we use the term quality to express three data characteristics:

  • Number of values that match the data type being inferred
  • Number of values that differ from the data type being inferred
  • Number of missing values

Using the df.cols.quality method, Optimus can infer the data type of every loaded column and return how many values in each column match that data type. In the case of date data types, Optimus can also infer the date format.
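
A sketch of how this looks (df.cols.quality comes from the text above, but the example data and the output shape in the comment are our assumptions):

df = op.create.dataframe({"email": ["a@example.com", "b@example.com",
                                    "not-an-email", None]})

# Infer the column's data type and count matches, mismatches, and
# missing values; the exact structure of the result may differ
print(df.cols.quality())
# e.g. {"email": {"match": 2, "mismatch": 1, "missing": 1,
#                 "inferred_type": "email"}}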

The following list shows all the data types that Optimus can detect:

  • Integer
  • String
  • Email
  • URL
  • Gender
  • Boolean
  • US ZIP code
  • Credit card
  • Time and date format
  • Object
  • Array
  • Phone number
  • Social Security number
  • HTTP code

Many of these data types have special functions that you can use, as follows (see the sketch after this list):

  • URL: Schemas, domains, extensions, and query strings
  • Date: Year, month, and day
  • Time: Hours, minutes, and seconds
  • Email: Domains and domain extensions
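
As a purely illustrative sketch (the helper names below, such as df.cols.domain, are hypothetical and only convey the idea of type-specific functions):

df = op.create.dataframe({"url": ["https://packt.com/catalog?topic=data"]})

# Hypothetical type-specific helpers: extract parts of a URL-typed
# column into new columns (names and signatures are illustrative)
df = df.cols.domain("url")
df = df.cols.url_scheme("url")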

The best part of this is that you can define your own data types and create special functions for them. We will learn more about this later in this book. We will also learn about the functions we can use to process or remove matches, mismatches, and missing values.

Now that we've had a look at how Optimus works, let's get it running on your machine.

Installing everything you need to run Optimus

To start using Optimus, you will need a laptop running Windows, Ubuntu, or macOS with support for Python, PIP packages, and Conda. If you are new to Python, PIP is Python's main package manager. It allows Python users to install and manage packages that expand the Python standard library's functionality.

The easiest way to install Optimus is through PIP. It will allow us to start running examples in just a few minutes. Later in this section, we will see some examples of Optimus running on a notebook, on a shell terminal, and on a file read by Python, but before that, we will need to install Optimus and its dependencies.

First, let's install Anaconda.

Installing Anaconda

Anaconda is a free and open source distribution of the Python and R programming languages. The distribution comes with the Python interpreter, Conda, and various packages related to machine learning and data science, so that you can get started more easily and quickly.

To install Anaconda on any system, you can use an installer or install it through a system package manager. In the case of Linux and macOS, you can install Anaconda using APT or Homebrew, respectively.

On Linux, use the following command:

sudo apt-get install anaconda # on Linux

On macOS, use Homebrew:

brew cask install anaconda # on macOS

For Windows, go to https://www.anaconda.com/products/individual. Download the installer file that best matches your system and double-click the file after downloading it to start the installation process.

With Anaconda now installed, let's install Optimus.

Installing Optimus

With Anaconda installed, we can use Conda to install Optimus.

As stated on the Conda website, Conda provides "package, dependency, and environment management for any language." With Conda, we can manage multiple Python environments without polluting our system with dependencies. For example, you could create a Conda environment that uses Python 3.8 and pandas 0.25, and another with Python 3.7 and pandas 1.0. Let's take a look:

  1. To start, we need to open the Anaconda Prompt. This is just the command-line interface that comes with Conda:
    • For Windows: From the Start menu, search for and open Anaconda Prompt.
    • For macOS: Open Launchpad and click the Terminal icon.
    • For Linux: Open a Terminal window.
  2. Now, in the terminal, we are going to create a Conda environment named Optimus to create a clean Optimus installation:
    conda create -n optimus python=3.8
  3. Now, you need to change from the (base) environment to the (optimus) environment using the following command:
    conda activate optimus
  4. Running the following command on your terminal will install Optimus with its basic features, ready to be tested:
    pip install pyoptimus
  5. If you have done this correctly, running a simple test will confirm that everything is working:
    python -c "import optimus; print(optimus.__version__)"

    Now, we are ready to use Optimus!

We recommend using Jupyter Notebook, too.

Installing JupyterLab

If you have not been living under a rock for the last 5 years, you probably know about Jupyter Notebook. JupyterLab is the next generation of Jupyter Notebook: it is a web-based interactive development environment for coding. Jupyter (for short) will help us test and modify our code easily, and will let us try out our code faster. Let's take a look:

  1. To install JupyterLab, go to the Terminal, as explained in the Installing Optimus section, and run the following command:
    conda install -c conda-forge jupyterlab
  2. At this point, you could simply run Jupyter. However, we are going to install a couple of handy extensions for debugging Dask and tracking GPU utilization and RAM usage:
    conda install nodejs
    conda install -c conda-forge dask-labextension
    jupyter labextension install dask-labextension
    jupyter serverextension enable dask_labextension
  3. Now, let's run Jupyter using the following command:
    jupyter lab --ip=0.0.0.0
  4. You can access Jupyter using any browser:
Figure 1.2 – JupyterLab UI

Next, let's look at how to install RAPIDS.

Installing RAPIDS

There are some extra steps you must take if you want to use a GPU engine with Optimus.

RAPIDS is a set of libraries developed by NVIDIA for handling end-to-end data science pipelines using GPUs; cuDF and Dask-cuDF are among these libraries. Optimus can use both to process data in a local and distributed way.

For RAPIDS to work, you will need a GPU, NVIDIA Pascal™ or better, with a compute capability of 6.0+. You can check the compute capability by looking at the tables on the NVIDIA website: bit.ly/cc_gc.

First, let's install RAPIDS on Windows.

Installing RAPIDS on Windows 10

RAPIDS is not fully supported on Windows at the time of writing (December 2020), so you must use the Windows Subsystem for Linux version 2 (WSL2). WSL is a Windows 10 feature that enables you to run native Linux command-line tools directly on Windows.

You will need the following:

  • Windows 10 version 2004 (OS build 20201.1000 or later). You must sign up for the Windows Insider Program, specifically the Dev Channel; this is required for the WSL2 VM to have GPU access: https://insider.windows.com/en-us/.
  • NVIDIA driver version 455.41 with CUDA SDK v11.1. You must use a special version of the NVIDIA CUDA drivers, which you can get by downloading them from NVIDIA's site. You must join the NVIDIA Developer Program to get access to this version; searching for WSL2 CUDA Driver should lead you to it.

Here are the steps:

  1. Install the developer preview version of Windows. Make sure that you click the checkbox next to Update to install other recommended updates too.
  2. Install the Windows CUDA driver from the NVIDIA Developer Program.
  3. Enable WSL 2 by enabling the Virtual Machine Platform optional feature. You can find more steps here: https://docs.microsoft.com/en-us/windows/wsl/install-win10.
  4. Install WSL from the Windows Store (Ubuntu-20.04 is confirmed to be working).
  5. Install Python on the WSL VM (tested with Anaconda).
  6. Go to the Installing RAPIDS section of this chapter.

Installing RAPIDS on Linux

First, you need to install the CUDA and NVIDIA drivers. Pay special attention if your machine is running code that depends on a specific CUDA version. For more information about the compatibility between the CUDA and NVIDIA drivers, check out bit.ly/cuda_c.

If you do not have a compatible GPU, you can use a cloud provider such as Google Cloud Platform, Amazon, or Azure.

In this case, we are going to use Google Cloud Platform. As of December 2020, new accounts get 300 USD of credit to use. After creating an account, you can set up a VM instance to install RAPIDS.

To create a VM instance on Google Cloud Platform, follow these steps:

  1. First, go to the hamburger menu, click Compute Engine, and select VM Instances.
  2. Click on CREATE INSTANCE. You will be presented with a screen that looks like this:
    Figure 1.3 – Google Cloud Platform instance creation

  3. Select a region that can provide a GPU. Not all zones have GPUs available. For a full list, check out https://cloud.google.com/compute/docs/gpus.
  4. Make sure you choose N1 series from the dropdown.
  5. Be sure to select an OS that's compatible with the CUDA drivers (check the options available at https://developer.nvidia.com/cuda-downloads). The installation will use about 30 GB of storage space, so make sure you assign enough disk space:
    Figure 1.4 – Google Cloud Platform OS selection

  6. Check the Allow HTTP traffic option:
    Figure 1.5 – Google Cloud Platform firewall settings

  7. To finish, click the Create button at the bottom of the page:
Figure 1.6 – Google Cloud instance creation

Now, you are ready to install RAPIDS.

Installing RAPIDS

After checking that your GPU works with Optimus, go to https://rapids.ai/start.html. Select the options that match your requirements and copy the command shown in the command section into your command-line interface:

Figure 1.7 – RAPIDS release selector

After the installation process is complete, you can test RAPIDS by importing cuDF and printing its version:

python -c "import cudf; print(cudf.__version__)"

Next, let's learn how to install Coiled for easier setups.

Using Coiled

Coiled is a deployment-as-a-service library for scaling Python that provisions Dask and Dask-cuDF clusters for users. It takes the DevOps out of the data role, enabling data professionals to spend less time setting up networking, managing fleets of Docker images, creating AWS IAM roles, and handling other setup tasks, so that they can spend more time on their real job.

To use a Coiled cluster with Optimus, we can just pass a minimal configuration to our Optimus initialization function, including the token provided by Coiled as a parameter. To get this token, you must create an account at https://cloud.coiled.io and copy it from your dashboard, like so:

op = Optimus(coiled_token="<your token here>", n_workers=2)

In this example, we initialized Optimus using a Coiled token, and set the number of workers to 2. Optimus will initialize a Dask DataFrame and handle the connection to the cluster that was created by Coiled. After this, Optimus will work as normal.

When using Coiled, it's important to maintain the same package versions between the remote cluster and your local machine. For this, you can install a Coiled software environment as a local conda environment using its command-line tool. For Optimus, we will use a specific software environment called optimus/default:

coiled install optimus/default 
conda activate coiled-optimus-default

In the preceding example, we told coiled install to create the conda environment and then used conda activate to start using it.

Using a Docker container

If you know how to use Docker and you have it installed on your system, you can use it to quickly set up Optimus in a working environment.

To use Optimus in a Docker environment, simply run the following command:

docker run -p 8888:8888 --network="host" optimus-df/optimus:latest

This will pull the latest version of the Optimus image from Docker Hub and run a notebook process inside it. You will see something like the following:

To access the notebook, open this file in a browser:
    file://...
Or copy and paste one of these URLs:
    http://127.0.0.1:8888/?token=<GENERATED TOKEN>

Just copy the address and paste it into your browser, making sure it has the same token, and you'll be using a notebook with Optimus installed in its environment.


Key benefits

  • Load, merge, and save small and big data efficiently with Optimus
  • Learn Optimus functions for data analytics, feature engineering, machine learning, cross-validation, and NLP
  • Discover how Optimus improves on other DataFrame technologies and helps you speed up your data processing tasks

Description

Optimus is a Python library that works as a unified API for data cleaning, processing, and merging. It can be used for handling small and big data on your local laptop or on remote clusters using CPUs or GPUs. The book begins by covering the internals of Optimus and how it works in tandem with existing technologies to serve your data processing needs. You'll then learn how to use Optimus for loading and saving data from text data formats such as CSV and JSON files, exploring binary files such as Excel, and for columnar data processing with Parquet, Avro, and ORC. Next, you'll get to grips with the profiler and its data types, a unique feature of the Optimus DataFrame that assists with data quality. You'll see how to use the plots available in Optimus, such as histograms, frequency charts, and scatter and box plots, and understand how Optimus lets you connect to libraries such as Plotly and Altair. You'll also delve into advanced applications such as feature engineering, machine learning, cross-validation, and natural language processing functions, and explore the advancements in Optimus. Finally, you'll learn how to create data cleaning and transformation functions and add a hypothetical new data processing engine with Optimus. By the end of this book, you'll be able to improve your data science workflow with Optimus easily.

Who is this book for?

This book is for Python developers who want to explore, transform, and prepare big data for machine learning, analytics, and reporting using Optimus, a unified API for working with pandas, Dask, cuDF, Dask-cuDF, Vaex, and Spark. Beginner-level knowledge of Python will be helpful. Basic knowledge of the CLI is required to install Optimus and its requirements. To use the GPU technologies, you'll need an NVIDIA graphics card compatible with NVIDIA's RAPIDS library, which works with Windows 10 and Linux.

What you will learn

  • Use over 100 data processing functions on columns and other string-like values
  • Reshape and pivot data to get the output in the required format
  • Find out how to plot histograms, frequency charts, scatter plots, box plots, and more
  • Connect Optimus with popular Python visualization libraries such as Plotly and Altair
  • Apply string clustering techniques to normalize strings
  • Discover functions to explore, fix, and remove poor quality data
  • Use advanced techniques to remove outliers from your data
  • Add engines and custom functions to clean, process, and merge data

Product Details

Publication date : Sep 03, 2021
Length : 300 pages
Edition : 1st
Language : English
ISBN-13 : 9781801077750



Table of Contents

Section 1: Getting Started with Optimus
Chapter 1: Hi Optimus!
Chapter 2: Data Loading, Saving, and File Formats
Section 2: Optimus – Transform and Rollout
Chapter 3: Data Wrangling
Chapter 4: Combining, Reshaping, and Aggregating Data
Chapter 5: Data Visualization and Profiling
Chapter 6: String Clustering
Chapter 7: Feature Engineering
Section 3: Advanced Features of Optimus
Chapter 8: Machine Learning
Chapter 9: Natural Language Processing
Chapter 10: Hacking Optimus
Chapter 11: Optimus as a Web Service
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.8 (4 Ratings)
5 star: 75% · 4 star: 25% · 3 star: 0% · 2 star: 0% · 1 star: 0%

Manuel Perche – Sep 04, 2021 – 5 stars – Amazon Verified review
As an enthusiast of Python and big data, I found this book very useful. It explains everything in detail and provides the necessary context for developers just starting to learn; Optimus provides all the tools and simplifies what you need to handle data processing.

Javier – Sep 08, 2021 – 5 stars – Amazon Verified review
Currently I work at a major tech company in Latin America, and I can say that this book is a very valuable asset, since it greatly speeds up the process of working with all kinds of datasets as well as processing units like CPU, GPU, and so forth. The data wrangling section teaches you everything you need to know about the possible data transformations using different functions, all through the advanced features of Optimus; it is highly accessible, since it lets you hack with it in order to implement your own engines if necessary. Definitely a great choice in the Data Science field.

Maria Rubio – Sep 26, 2021 – 5 stars – Amazon Verified review
This book is incredibly valuable for those learning Python and Pandas and looking to improve their workflow. Optimus makes data processing better, and this detailed guide helped me achieve just that.

srija – Oct 13, 2021 – 4 stars – Amazon Verified review
This book is excellent for building expertise in pandas and its applications. It is also a detailed guide to learning Optimus, a great tool that simplifies tedious data processing tasks.
