Data Processing with Optimus

Supercharge big data preparation tasks for analytics and machine learning with Optimus using Dask and PySpark

By Dr. Argenis Leon and Luis Aguirre Contreras
Published by Packt, September 2021 · 1st Edition · 300 pages · ISBN-13 9781801079563
Table of Contents
Preface
Section 1: Getting Started with Optimus
  Chapter 1: Hi Optimus!
  Chapter 2: Data Loading, Saving, and File Formats
Section 2: Optimus – Transform and Rollout
  Chapter 3: Data Wrangling
  Chapter 4: Combining, Reshaping, and Aggregating Data
  Chapter 5: Data Visualization and Profiling
  Chapter 6: String Clustering
  Chapter 7: Feature Engineering
Section 3: Advanced Features of Optimus
  Chapter 8: Machine Learning
  Chapter 9: Natural Language Processing
  Chapter 10: Hacking Optimus
  Chapter 11: Optimus as a Web Service
Other Books You May Enjoy

Summary

Loading and saving are the most frequently used operations when wrangling data. Optimus provides a workflow for creating connections to data sources that can be reused for both loading and saving. It also supports the most widely used file storage services, such as Amazon S3 and Google Cloud Storage, as well as database connections, such as PostgreSQL and MySQL, so that you have all the necessary tools at hand to make your work easier.
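
To make that workflow concrete, the sketch below shows the pattern of creating a connection once and reusing it for loading and saving. The engine selection (Optimus("pandas")), op.load.csv, and df.save.csv follow the API used throughout this book; the op.connect.s3 helper, its parameters, and the conn= argument are illustrative assumptions rather than the exact signatures covered earlier in this chapter.

    from optimus import Optimus

    op = Optimus("pandas")  # or "dask" / "spark", depending on the engine you use

    # Create a connection once and reuse it for both loading and saving.
    # The connection helper and its parameters below are hypothetical examples.
    conn = op.connect.s3(endpoint_url="https://s3.amazonaws.com",
                         bucket="my-data-bucket")

    df = op.load.csv("raw/customers.csv", conn=conn)   # load through the connection
    df = df.cols.lower("*")                            # any wrangling step
    df.save.csv("clean/customers.csv", conn=conn)      # save back through the same connection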

For databases in particular, we looked at the drivers that each engine requires in order to load data from and save data to a database.
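
As a concrete example, for Python-based engines these drivers are ordinary PyPI packages that must be installed before a connection can be opened. The mapping below lists common choices; these are standard third-party packages, not part of Optimus itself.

    # Common Python database drivers (standard PyPI packages, not Optimus-specific):
    #   PostgreSQL -> psycopg2-binary
    #   MySQL      -> pymysql (or mysql-connector-python)
    #   SQLite     -> sqlite3 (bundled with the Python standard library)
    # For example:
    #   pip install psycopg2-binary pymysql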

We also explored how to optimize a dataframe's memory usage, a very important step when handling big data, since it can reduce memory consumption by as much as 50%.
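
The idea behind that optimization can be sketched in plain pandas: downcast numeric columns to the smallest type that holds their values and convert low-cardinality string columns to the categorical dtype. The helper below is a hypothetical illustration of the technique, not the Optimus method itself.

    import pandas as pd

    def shrink(df: pd.DataFrame) -> pd.DataFrame:
        # Hypothetical helper illustrating the technique; not an Optimus API.
        for col in df.select_dtypes("integer").columns:
            df[col] = pd.to_numeric(df[col], downcast="integer")
        for col in df.select_dtypes("float").columns:
            df[col] = pd.to_numeric(df[col], downcast="float")
        for col in df.select_dtypes("object").columns:
            if df[col].nunique() / len(df) < 0.5:   # low-cardinality strings
                df[col] = df[col].astype("category")
        return df

    # Compare df.memory_usage(deep=True).sum() before and after to see the savings.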

In the next chapter, we will start exploring some basic methods for filtering, deduplicating, and transforming data for further analysis.
