Polars Cookbook

Polars Cookbook: Over 60 practical recipes to transform, manipulate, and analyze your data using Python Polars 1.x

eBook: AU$38.99 (was AU$55.99)
Paperback: AU$68.99

What do you get with a Packt Subscription?

Free for first 7 days. $24.99 p/m after that. Cancel any time!
  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.


Reading and Writing Files

Reading and writing files is a fundamental step in any data workflow: every data processing pipeline has inputs and outputs. Knowing how to get data in and out of Polars effectively is essential to a successful implementation.

Polars can also read and write files in lazy mode via its scan and sink methods. Scans push predicates (filters) and projections (column selections) down to the file-reading level, while sinks stream results to disk, letting you write output that is larger than RAM.

In this chapter, you’ll learn how to read and write various file formats.

We will explore the following recipes in this chapter:

  • Reading and writing CSV files
  • Reading and writing Parquet files
  • Reading and writing Delta Lake tables
  • Reading and writing JSON files
  • Reading and writing Excel files
  • Reading and writing other data file formats
  • Reading and writing multiple files
  • Working with databases

Technical requirements

You can download the datasets and code for this chapter from the GitHub repository:

Also, it is assumed that you have installed the Polars library in your Python environment:

>>> pip install polars

It is also assumed that you have imported it in your code:

import polars as pl

Reading and writing CSV files

Comma-separated values (CSV) is one of the most commonly used file formats for storing data. If you have worked with another DataFrame library such as pandas, the way you read a CSV file in Polars may feel familiar.

In this recipe, we’ll examine how to read and write a CSV file in Polars with some parameters. We’ll also look at how we can do the same in a LazyFrame.

How to do it...

Here are the steps and examples for how to read and write CSV files in Polars:

  1. Read the customer_shopping_data.csv dataset into a DataFrame:
    df = pl.read_csv('../data/customer_shopping_data.csv')
    df.head()

    The preceding code will return the following output:

Figure 2.1 – The first five rows of the customer shopping dataset

  2. If the CSV file doesn’t have a header, Polars will treat the first row as the header:
    df = pl.read_csv('../data/customer_shopping_data_no_header...

Reading and writing Parquet files

The Parquet file format is an open source columnar file format that’s efficient for data storage and processing. Its column-oriented layout suits analytical workloads and compresses efficiently, which is why Parquet is very common in big data analytics.

In this recipe, you will learn how to read and write Parquet files in both a DataFrame and LazyFrame.

Getting ready

Toward the end of the recipe, you’ll need the pyarrow library. If you haven’t yet installed it, run the following command:

>>> pip install pyarrow

How to do it...

We’ll first cover reading a Parquet file:

  1. Read a Parquet file:
    parquet_input_file_path = '../data/venture_funding_deals.parquet'
    df = pl.read_parquet(
        parquet_input_file_path,
        columns=['Company', 'Amount', 'Valuation', 'Industry'],
        row_index_name='row_cnt'
    )
    df.head()

    The preceding code...

Reading and writing Delta Lake tables

Delta Lake is an open source storage layer built on top of the Parquet format. It adds features that Parquet alone lacks, such as versioning and ACID guarantees; it’s essentially Parquet with some additional benefits.

Many data pipelines nowadays are built on a lakehouse architecture, a mix of data lakes and data warehouses. Delta Lake is a popular option used by many companies: Delta Lake tables are stored in your data lake, yet can be queried and used like relational tables. Polars being able to work with Delta Lake tables is therefore a big plus.

In this recipe, we’ll look at how to read and write Delta Lake tables with a few useful parameters.

Getting ready

This recipe requires you to install another Python library, deltalake. It’s a dependency required for Polars to work with Delta Lake tables. Run the following command to install it in your Python environment:

>>> pip install deltalake

Reading and writing JSON files

JavaScript Object Notation (JSON) is an open source file format used to store and transport data. It can easily be parsed into a JavaScript object. JSON is language independent and is used in projects with other programming languages that require a lightweight data exchange format. JSON stores and represents data as key-value pairs. In Python terms, JSON is very much like data that is stored in Python dictionaries.

In this recipe, we’ll cover how to read and write JSON files in Polars. We’ll also cover how to work with a different variation of JSON: Newline Delimited JSON (NDJSON). It is also called JSON Lines (JSONL) or Line-Delimited JSON (LDJSON). As the name suggests, each line is a JSON object.

How to do it...

Next, we’ll dive into how to work with JSON files in Polars:

  1. Read a JSON file, showing the first 10 columns:
    df = pl.read_json('../data/world_population.json')
    df.select(df.columns[:10]).head(...

Reading and writing Excel files

Excel remains one of the most popular data analysis tools out there, and the one most of us are familiar with. Being able to work with Excel in Polars is essential for data analysts. In this recipe, we’ll go through reading and writing Excel files, as well as utilizing some of the useful parameters involved.

Getting ready

This recipe requires a few Python libraries on top of Polars. You can install them with the following command:

>>> pip install xlsx2csv xlsxwriter

How to do it...

We’ll cover how to read and write Excel files using the following steps:

  1. Let’s first read a CSV file into a DataFrame and write it to an Excel file:
    output_file_path = '../data/output/financial_sample_output.xlsx'
    df = pl.read_csv('../data/customer_shopping_data.csv')
    df.write_excel(
        output_file_path,
        worksheet='Output Sheet1',
    &...

Reading and writing other data file formats

There are many other formats aside from the ones introduced earlier, and Polars keeps adding support for more in its frequent updates. We’ll go over a few of these other data file formats to read from and write to in Polars.

In this recipe, we’ll cover reading from and/or writing to the Arrow IPC format, Apache Avro, and Apache Iceberg.

These file formats are not as common as the other ones we covered in earlier recipes. However, there are still use cases where companies and people need to work with these formats.

Getting ready

You’ll need to install a few libraries besides Polars for this recipe: pyiceberg, numpy, and pyarrow. Run the following commands in a terminal to install them if you haven’t already:

>>> pip install pyiceberg
>>> pip install numpy
>>> pip install pyarrow

How to do it...

Here are the steps for working with other data...

Reading and writing multiple files

When working on actual data projects, there are cases where data is split into multiple files in a directory. Dealing with each file one by one can be a pain and may distract you from working on other critical components of your project.

In this recipe, we’ll cover reading multiple files into a single DataFrame or into multiple DataFrames, as well as writing a DataFrame to multiple files.

How to do it...

Here are some ways to work with multiple files:

  1. Write a DataFrame to multiple CSV files:
    1. Create a DataFrame:
    data = {'Letter': ['A','B','C'], 'Value': [1,2,3]}
    df = pl.DataFrame(data)
    2. Split it into multiple DataFrames:
    dfs = df.group_by(['Letter'])
    print(dfs)

    The preceding code will return the following output:

    >> <polars.dataframe.group_by.GroupBy object at 0x154373390>
    3. Write them to CSV files:
    for name, df in dfs:
        df.write_csv(f'...

Working with databases

In addition to working with files, it’s very common to work with databases. Polars has integrations with various databases, whether they’re hosted in the cloud or on-premises. Once you understand how to work with one database in Polars, you can apply the same patterns to various other databases.

In this recipe, we’ll specifically look at ways to read from and write to a popular database: Postgres. We’ll look at how to work with cloud databases in Chapter 11, Working with Common Cloud Data Sources.

Getting ready

This recipe requires a few additional dependencies. You’ll need to install the following libraries:

  • connectorx
  • adbc-driver-postgresql
  • pyarrow
  • pg8000 (or psycopg2)

You’ll also need to have a Postgres database on your local machine. You can refer to the following websites for more information on how to install a Postgres database locally:


Key benefits

  • Unlock the power of Python Polars for faster and more efficient data analysis workflows
  • Master the fundamentals of Python Polars with step-by-step recipes
  • Discover data manipulation techniques to apply across multiple data problems
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

The Polars Cookbook is a comprehensive, hands-on guide to Python Polars, one of the first resources dedicated to this powerful data processing library. Written by Yuki Kakegawa, a seasoned data analytics consultant who has worked with industry leaders like Microsoft and Stanford Health Care, this book offers targeted, real-world solutions to data processing, manipulation, and analysis challenges. The book also includes a foreword by Marco Gorelli, a core contributor to Polars, ensuring expert insights into Polars' applications.

From installation to advanced data operations, you’ll be guided through data manipulation, advanced querying, and performance optimization techniques. You’ll learn to work with large datasets, conduct sophisticated transformations, leverage powerful features like chaining, and understand its caveats. This book also shows you how to integrate Polars with other Python libraries such as pandas, NumPy, and PyArrow, and explores deployment strategies for both on-premises and cloud environments like AWS, BigQuery, GCS, Snowflake, and S3.

With use cases spanning data engineering, time series analysis, statistical analysis, and machine learning, Polars Cookbook provides essential techniques for optimizing and securing your workflows. By the end of this book, you'll possess the skills to design scalable, efficient, and reliable data processing solutions with Polars.

Who is this book for?

This book is for data analysts, data scientists, and data engineers who want to learn how to use Polars in their workflows. Working knowledge of the Python programming language is required. Experience working with a DataFrame library such as pandas or PySpark will also be helpful.

What you will learn

  • Read from different data sources and write to various files and databases
  • Apply aggregations, window functions, and string manipulations
  • Perform common data tasks such as handling missing values and performing list and array operations
  • Discover how to reshape and tidy your data by pivoting, joining, and concatenating
  • Analyze your time series data in Python Polars
  • Create better workflows with testing and debugging

Product Details

Publication date: Aug 23, 2024
Length: 394 pages
Edition: 1st
Language: English
ISBN-13: 9781805121152





Table of Contents

14 Chapters
Chapter 1: Getting Started with Python Polars
Chapter 2: Reading and Writing Files
Chapter 3: An Introduction to Data Analysis in Python Polars
Chapter 4: Data Transformation Techniques
Chapter 5: Handling Missing Data
Chapter 6: Performing String Manipulations
Chapter 7: Working with Nested Data Structures
Chapter 8: Reshaping and Tidying Data
Chapter 9: Time Series Analysis
Chapter 10: Interoperability with Other Python Libraries
Chapter 11: Working with Common Cloud Data Sources
Chapter 12: Testing and Debugging in Polars
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 5 out of 5 (5 Ratings)
5 star: 100%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 0%
george baptista Oct 29, 2024
Rated 5 stars
"Polars Cookbook" is a great, practical resource to learn Polars. It has plenty of good examples and opportunities to work through the nuances of various Polars operations. Since this is a "cookbook"-style book, the emphasis is on practical and straightforward-to-use content. The material is organized around common real-world problems, and provides useful solutions. The code snippets are clear, clean, and easily understandable. I particularly found useful Chapter 7 (Working with Nested Data Structures) and Chapter 8 (Reshaping and Tidying Data); for me those two chapters alone were worth the price of the book. All in all, I highly recommend this book to anyone interested in a hands-on approach to learning Polars.
Amazon Verified review
anon Sep 29, 2024
Rated 5 stars
Polars Cookbook is an excellent guide to getting started with Polars. When I expressed my frustration with learning Pandas to a friend, they gave me a short introduction to Polars and I found the syntax to be exactly what I was looking for. However, I still felt that I needed a more structured introduction to Polars that went a bit deeper. Polars Cookbook fit that need, and after a few chapters I felt ready to take on my first project using Polars. I'd recommend this book to anyone who wants a quick, no-fluff guide to getting started in Polars!
Amazon Verified review
Daigo Tanaka Sep 29, 2024
Rated 5 stars
As a Polars newbie, I love Polars Cookbook because I can use it first as a step-by-step tutorial and then as a reference later. The book is thoughtfully organized to be useful both ways. On the table of topics, I loved seeing how it progressed seamlessly from the basic topics to more advanced topics. Starting from how to set up Polars, the book covers end-to-end topics for data analysts and engineers, from the key concepts that make Polars performant, data I/O, and basic data transformation to practical use cases for analytics, such as handling missing data, string manipulation, and so on. It also covers data engineering topics like cloud data integration, testing, and debugging. All sections come with easy-to-understand code examples and data visualizations when applicable. The author (Yuki Kakegawa) is known for Polars tips on LinkedIn for tens of thousands of followers. I always wished his tips were organized for beginners; this book is a dream come true, and I highly recommend it to everyone who wants to get started with Polars (with or without Python Pandas experience!)
Amazon Verified review
Alierwai Oct 08, 2024
Rated 5 stars
I recently had the opportunity to review Yuki's book on the Polars Python library, and I must say that Yuki did a wonderful job putting it together. In addition to reviewing his book, I have been following Yuki on LinkedIn for several months and have learned many useful Polars tricks and tips from him. Yuki and Matt Harrison have reignited my interest in learning Polars. Whether you are a beginner looking to learn Polars or a seasoned user needing a reference, this book is an excellent guide. Yuki not only demonstrates the ins and outs of Polars, but he also shows how to integrate other Python packages with Polars. For example, he showcases how to visualize data with the Plotly package (p. 81). Furthermore, he has included a chapter on testing and debugging, covering topics such as performing unit tests with pytest and using Cualle for data quality testing. After reading this chapter, I implemented data quality testing in my work projects. "Polars Cookbook" is one of the best Polars books I have read so far, and I highly recommend checking it out. Suggestion/Recommendation: I believe this book would benefit from the inclusion of more real-world datasets, especially when developing the second edition.
Amazon Verified review
McCall Sep 23, 2024
Rated 5 stars
The author, Yuki, does a great job taking a complex Python library and distilling it down to consumable pieces. I highly recommend if you’re new to Python programming and want to understand how to process datasets.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use toward owning content.

How can I cancel my subscription?

To cancel your subscription, go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the ‘cancel subscription’ button in the grey box with your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the ‘My Library’ dropdown and selecting ‘Credits’.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult: new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.