- Explore Apache Arrow's data types and integration with pandas, Polars, and Parquet
- Work with Arrow libraries such as Flight SQL, the Acero compute engine, and the Dataset APIs for tabular data
- Enhance and accelerate machine learning data pipelines using Apache Arrow and its subprojects
- Purchase of the print or Kindle book includes a free PDF eBook
Apache Arrow is an open source, columnar in-memory data format designed for efficient data processing and analytics. This book harnesses the author’s 15 years of experience to show you a standardized way to work with tabular data across various programming languages and environments, enabling high-performance data processing and exchange.
This updated second edition gives you an overview of the Arrow format, highlighting its versatility and benefits through real-world use cases. It guides you through enhancing data science workflows, optimizing performance with Apache Parquet and Spark, and ensuring seamless data translation. You’ll explore data interchange and storage formats, and Arrow's relationships with Parquet, Protocol Buffers, FlatBuffers, JSON, and CSV. You’ll also discover Apache Arrow subprojects, including Flight, Flight SQL, ADBC (Arrow Database Connectivity), and nanoarrow. You’ll learn to streamline machine learning workflows, use the Arrow Dataset APIs, and integrate with popular analytical data systems such as Snowflake, Dremio, and DuckDB. The latter chapters offer real-world examples and case studies of products powered by Apache Arrow, giving you practical insights into its applications.
By the end of this book, you’ll have all the building blocks to create efficient and powerful analytical services and utilities with Apache Arrow.
This book is for developers, data engineers, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. Whether you’re building utilities for data analytics and query engines, or building full pipelines with tabular data, this book will help you regardless of your preferred programming language. A basic understanding of data analysis concepts is helpful, but not necessary. Code examples are provided using C++, Python, and Go throughout the book.
- Use Apache Arrow libraries to access data files, both locally and in the cloud
- Understand the zero-copy elements of the Apache Arrow format
- Improve the read performance of data pipelines by memory-mapping Arrow files
- Produce and consume Apache Arrow data efficiently by sharing memory with the C API
- Leverage the Arrow compute engine, Acero, to perform complex operations
- Create Arrow Flight servers and clients for transferring data quickly
- Build the Arrow libraries locally and contribute to the community