Who this book is for
This book is for developers, data analysts, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. It will also be useful for engineers building utilities for data analytics and query engines, or otherwise working with tabular data, regardless of the programming language they use.
What this book covers
Chapter 1, Getting Started with Apache Arrow, introduces you to the basic concepts underpinning Apache Arrow. It explains the Arrow format and the data types it supports, along with how they are represented in memory. Afterward, you'll set up your development environment and run some simple code examples showing the basic operation of the Arrow libraries.
Chapter 2, Working with Key Arrow Specifications, continues your introduction to Apache Arrow by explaining how to read both local and remote data files using different formats. You'll learn how to integrate Arrow with the Python pandas library and how to utilize the zero-copy aspects of Arrow to share memory for performance.
Chapter 3, Data Science with Apache Arrow, wraps up our initial overview by showing how Arrow can enhance data science workflows. This includes practical examples of using Arrow with Apache Spark and Jupyter, along with using Arrow-formatted data to create a chart, followed by a brief discussion of Open Database Connectivity (ODBC) and an end-to-end demonstration of ingesting Arrow-formatted data into an Elasticsearch index and then querying it.
Chapter 4, Format and Memory Handling, discusses the relationships between Apache Arrow and Parquet, Feather, Protocol Buffers, JSON, and CSV data, along with when and why to use these different formats. Following this, the Arrow IPC format is introduced and described, along with an explanation of using memory mapping to further improve performance.
Chapter 5, Crossing the Language Barrier with the Arrow C Data API, introduces the titular C Data API for efficiently passing Apache Arrow data between different language runtimes. This chapter covers the struct definitions that make up the interface and describes the use cases where it is beneficial.
Chapter 6, Leveraging the Arrow Compute APIs, describes how to utilize the Arrow Compute APIs in both C++ and Python. You'll learn when and why you should use the Compute APIs to perform analytics rather than implement something yourself.
Chapter 7, Using the Arrow Datasets API, demonstrates querying, filtering, and otherwise interacting with multi-file datasets that may span multiple sources. Partitioned datasets are also covered, along with utilizing the Arrow Compute APIs to perform streaming filtering and other operations on the data.
Chapter 8, Exploring Apache Arrow Flight RPC, examines the Flight RPC protocol and its benefits. You will be walked through building a simple Flight server and client in multiple languages to produce and consume tabular data.
Chapter 9, Powered By Apache Arrow, provides a few examples of current real-world usage of Arrow, such as Dremio and Spice.ai.
Chapter 10, How to Leave Your Mark on Arrow, provides a brief introduction to contributing to open source in general and, specifically, to the Arrow project itself. You will be walked through finding starter issues, setting up your first pull request to make a contribution, and what to expect when doing so. To that end, this chapter also contains instructions on locally building the Arrow C++, Python, and Go libraries to test your contribution.
Chapter 11, Future Development and Plans, wraps up the book by examining features that are still in heavy development at the time of writing. FlightSQL, DataFusion, and Substrait are each briefly explained, along with what to look forward to and, potentially, contribute to. Finally, there are some parting words and a challenge from me to you.