Streaming results
Recall from the Querying multifile datasets section at the beginning of this chapter that streaming was presented as the solution for datasets spread across multiple files that are potentially too large to fit in memory all at once. The examples so far have used the ToTable (C++) or to_table (Python) function to completely materialize the results in memory as a single Arrow table, which obviously won't work if the results are too large to fit into memory. In addition to those functions, the scanner also exposes functions that return iterators for streaming record batches from the query.
To demonstrate streaming, let's use a public AWS S3 bucket hosted by Ursa Labs that contains about 10 years of NYC taxi trip record data in Parquet format. The URI for the dataset is s3://ursa-labs-taxi-data/
. Even in Parquet format, the total size of the data there is around 37 GB,...