In-Memory Analytics with Apache Arrow

You're reading from In-Memory Analytics with Apache Arrow: Accelerate data analytics for efficient processing of flat and hierarchical data structures.

Product type: Paperback
Published: Sep 2024
Publisher: Packt
ISBN-13: 9781835461228
Length: 406 pages
Edition: 2nd Edition
Author: Matthew Topol
Table of Contents (18 chapters)

Preface
Part 1: Overview of What Arrow is, Its Capabilities, Benefits, and Goals
  Chapter 1: Getting Started with Apache Arrow
  Chapter 2: Working with Key Arrow Specifications
  Chapter 3: Format and Memory Handling
Part 2: Interoperability with Arrow: The Power of Open Standards
  Chapter 4: Crossing the Language Barrier with the Arrow C Data API
  Chapter 5: Acero: A Streaming Arrow Execution Engine
  Chapter 6: Using the Arrow Datasets API
  Chapter 7: Exploring Apache Arrow Flight RPC
  Chapter 8: Understanding Arrow Database Connectivity (ADBC)
  Chapter 9: Using Arrow with Machine Learning Workflows
Part 3: Real-World Examples, Use Cases, and Future Development
  Chapter 10: Powered by Apache Arrow
  Chapter 11: How to Leave Your Mark on Arrow
  Chapter 12: Future Development and Plans
Index
Other Books You May Enjoy

More GPU, more speed!

Since the standard utilities and tools for handling ML workflows all operate on tensors, there's a need for a stable, in-memory data structure that can be used for interoperability between those frameworks. The catch is that any protocol for sharing this data also needs to be able to describe which device the memory is allocated on. In the Python data community, one such standard that has been widely adopted is DLPack.
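As a minimal sketch of what that looks like in practice (assuming pyarrow 14+ and NumPy 1.23+, versions I'm assuming expose the DLPack protocol on Arrow arrays and np.from_dlpack respectively), an Arrow array can be handed straight to a DLPack consumer, and the protocol reports which device the underlying buffer lives on:

```python
import numpy as np
import pyarrow as pa

# A CPU-resident Arrow array of doubles with no nulls
arr = pa.array([1.0, 2.0, 3.0, 4.0])

# Producer side of DLPack: the device type and id of the underlying buffer
print(arr.__dlpack_device__())   # e.g. (1, 0) -> kDLCPU, device 0

# Consumer side: NumPy builds a zero-copy view from the DLPack capsule
np_view = np.from_dlpack(arr)
print(np_view)                   # [1. 2. 3. 4.]
```

The same consumer call works for any framework that implements the protocol, which is exactly what makes DLPack useful as a neutral hand-off point between libraries.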

A note about GPUs

While we’ve mentioned GPUs before, I wanted to note something just in case you aren’t familiar with why GPUs improve performance for these workflows. Because GPUs are specialized processors, they are significantly faster and more efficient at specific types of data transformations and computations, particularly tensor and vector math. The difficulty has always been in writing code for them, along with the cost of frequently copying data between the CPU and GPU. Thus, by allowing...
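To make the copy-cost point concrete, here is a small illustrative sketch (not from the book) assuming CuPy and a CUDA-enabled PyTorch build are installed; both implement the DLPack protocol, so a GPU-resident buffer can be handed between them without a round trip through host memory:

```python
import cupy as cp
import torch

# Allocate directly on the GPU
gpu_arr = cp.arange(1_000_000, dtype=cp.float32)

# DLPack reports the device, so the consumer knows no host copy is needed
print(gpu_arr.__dlpack_device__())   # e.g. (2, 0) -> kDLCUDA, device 0

# PyTorch wraps the same GPU buffer: no CPU round trip, no data copy
t = torch.from_dlpack(gpu_arr)
print(t.device)                      # cuda:0

# The memory is shared, so in-place changes are visible on both sides
t += 1
print(gpu_arr[:3])                   # [1. 2. 3.]
```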
