Learning PySpark

Build data-intensive applications locally and deploy at scale using the combined powers of Python and Spark 2.0

Authors (2): Denny Lee, Tomasz Drabas
Product type: Paperback
Published: February 2017
Publisher: Packt
ISBN-13: 9781786463708
Length: 274 pages
Edition: 1st Edition
Table of Contents (13)

Preface
1. Understanding Spark
2. Resilient Distributed Datasets
3. DataFrames
4. Prepare Data for Modeling
5. Introducing MLlib
6. Introducing the ML Package
7. GraphFrames
8. TensorFrames
9. Polyglot Persistence with Blaze
10. Structured Streaming
11. Packaging Spark Applications
Index

Chapter 3. DataFrames

A DataFrame is an immutable distributed collection of data that is organized into named columns, analogous to a table in a relational database. Introduced as the experimental SchemaRDD in Apache Spark 1.0, it was renamed to DataFrame as part of the Apache Spark 1.3 release. For readers who are familiar with the pandas DataFrame in Python or the data.frame in R, a Spark DataFrame is a similar concept in that it lets users work easily with structured data (for example, data tables); there are some differences as well, so please temper your expectations.
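As a minimal sketch of this idea (the session setup and the column names below are illustrative assumptions, not samples from this chapter), a small DataFrame can be created from an in-memory collection and inspected like a named-column table:

    from pyspark.sql import SparkSession

    # In Spark 2.0, SparkSession is the single entry point for DataFrame work.
    spark = SparkSession.builder.appName("dataframe-intro").getOrCreate()

    # Create a DataFrame from an in-memory list of rows; the column names
    # ("id", "name", "age") are illustrative.
    df = spark.createDataFrame(
        [(1, "Katie", 19), (2, "Michael", 22), (3, "Simone", 23)],
        ["id", "name", "age"],
    )

    # Like a relational table, the DataFrame carries a schema of named columns.
    df.printSchema()
    df.show()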

Imposing a structure onto a distributed collection of data allows Spark users to query structured data with Spark SQL or with expression methods (instead of lambdas). In this chapter, we will include code samples using both methods. Structuring your data also allows the Apache Spark engine – specifically, the Catalyst Optimizer – to significantly improve the performance of Spark...
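For example (a hedged sketch reusing the illustrative df DataFrame from above; the view name "people" is likewise an assumption), the same question can be asked through either method, and Catalyst compiles both into the same optimized plan:

    # Expression-method (DataFrame API) style: column expressions, no lambdas.
    df.select("name", "age").filter(df.age > 20).show()

    # Spark SQL style: register a temporary view, then query it with SQL.
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age > 20").show()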
