Learning PySpark

What is Apache Spark?

Apache Spark is a powerful open-source distributed querying and processing engine. It provides the flexibility and extensibility of MapReduce, but at significantly higher speeds: up to 100 times faster than Apache Hadoop when the data is stored in memory, and up to 10 times faster when accessing disk.

Apache Spark allows the user to read, transform, and aggregate data, as well as train and deploy sophisticated statistical models with ease. The Spark APIs are accessible in Java, Scala, Python, R, and SQL. Apache Spark can be used to build applications, package them up as libraries to be deployed on a cluster, or perform quick analytics interactively through notebooks (such as Jupyter, Spark-Notebook, Databricks notebooks, and Apache Zeppelin).
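
As a quick, concrete illustration of that read-transform-aggregate cycle, here is a minimal PySpark sketch; the flights.csv file and its carrier and arr_delay columns are hypothetical placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Entry point for Spark 2.0's DataFrame API.
spark = SparkSession.builder.appName("spark-intro").getOrCreate()

# Read: load a CSV file into a DataFrame (flights.csv is a made-up example).
flights = spark.read.csv("flights.csv", header=True, inferSchema=True)

# Transform: keep only the delayed flights.
delayed = flights.filter(F.col("arr_delay") > 0)

# Aggregate: average arrival delay per carrier.
avg_delay = delayed.groupBy("carrier").agg(
    F.avg("arr_delay").alias("avg_arr_delay")
)

avg_delay.show()
```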

Apache Spark exposes a host of libraries familiar to data analysts, data scientists, or researchers who have worked with Python's pandas or R's data.frames or data.tables. It is important to note that while Spark DataFrames will feel familiar to users of pandas or data.frames / data.tables, there are some differences, so please temper your expectations. Users with more of a SQL background can use that language to shape their data as well. Apache Spark also ships with several already implemented and tuned algorithms, statistical models, and frameworks: MLlib and the ML package for machine learning, GraphX and GraphFrames for graph processing, and Spark Streaming (DStreams and Structured Streaming). Spark allows the user to combine these libraries seamlessly in the same application.
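
To make the DataFrame-versus-SQL point concrete, the following sketch shapes the same data twice, once through the DataFrame API and once through SQL; the people rows and column names are invented for illustration:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-vs-sql").getOrCreate()

# A small, made-up dataset for illustration.
people = spark.createDataFrame(
    [("Alice", 34, "NYC"), ("Bob", 45, "SEA"), ("Carol", 29, "NYC")],
    ["name", "age", "city"],
)

# DataFrame API: reads much like pandas, though evaluation is lazy
# and distributed rather than eager and in-memory on one machine.
people.groupBy("city").avg("age").show()

# SQL: register the DataFrame as a temporary view and query it directly.
people.createOrReplaceTempView("people")
spark.sql("SELECT city, AVG(age) AS avg_age FROM people GROUP BY city").show()
```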

Apache Spark can easily run locally on a laptop, yet can also easily be deployed in standalone mode, over YARN, or over Apache Mesos, either on your local cluster or in the cloud. It can read from and write to a diverse range of data sources including (but not limited to) HDFS, Apache Cassandra, Apache HBase, and S3:

[Figure: Apache Spark and its ecosystem of data sources. Source: Apache Spark is the Smartphone of Big Data, http://bit.ly/1QsgaNj]

Note

For more information, refer to Apache Spark is the Smartphone of Big Data at http://bit.ly/1QsgaNj.
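
The sketch below illustrates that source flexibility under stated assumptions: all paths and the S3 bucket name are hypothetical, S3 access requires the appropriate Hadoop S3 filesystem libraries on the classpath, and the Cassandra read assumes the external spark-cassandra-connector package is available:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("io-sketch").getOrCreate()

# HDFS (or the local file system, depending on the configured default):
# the path here is a placeholder.
events = spark.read.json("hdfs:///data/events.json")

# Amazon S3: my-bucket is a hypothetical bucket, and the s3a:// scheme
# assumes the Hadoop S3A filesystem is on the classpath.
events.write.mode("overwrite").parquet("s3a://my-bucket/events-parquet/")

# Apache Cassandra, via the external spark-cassandra-connector package;
# the keyspace and table names are made up for illustration.
cass = (spark.read
        .format("org.apache.spark.sql.cassandra")
        .options(keyspace="ks", table="events")
        .load())
```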
