Spark for Data Science

Spark for Data Science: Analyze your data and delve deep into the world of machine learning with the latest Spark version, 2.0

Authors: Duvvuri, Singhal

eBook | Sep 2016 | 344 pages | 1st Edition

eBook: £28.99 (£32.99)
Paperback: £41.99
Subscription: Free Trial (renews at $19.99 p/m)


Spark for Data Science

Chapter 2. The Spark Programming Model

Large-scale data processing using thousands of nodes with built-in fault tolerance has become widespread due to the availability of open source frameworks, with Hadoop being a popular choice. These frameworks are quite successful in executing specific tasks such as Extract, Transform, and Load (ETL) and storage applications that deal with web-scale data. However, developers were left with a myriad of tools to work with, along with the well-established Hadoop ecosystem. There was a need for a single, general-purpose development platform that caters to batch, streaming, interactive, and iterative requirements. This was the motivation behind Spark.

The previous chapter outlined the big data analytics challenges and how Spark addressed most of them at a very high level. In this chapter, we will examine the design goals and choices involved in the making of Spark to get a clearer understanding of its suitability as a data science platform for big...

The programming paradigm

For Spark to address the big data challenges and serve as a platform for data science and other scalable applications, it was built with well-thought-out design considerations and language support.

Spark provides APIs designed so that a wide variety of application developers can create Spark-based applications using standard API interfaces. It offers APIs for the Scala, Java, R, and Python programming languages, as explained in the following sections.

Supported programming languages

With built-in support for so many languages, Spark can be used interactively through a shell, otherwise known as a Read-Evaluate-Print Loop (REPL), in a way that will feel familiar to developers of any of these languages. Developers can use the language of their choice, leverage existing libraries, and seamlessly interact with Spark and its ecosystem. Let us look at the languages supported by Spark and how they fit into the Spark ecosystem.
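Before we do, here is a brief sketch of that interactive workflow in the Python shell (a hedged example, assuming a local installation launched with bin/pyspark; the shell predefines a SparkContext bound to the variable sc):

    >>> rdd = sc.parallelize([1, 2, 3, 4, 5])    # distribute a local collection
    >>> rdd.map(lambda x: x * x).sum()           # a transformation followed by an action
    55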

Scala

Spark itself is written in Scala, a Java Virtual Machine (JVM)...

The Spark engine

To program with Spark, a basic understanding of Spark components is needed. In this section, some of the important Spark components along with their execution mechanism will be explained so that developers and data scientists can write programs and build applications.

Before getting into the details, we suggest you take a look at the following diagram so that the descriptions of the Spark gears are more comprehensible as you read further:

[Figure: The Spark engine]

Driver program

The Spark shell is an example of a driver program. A driver program is a process that runs in the JVM and executes the user's main function. It holds a SparkContext object, which is the connection to the underlying cluster manager. A Spark application is initiated when the driver starts, and it completes when the driver stops. Through its SparkContext instance, the driver coordinates all processes within a Spark application.
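The shells create this SparkContext for you; in a standalone application the driver constructs it itself. The sketch below is illustrative only (the application name and the local[*] master URL are assumptions, not values taken from the book):

    # A minimal standalone driver program (illustrative sketch).
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("DriverExample").setMaster("local[*]")
    sc = SparkContext(conf=conf)        # connects to the underlying cluster manager

    rdd = sc.parallelize(range(100))    # all work is coordinated by this driver
    print(rdd.sum())                    # 4950

    sc.stop()                           # stopping the driver ends the Spark application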

Primarily, an RDD lineage Directed Acyclic Graph (DAG) is built on the driver side with data sources...

The RDD API

The RDD is a read-only, partitioned, fault-tolerant collection of records. From a design perspective, there was a need for a single data structure abstraction that hides the complexity of dealing with a wide variety of data sources, be it HDFS, other filesystems, RDBMS, NoSQL data stores, or any other source. The user should be able to define an RDD from any of these sources. The goal was to support a wide array of operations and let users compose them in any order.

RDD basics

In Spark's programming interface, each dataset is represented as an object called an RDD. Spark provides two ways of creating RDDs: one is to parallelize an existing collection, and the other is to reference a dataset in an external storage system such as a filesystem.

An RDD is built from one or more data sources, possibly after a series of transformations involving several operators. Every RDD or RDD partition knows how to recreate itself in case of failure, because it carries the log of transformations...
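As a small, hedged illustration of lineage (assuming a shell with the predefined sc), toDebugString() prints the chain of transformations that Spark would replay to rebuild a lost partition:

    # Sketch: inspecting an RDD's lineage, i.e. its recorded transformations.
    base = sc.parallelize(range(1000), 4)              # an RDD with 4 partitions
    derived = base.filter(lambda x: x % 2 == 0).map(lambda x: x * 10)
    print(derived.toDebugString())                     # shows the transformation chain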

RDD operations

Spark programming usually starts by choosing a suitable interface that you are comfortable with. If you intend to do interactive data analysis, then a shell prompt would be the obvious choice. However, choosing a Python shell (PySpark) or Scala shell (Spark-Shell) depends on your proficiency with these languages to some extent. If you are building a full-blown scalable application then proficiency matters a great deal, so you should develop the application in your language of choice between Scala, Java, and Python, and submit it to Spark. We will discuss this aspect in more detail later in the book.

Creating RDDs

In this section, we will use both a Python shell (PySpark) and a Scala shell (Spark-Shell) to create an RDD. Both of these shells have a predefined, interpreter-aware SparkContext that is assigned to a variable sc.

Let us get started with some simple code examples. Note that the code assumes the current working directory is Spark's home directory. The following...
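The excerpt is cut off at this point, so the following is an illustrative sketch rather than the book's own listing. It shows both creation routes using the shell's predefined sc; the file name is simply a convenient example, since README.md ships in Spark's home directory:

    # 1) Parallelize an existing in-memory collection.
    words = sc.parallelize(["spark", "for", "data", "science"])
    print(words.count())                # 4

    # 2) Reference a dataset in an external storage system (here, a local file).
    lines = sc.textFile("README.md")    # any text file reachable from the workers
    print(lines.first())                # the first line of the file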


Key benefits

  • Perform data analysis and build predictive models on huge datasets by leveraging Apache Spark
  • Learn to integrate data science algorithms and techniques with the fast and scalable computing features of Spark to address big data challenges
  • Work through practical examples on real-world problems with sample code snippets

Description

This is the era of Big Data. The term ‘Big Data’ implies big innovation and enables a competitive advantage for businesses. Apache Spark was designed to perform Big Data analytics at scale, so it is equipped with the necessary algorithms and supports multiple programming languages. Whether you are a technologist, a data scientist, or a beginner to Big Data analytics, this book will provide you with all the skills necessary to perform statistical data analysis, data visualization, and predictive modeling, and to build scalable data products or solutions using Python, Scala, and R. With ample case studies and real-world examples, Spark for Data Science will help you ensure the successful execution of your data science projects.

Who is this book for?

This book is for anyone who wants to leverage Apache Spark for data science and machine learning. If you are a technologist who wants to expand your knowledge to perform data science operations in Spark, or a data scientist who wants to understand how algorithms are implemented in Spark, or a newbie with minimal development experience who wants to learn about Big Data Analytics, this book is for you!

What you will learn

  • Consolidate, clean, and transform your data acquired from various data sources
  • Perform statistical analysis of data to find hidden insights
  • Explore graphical techniques to see what your data looks like
  • Use machine learning techniques to build predictive models
  • Build scalable data products and solutions
  • Start programming using the RDD, DataFrame and Dataset APIs
  • Become an expert by improving your data analytical skills

Product Details

Publication date : Sep 30, 2016
Length: 344 pages
Edition : 1st
Language : English
ISBN-13 : 9781785884771

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want

Packt Subscriptions

See our plans and pricing
$19.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive Early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

$199.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive Early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just £5 each
  • Exclusive print discounts

$279.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive Early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just £5 each
  • Exclusive print discounts

Frequently bought together


Apache Spark Machine Learning Blueprints: £32.99
Fast Data Processing with Spark 2: £32.99
Spark for Data Science: £41.99
Total: £107.97

Table of Contents

11 Chapters
1. Big Data and Data Science – An Introduction
2. The Spark Programming Model
3. Introduction to DataFrames
4. Unified Data Access
5. Data Analysis on Spark
6. Machine Learning
7. Extending Spark with SparkR
8. Analyzing Unstructured Data
9. Visualizing Big Data
10. Putting It All Together
11. Building Data Science Applications

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect the rights of us as publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.