Spark for Data Science: Analyze your data and delve deep into the world of machine learning with the latest Spark version, 2.0

Authors: Duvvuri, Singhal
eBook: $9.99 (list price $43.99) | Sep 2016 | 344 pages | 1st Edition
Paperback: $54.99
Subscription: free trial, renews at $19.99/month

What do you get with eBook?

  • Instant access to your digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM free: read whenever, wherever, and however you want

Spark for Data Science

Chapter 1.  Big Data and Data Science – An Introduction

Big data is definitely a big deal! It promises a wealth of opportunities by deriving hidden insights in huge data silos and by opening new avenues to excel in business. Leveraging big data through advanced analytics techniques has become a no-brainer for organizations to create and maintain their competitive advantage.

This chapter explains what big data is all about, the various challenges in big data analysis, and how Apache Spark has emerged as the de facto standard for addressing the computational challenges while also serving as a data science platform.

The topics covered in this chapter are as follows:

  • Big data overview - what is all the fuss about?
  • Challenges with big data analytics - why is it so difficult?
  • Evolution of big data analytics - the data analytics trend
  • Spark for data analytics - the solution to big data challenges
  • The Spark stack - the components that make up a complete big data solution

Big data overview

Much has already been spoken and written about what big data is, but there is no specific standard as such to clearly define it. It is actually a relative term to some extent. Whether small or big, your data can be leveraged only if you can analyze it properly. To make some sense out of your data, the right set of analysis techniques is needed and selecting the right tools and techniques is of utmost importance in data analytics. However, when the data itself becomes a part of the problem and the computational challenges need to be addressed prior to performing data analysis, it becomes a big data problem.

A revolution took place in the World Wide Web, often referred to as Web 2.0, which changed the way people used the Internet. Static web pages became interactive websites and started collecting more and more data. Technological advancements in cloud computing, social media, and mobile computing created an explosion of data. Every digital device started emitting data, and many other sources drove the data deluge. Dataflow from every nook and corner generated voluminous data of many varieties, at speed! The formation of big data in this fashion was a natural phenomenon: it is simply how the World Wide Web evolved, with no deliberate effort involved. That is about the past! If you consider the change that is happening now, and that is going to happen in the future, the volume and speed of data generation are beyond what anyone can anticipate. I am compelled to make such a statement because every device is getting smarter these days, thanks to the Internet of Things (IoT).

The IT trend was such that the technological advancements also facilitated the data explosion. Data storage had experienced a paradigm shift with the advent of cheaper clusters of online storage pools and the availability of commodity hardware with bare minimal price. Storing data from disparate sources in its native form in a single data lake was rapidly gaining over carefully designed data marts and data warehouses. Usage patterns also shifted from rigid schema-driven, RDBMS-based approaches to schema-less, continuously available NoSQL data-store-driven solutions. As a result, the rate of data creation, whether structured, semi-structured, or unstructured, started accelerating like never before.

Organizations are convinced that leveraging big data not only answers specific business questions; it also opens up business possibilities that were previously out of reach and helps address the associated uncertainties. So, apart from the natural data influx, organizations started devising strategies to generate more and more data in order to maintain their competitive advantage and to be future ready. An example helps here: imagine sensors installed on the machines of a manufacturing plant that constantly emit data about the status of the machine parts. With this data, a company can predict when a machine is going to fail, letting it prevent failure or damage, avoid unplanned downtime, and save a lot of money.

Challenges with big data analytics

There are broadly two types of formidable challenges in the analysis of big data. The first challenge is the requirement for a massive computation platform, and once it is in place, the second challenge is to analyze and make sense out of huge data at scale.

Computational challenges

With the increase in data, storage requirements also grew, and data management became a cumbersome task. The latency of disk access, dominated by seek time, became the major bottleneck even though the processor speed and RAM frequency were up to the mark.

Fetching structured and unstructured data from across the gamut of business applications and data silos, consolidating them, and processing them to find useful business insights was challenging. There were only a few applications that could address any one area, or just a few areas of diversified business requirement. However, integrating those applications to address most of the business requirements in a unified way only increased the complexity.

To address these challenges, people turned to distributed computing frameworks with distributed filesystems, for example, Hadoop with the Hadoop Distributed File System (HDFS). This mitigated the disk I/O bottleneck, as data could be read in parallel across a cluster of machines.
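
The effect of parallel block reads can be sketched in plain Python (this is an analogy, not Hadoop or HDFS code): the thread pool and the `blocks` list below are hypothetical stand-ins for cluster workers and HDFS file blocks.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "blocks" standing in for HDFS file blocks spread across machines.
blocks = [list(range(i, i + 4)) for i in range(0, 16, 4)]

def read_block(block):
    # In a real cluster, each worker reads its local block from its own disk.
    return sum(block)

# Read (and partially aggregate) all blocks in parallel,
# then combine the partial results -- the essence of distributed reads.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(read_block, blocks))

total = sum(partials)
print(total)  # 120
```

Each block is processed independently, so total throughput scales with the number of machines rather than being limited by a single disk.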

Distributed computing technologies had existed for decades before, but gained more prominence only after the importance of big data was realized in the industry. So, technology platforms such as Hadoop and HDFS or Amazon S3 became the industry standard. On top of Hadoop, many other solutions such as Pig, Hive, Sqoop, and others were developed to address different kinds of industry requirements such as storage, Extract, Transform, and Load (ETL), and data integration to make Hadoop a unified platform.

Analytical challenges

Analyzing data to find hidden insights has always been challenging, and it is more so with huge datasets because of the additional intricacies involved. Traditional BI and OLAP solutions could not address most of the challenges that arose with big data. As an example, if a dataset had many dimensions, say 100, it became really difficult to compare these variables with one another to draw conclusions, because there would be around C(100, 2) = 4,950 pairwise combinations to examine. Such cases required statistical techniques such as correlation analysis to find the hidden patterns.
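
To see how quickly the number of pairwise comparisons explodes with dimensionality, a short calculation of C(d, 2) for a few dataset widths:

```python
from math import comb

# Pairwise comparisons needed for d dimensions: C(d, 2) = d * (d - 1) / 2.
for d in (10, 100, 1000):
    print(d, comb(d, 2))
# 10 45
# 100 4950
# 1000 499500
```

The quadratic growth is exactly why manual slicing and dicing stops scaling as dimensions accumulate.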

Though there were statistical solutions to many problems, it got really difficult for data scientists or analytics professionals to slice and dice the data to find intelligent insights unless they loaded the entire dataset into a DataFrame in memory. The major roadblock was that most of the general-purpose algorithms for statistical analysis and machine learning were single-threaded and written at a time when datasets were usually not so huge and could fit in the RAM on a single computer. Those algorithms written in R or Python were no longer very useful in their native form to be deployed on a distributed computing environment because of the limitation of in-memory computation.

To address this challenge, statisticians and computer scientists had to work together to rewrite most of the algorithms that would work well in a distributed computing environment. Consequently, a library called Mahout for machine learning algorithms was developed on Hadoop for parallel processing. It had most of the common algorithms that were being used most often in the industry. Similar initiatives were taken for other distributed computing frameworks.

Evolution of big data analytics

The previous section outlined how the computational and data analytics challenges were addressed for big data requirements. It was possible because of the convergence of several related trends such as low-cost commodity hardware, accessibility to big data, and improved data analytics techniques. Hadoop became a cornerstone in many large, distributed data processing infrastructures.

However, people soon started realizing the limitations of Hadoop. Hadoop solutions were best suited only for specific types of big data requirements, such as ETL, and Hadoop gained popularity only for such requirements.

There were scenarios when data engineers or analysts had to perform ad hoc queries on the data sets for interactive data analysis. Every time they ran a query on Hadoop, the data was read from the disk (HDFS-read) and loaded into the memory - which was a costly affair. Effectively, jobs were running at the speed of I/O transfers over the network and cluster of disks, instead of the speed of CPU and RAM.

The following is a pictorial representation of the scenario:

[Figure: Evolution of big data analytics]

One more case where Hadoop's MapReduce model did not fit well was with machine learning algorithms that are iterative in nature. Hadoop MapReduce underperformed, with huge latency in iterative computation. Since MapReduce had a restricted programming model that forbade communication between Map and Reduce workers, intermediate results had to be kept in stable storage. So they were pushed to HDFS, which writes them to disk, instead of being kept in RAM; they then had to be loaded back into memory for the subsequent iteration, and so on for the rest of the iterations. The number of disk I/O operations depended on the number of iterations in the algorithm, and this was topped with the serialization and deserialization overhead of saving and loading the data. Overall, it was computationally expensive and never reached the level of popularity expected of it.

The following is a pictorial representation of this scenario:

[Figure: Evolution of big data analytics]
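
The cost difference can be sketched with a toy Python model (not actual Hadoop or Spark code) that simply counts simulated disk reads; `load_from_disk` is a hypothetical stand-in for an HDFS read:

```python
# Toy model: count simulated disk reads across k iterations of an algorithm.
disk_reads = 0

def load_from_disk(data):
    global disk_reads
    disk_reads += 1          # every call models one round-trip to stable storage
    return list(data)

dataset = range(5)
k = 10

# MapReduce-style: reload from stable storage on every iteration.
state = 0
for _ in range(k):
    state += sum(load_from_disk(dataset))
mapreduce_reads = disk_reads

# Spark-style: read once, keep the working set in memory across iterations.
disk_reads = 0
cached = load_from_disk(dataset)
state = 0
for _ in range(k):
    state += sum(cached)

print(mapreduce_reads, disk_reads)  # 10 1
```

The iterative workload pays one disk read per iteration under the reload model, but only one in total when the data is cached in memory.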

To address this, tailor-made solutions were developed, for example, Google's Pregel, an iterative graph processing framework optimized for inter-process communication and in-memory storage of intermediate results to make runs faster. Similarly, many other solutions were developed or redesigned to best suit the specific needs of the algorithms they served.

Instead of redesigning all the algorithms, a general-purpose engine was needed that most algorithms could leverage for in-memory computation on a distributed computing platform. It was also expected that such a design would result in faster execution of iterative computation and ad hoc data analysis. This is how the Spark project was born at the AMPLab at UC Berkeley.

Spark for data analytics

Soon after the Spark project proved successful at the AMPLab, it was made open source in 2010 and donated to the Apache Software Foundation in 2013. Development is currently led by Databricks.

Spark offers many distinct advantages over other distributed computing platforms, such as:

  • A faster execution platform for both iterative machine learning and interactive data analysis
  • Single stack for batch processing, SQL queries, real-time stream processing, graph processing, and complex data analytics
  • A high-level API for developing a diverse range of distributed applications, hiding the complexities of distributed programming
  • Seamless support for various data sources such as RDBMS, HBase, Cassandra, Parquet, MongoDB, HDFS, Amazon S3, and so on

The following is a pictorial representation of in-memory data sharing for iterative algorithms:

[Figure: In-memory data sharing for iterative algorithms]

Spark hides the complexities of writing core MapReduce jobs and provides most functionality through simple function calls. Because of its simplicity, it caters to a wider audience, including data scientists, data engineers, statisticians, and R/Python/Scala/Java developers.

The Spark architecture broadly consists of a data storage layer, management framework, and API. It is designed to work on top of an HDFS filesystem, and thereby leverages the existing ecosystem. Deployment could be as a standalone server or on distributed computing frameworks such as Apache Mesos or YARN. An API is provided for Scala, the language in which Spark is written, along with Java, R and Python.

The Spark stack

Spark is a general-purpose cluster computing system that empowers other higher-level components to leverage its core engine. It is interoperable with Apache Hadoop, in the sense that it can read and write data from/to HDFS and can also integrate with other storage systems that are supported by the Hadoop API.

While Spark allows other higher-level applications to be built on top of it, it already bundles a few components that are tightly integrated with its core engine and so benefit from future enhancements to the core. These applications ship with Spark to cover the broader sets of requirements in the industry. Most real-world applications need to be integrated across projects to solve specific business problems with a given set of requirements. Apache Spark eases this, as its higher-level components can be combined seamlessly, like libraries in a development project.

Also, with Spark's built-in support for Scala, Java, R and Python, a broader range of developers and data engineers are able to leverage the entire Spark stack:

[Figure: The Spark stack]

Spark core

The Spark core, in a way, is similar to the kernel of an operating system. It is the general execution engine, which is fast as well as fault tolerant. The entire Spark ecosystem is built on top of this core engine. It is mainly designed to do job scheduling, task distribution, and monitoring of jobs across worker nodes. It is also responsible for memory management, interacting with various heterogeneous storage systems, and various other operations.

The primary building block of Spark core is the Resilient Distributed Dataset (RDD), which is an immutable, fault-tolerant collection of elements. Spark can create RDDs from a variety of data sources such as HDFS, local filesystems, Amazon S3, other RDDs, NoSQL data stores such as Cassandra, and so on. They are resilient in the sense that they automatically rebuild on failure. RDDs are built through lazy parallel transformations. They may be cached and partitioned, and may or may not be materialized.

The entire Spark core engine may be viewed as a set of simple operations on distributed datasets. All the scheduling and execution of jobs in Spark is done based on the methods associated with each RDD. Also, the methods associated with each RDD define their own ways of distributed in-memory computation.
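
The lazy-transformation idea behind RDDs can be imitated in plain Python with generators (this is an analogy, not the Spark API): transformations build a chain, and nothing is computed until an action forces evaluation.

```python
# Stand-in for an RDD's elements.
data = range(1, 7)

evaluated = []

def traced_square(x):
    evaluated.append(x)  # record when work actually happens
    return x * x

squares = (traced_square(x) for x in data)   # "transformation": lazy, no work yet
evens = (s for s in squares if s % 2 == 0)   # another lazy transformation

assert evaluated == []        # no element has been touched so far
result = sum(evens)           # "action": forces the whole chain to run
print(result)  # 4 + 16 + 36 = 56
```

In Spark, the same deferral lets the scheduler see the whole chain of transformations before executing anything, so it can plan partitioning and caching across the cluster.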

Spark SQL

This module of Spark is designed to query, analyze, and perform operations on structured data. It is a very important component of the Spark stack because most organizational data is structured, even though unstructured data is growing rapidly. Acting as a distributed query engine, it enables Hadoop Hive queries to run up to 100 times faster without any modification. Apart from Hive, it also supports Apache Parquet (an efficient columnar storage format), JSON, and other structured data formats. Spark SQL enables running SQL queries alongside complex programs written in Python, Scala, and Java.

Spark SQL provides a distributed programming abstraction called DataFrames, previously known as SchemaRDD, which had fewer associated functions. DataFrames are distributed collections of named columns, analogous to SQL tables or Python's pandas DataFrames. They can be constructed from a variety of data sources that carry schemas, such as Hive, Parquet, JSON, other RDBMS sources, and also from Spark RDDs.

Spark SQL can be used for ETL processing across different formats and then running ad hoc analysis. Spark SQL comes with an optimizer framework called Catalyst that can transform SQL queries for better efficiency.
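
As an analogy for declarative querying over structured data (using Python's built-in sqlite3 rather than Spark SQL itself; the table and data are invented for illustration), the select-filter-sort pattern looks like this:

```python
import sqlite3

# A throwaway in-memory table standing in for a structured data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ann", 34), ("Bob", 29), ("Cid", 41)])

# A declarative query: the engine decides how to execute it.
# In Spark SQL, the Catalyst optimizer plays this planning role.
rows = conn.execute(
    "SELECT name FROM people WHERE age > 30 ORDER BY name").fetchall()
print(rows)  # [('Ann',), ('Cid',)]
```

The point of the analogy: you state *what* you want, and an optimizer framework (Catalyst, in Spark SQL's case) decides *how* to compute it efficiently.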

Spark streaming

The processing window for enterprise data is becoming shorter than ever. To address the industry's real-time processing requirements, this component of Spark was designed to be fault tolerant as well as scalable. Spark enables real-time analytics on live data streams by supporting data analysis, machine learning, and graph processing on them.

It provides an API called Discretised Stream (DStream) to manipulate the live streams of data. The live streams of data are sliced up into small batches of, say, x seconds. Spark treats each batch as an RDD and processes them as basic RDD operations. DStreams can be created out of live streams of data from HDFS, Kafka, Flume, or any other source which can stream data on the TCP socket. By applying some higher-level operations on DStreams, other DStreams can be produced.
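
The micro-batching idea can be sketched in plain Python (this is not the DStream API): slice the incoming stream into fixed-size batches and apply an ordinary batch operation to each, just as Spark treats each micro-batch as an RDD.

```python
# Events arriving over time; batch_size stands in for an x-second window.
stream = list(range(10))
batch_size = 3

# Slice the stream into micro-batches...
batches = [stream[i:i + batch_size]
           for i in range(0, len(stream), batch_size)]

# ...and process each batch with an ordinary batch operation (here, sum).
results = [sum(batch) for batch in batches]

print(batches)   # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
print(results)   # [3, 12, 21, 9]
```

Because every micro-batch is processed with the same batch machinery, the streaming path reuses all the fault tolerance and scheduling of core Spark.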

The final result of Spark streaming can either be written back to the various data stores supported by Spark or can be pushed to any dashboard for visualization.

MLlib

MLlib is the built-in machine learning library in the Spark stack, introduced in Spark 0.8. Its goal is to make machine learning scalable and easy. Developers can seamlessly combine it with Spark SQL, Spark Streaming, and GraphX in their programming language of choice, be it Java, Python, or Scala. MLlib provides the functions needed for various statistical analyses such as correlations, sampling, and hypothesis testing. It also has broad coverage of applications and algorithms in classification, regression, collaborative filtering, clustering, and decomposition.

The machine learning workflow involves collecting and preprocessing data, building and deploying the model, evaluating the results, and refining the model. In the real world, the preprocessing steps take up significant effort. These are typically multi-stage workflows involving expensive intermediate read/write operations. Often, these processing steps may be performed multiple times over a period of time. A new concept called ML Pipelines was introduced to streamline these preprocessing steps. A Pipeline is a sequence of transformations where the output of one stage is the input of another, forming a chain. The ML Pipeline leverages Spark and MLlib and enables developers to define reusable sequences of transformations.
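
A minimal sketch of the pipeline idea in plain Python (not the MLlib Pipeline API): each stage is a transformation whose output feeds the next, and the assembled chain is itself a reusable unit. The stage names here are illustrative.

```python
# Three toy preprocessing stages over a list of text rows.
def lowercase(rows):
    return [r.lower() for r in rows]

def strip_spaces(rows):
    return [r.strip() for r in rows]

def drop_empty(rows):
    return [r for r in rows if r]

def make_pipeline(*stages):
    # Chain stages: the output of one becomes the input of the next.
    def run(rows):
        for stage in stages:
            rows = stage(rows)
        return rows
    return run

clean = make_pipeline(lowercase, strip_spaces, drop_empty)
print(clean(["  Hello ", "WORLD", "   "]))  # ['hello', 'world']
```

Defining the chain once and reusing it is exactly what makes multi-stage preprocessing repeatable across training runs.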

GraphX

GraphX is a thin-layered unified graph analytics framework on Spark. It was designed to be a general-purpose distributed dataflow framework in place of specialized graph processing frameworks. It is fault tolerant and also exploits in-memory computation.

GraphX is an embedded graph processing API for manipulating graphs (for example, social networks) and to do graph parallel computation (for example, Google's Pregel). It combines the advantages of both graph-parallel and data-parallel systems on the Spark stack to unify exploratory data analysis, iterative graph computation, and ETL processing. It extends the RDD abstraction to introduce the Resilient Distributed Graph (RDG), which is a directed graph with properties associated to each of its vertices and edges.

GraphX includes a decently large collection of graph algorithms, such as PageRank, K-Core, Triangle Count, LDA, and so on.
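
For a flavor of the kind of iterative graph computation GraphX supports, here is a minimal pure-Python PageRank on a three-node toy graph (not GraphX code; the damping factor 0.85 is the conventional choice, and the graph is invented for illustration):

```python
# Toy directed graph: node -> list of outgoing links.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {node: 1.0 for node in links}

for _ in range(20):  # a fixed number of iterations, as in the batch setting
    contribs = {node: 0.0 for node in links}
    for node, outs in links.items():
        share = ranks[node] / len(outs)  # split rank among outgoing links
        for out in outs:
            contribs[out] += share
    ranks = {node: 0.15 + 0.85 * c for node, c in contribs.items()}

# "c" receives links from both "a" and "b", so it should rank highest.
best = max(ranks, key=ranks.get)
print(best)  # c
```

Each iteration reuses the rank vector from the previous one, which is precisely the in-memory iterative pattern that made MapReduce a poor fit and Spark a natural one.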

SparkR

The SparkR project was started to combine the statistical analysis and machine learning capabilities of R with the scalability of Spark. It addresses R's key limitation: it can process only as much data as fits in the memory of a single machine. R programs can now scale in a distributed setting through SparkR.

SparkR is actually an R Package that provides an R shell to leverage Spark's distributed computing engine. With R's rich set of built-in packages for data analytics, data scientists can analyze large datasets interactively at scale.

Summary

In this chapter, we briefly covered what big data is all about. We then discussed the computational and analytical challenges involved in big data analytics. Later, we looked at how the analytics space in the context of big data has evolved over a period of time and what the trend has been. We also covered how Spark addressed most of the big data analytics challenges and became a general-purpose unified analytics platform for data science as well as parallel computation. At the end of this chapter, we just gave you a heads-up on the Spark stack and its components.

In the next chapter, we will learn about the Spark programming model. We will take a deep dive into the basic building block of Spark, the RDD. We will also learn how to program with the RDD API in Scala and Python.


Key benefits

  • Perform data analysis and build predictive models on huge datasets by leveraging Apache Spark
  • Learn to integrate data science algorithms and techniques with the fast and scalable computing features of Spark to address big data challenges
  • Work through practical examples on real-world problems with sample code snippets

Description

This is the era of Big Data. The words ‘Big Data’ imply big innovation and enable a competitive advantage for businesses. Apache Spark was designed to perform Big Data analytics at scale, so it is equipped with the necessary algorithms and supports multiple programming languages. Whether you are a technologist, a data scientist, or a beginner in Big Data analytics, this book will provide you with the skills necessary to perform statistical data analysis and data visualization, build predictive models, and create scalable data products or solutions using Python, Scala, and R. With ample case studies and real-world examples, Spark for Data Science will help you ensure the successful execution of your data science projects.

Who is this book for?

This book is for anyone who wants to leverage Apache Spark for data science and machine learning. If you are a technologist who wants to expand your knowledge to perform data science operations in Spark, or a data scientist who wants to understand how algorithms are implemented in Spark, or a newbie with minimal development experience who wants to learn about Big Data Analytics, this book is for you!

What you will learn

  • Consolidate, clean, and transform your data acquired from various data sources
  • Perform statistical analysis of data to find hidden insights
  • Explore graphical techniques to see what your data looks like
  • Use machine learning techniques to build predictive models
  • Build scalable data products and solutions
  • Start programming using the RDD, DataFrame and Dataset APIs
  • Become an expert by improving your data analytical skills

Product Details

Publication date : Sep 30, 2016
Length: 344 pages
Edition : 1st
Language : English
ISBN-13 : 9781785884771



Table of Contents

11 Chapters
1. Big Data and Data Science – An Introduction
2. The Spark Programming Model
3. Introduction to DataFrames
4. Unified Data Access
5. Data Analysis on Spark
6. Machine Learning
7. Extending Spark with SparkR
8. Analyzing Unstructured Data
9. Visualizing Big Data
10. Putting It All Together
11. Building Data Science Applications
