Overall architecture

Let's start with a high-level introduction to data architectures: what they do, why they're useful, when they should be used, and how Apache Spark fits in.


At their most general, modern data architectures have four basic characteristics:

  • Data Ingestion
  • Data Lake
  • Data Science
  • Data Access

Let's introduce each of these now, so that we can go into more detail in the later chapters.

Data Ingestion

Traditionally, data is ingested under strict rules and formatted according to a predetermined schema. This process is known as Extract, Transform, Load (ETL), and is still a very common practice supported by a large array of commercial tools as well as some open source products.


The ETL approach favors performing up-front checks, which ensure data quality and schema conformance, in order to simplify follow-on online analytical processing. It is particularly suited to handling data with a specific set of characteristics, namely, those that relate to a classical entity-relationship model. However, it is not suitable for all scenarios.

During the big data revolution, demand exploded for structured, semi-structured, and unstructured data, leading to the creation of systems that had to handle data with a different set of characteristics. These came to be defined by the 4 Vs: Volume, Variety, Velocity, and Veracity (http://www.ibmbigdatahub.com/infographic/four-vs-big-data). Traditional ETL methods floundered under this new burden: they simply required too much time to process the vast quantities of data, or were too rigid in the face of change. A different approach emerged: the schema-on-read paradigm. Here, data is ingested in its original form (or at least very close to it), and the details of normalization, validation, and so on are handled at the time of analytical processing.

This is typically referred to as Extract, Load, Transform (ELT), a reordering of the letters of the traditional approach, as shown in the following figure:

[Figure: Data Ingestion]

This approach values the delivery of data in a timely fashion, delaying the detailed processing until it is absolutely required. In this way, a data scientist can gain access to the data immediately, searching for insight using a range of techniques not available with a traditional approach.

Although we only provide a high-level overview here, this approach is so important that we will explore it further throughout the book by implementing various schema-on-read algorithms. We will assume the ELT method for data ingestion; that is to say, we encourage loading data at the user's convenience. This may be every n minutes, overnight, or during times of low usage. The data can then be checked for integrity, quality, and so forth by running batch processing jobs offline, again at the user's discretion.
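
As a taste of what schema-on-read looks like in practice, here is a minimal sketch using Spark SQL in Scala: raw JSON files are assumed to have been landed in the lake untouched, and a schema is only applied when the data is read for analysis. The paths and field names (id, timestamp, payload) are purely illustrative.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._

val spark = SparkSession.builder()
  .appName("schema-on-read-sketch")
  .master("local[*]")                       // local mode for illustration only
  .getOrCreate()

// The schema is declared at read time; it was not enforced at ingestion time
val eventSchema = StructType(Seq(
  StructField("id",        StringType,    nullable = true),
  StructField("timestamp", TimestampType, nullable = true),
  StructField("payload",   StringType,    nullable = true)
))

// Raw JSON was loaded into the lake as-is; structure is imposed only here
val events = spark.read
  .schema(eventSchema)
  .json("hdfs:///lake/raw/events/")

// Integrity and quality checks run as an offline batch job, at the user's discretion
val valid = events.filter(col("id").isNotNull && col("timestamp").isNotNull)
println(s"valid records: ${valid.count()}")

spark.stop()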

Data Lake

A data lake is a convenient, ubiquitous store of data. It is useful because it provides a number of key benefits, primarily:

  • Reliable storage
  • Scalable data processing capability

Let's take a brief look at each of these.

Reliable storage

There is a good choice of underlying storage implementations for a data lake; these include the Hadoop Distributed File System (HDFS), MapR-FS, and Amazon AWS S3.

Throughout the book, HDFS will be the assumed storage implementation; the authors use a distributed Spark setup, deployed on Yet Another Resource Negotiator (YARN) running inside a Hortonworks HDP environment. Therefore, HDFS is the technology used, unless otherwise stated. If you are not familiar with any of these technologies, they are discussed later in this chapter.

In any case, it's worth knowing that Spark references HDFS locations natively, accesses local file locations via the prefix file:// and references S3 locations via the prefix s3a://.
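
As a minimal sketch of how these prefixes look from Spark (the paths below are placeholders, and the S3 read additionally assumes the hadoop-aws package and credentials are configured):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("storage-prefixes-sketch")
  .master("local[*]")                        // local mode for illustration only
  .getOrCreate()

// HDFS locations are referenced natively (or explicitly via hdfs://namenode:8020/...)
val fromHdfs  = spark.read.textFile("/data/raw/events.txt")

// Local filesystem locations use the file:// prefix
val fromLocal = spark.read.textFile("file:///tmp/events.txt")

// S3 locations use the s3a:// prefix
val fromS3    = spark.read.textFile("s3a://my-bucket/raw/events.txt")

println(fromLocal.count())
spark.stop()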

Scalable data processing capability

Clearly, Apache Spark will be our data processing platform of choice. In addition, as you may recall, Spark allows the user to execute code in their preferred environment, be that local, standalone, YARN, or Mesos, by configuring the appropriate cluster manager via the master URL. This can be done in any one of three places (a short sketch follows the list):

  • Using the --master option when issuing the spark-submit command
  • Adding the spark.master property in the conf/spark-defaults.conf file
  • Invoking the setMaster method on the SparkConf object
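
To make the three options concrete, here is a brief sketch. The master URLs, file paths, and class names are placeholders, and only one of the three mechanisms is needed in practice.

// Option 1: on the command line when submitting the application
//   spark-submit --master yarn --class com.example.Main my-app.jar
//
// Option 2: in conf/spark-defaults.conf
//   spark.master    yarn
//
// Option 3: programmatically, on the SparkConf object
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("master-url-sketch")
  .setMaster("local[*]")        // e.g. "yarn", "spark://host:7077", "mesos://host:5050"

val sc = new SparkContext(conf)
println(s"Running against master: ${sc.master}")
sc.stop()

Note that a value set programmatically on the SparkConf takes precedence over the spark-submit flag, which in turn takes precedence over the defaults file.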

If you're not familiar with HDFS, or if you do not have access to a cluster, then you can run a local Spark instance using the local filesystem, which is useful for testing. However, beware that there are often bad behaviors that only appear when executing on a cluster. So, if you're serious about Spark, it's worth investing in a distributed cluster manager; why not try Spark standalone cluster mode, or Amazon AWS EMR? For example, Amazon offers a number of affordable paths to cloud computing; you can explore the idea of spot instances at https://aws.amazon.com/ec2/spot/.

Data science platform

A data science platform provides services and APIs that enable effective data science to take place, including exploratory data analysis, machine learning model creation and refinement, image and audio processing, natural language processing, and text sentiment analysis.

This is the area where Spark really excels, and it forms the primary focus of the remainder of this book. With a robust set of native machine learning libraries, unsurpassed parallel graph processing capabilities, and a strong community, Spark provides truly scalable opportunities for data science.

The remaining chapters will provide insight into each of these areas, including Chapter 6, Scraping Link-Based External Data, Chapter 7, Building Communities, and Chapter 8, Building a Recommendation System.

Data Access

Data in a data lake is most frequently accessed by data engineers and data scientists using Hadoop ecosystem tools such as Apache Spark, Pig, Hive, Impala, or Drill. However, there are times when other users, or even other systems, need access to the data, and the usual tools are either too technical or cannot meet the user's expectations for real-world latency.

In these circumstances, the data often needs to be copied into data marts or index stores so that it may be exposed to more traditional methods, such as a report or dashboard. This process, which typically involves creating indexes and restructuring data for low-latency access, is known as data egress.

Fortunately, Apache Spark has a wide variety of adapters and connectors into traditional databases, BI tools, and visualization and reporting software. Many of these will be introduced throughout the book.
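
For example, here is a hedged sketch of the egress pattern described above: a curated result set is copied out of the lake into a relational data mart through Spark's JDBC data source, so that reports and dashboards can query it with low latency. The JDBC URL, table name, and credentials are placeholders, and the appropriate JDBC driver is assumed to be on the classpath.

import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("data-egress-sketch")
  .master("local[*]")                    // local mode for illustration only
  .getOrCreate()

// A curated, pre-aggregated dataset held in the data lake
val dailySummary = spark.read.parquet("hdfs:///lake/curated/daily_summary/")

// Push it into a traditional database for low-latency reporting access
dailySummary.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://reporting-db:5432/marts")
  .option("dbtable", "daily_summary")
  .option("user", "report_writer")
  .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
  .mode(SaveMode.Overwrite)
  .save()

spark.stop()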
