Scala Data Analysis Cookbook
Navigate the world of data analysis, visualization, and machine learning with over 100 hands-on Scala recipes

Product type: Paperback
Published: October 2015
Publisher: Packt
ISBN-13: 9781784396749
Length: 254 pages
Edition: 1st Edition
Author: Arun Manivannan
Table of Contents
Preface
1. Getting Started with Breeze
2. Getting Started with Apache Spark DataFrames
3. Loading and Preparing Data – DataFrame
4. Data Visualization
5. Learning from Data
6. Scaling Up
7. Going Further
Index

Submitting jobs to the Spark cluster (local)


Running Spark in distributed mode involves multiple components. In self-contained application mode (the kind of main program we have run throughout this book so far), all of these components run inside a single JVM. The following diagram shows the various components and their roles when a Scala program runs in distributed mode:

As a first step, the RDD graph that we construct using the various operations on our RDD (map, filter, join, and so on) is passed to the Directed Acyclic Graph (DAG) scheduler. The DAG scheduler optimizes the flow and converts all RDD operations into groups of tasks called stages. Generally, all tasks that occur before a shuffle are wrapped into a single stage. Consider operations with a one-to-one mapping between input and output; for example, a map or filter operator yields one output for every input. If a map on an RDD is followed by a filter, the two are generally pipelined (the map and the filter...
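To make the stage boundary concrete, here is a minimal, self-contained sketch (not taken from the book's own code; the object name, application name, and sample data are illustrative). Setting the master to local[*] keeps the driver, the DAG and task schedulers, and the executor in one JVM, matching the self-contained mode described above. The map and filter calls are pipelined into a single stage, while reduceByKey forces a shuffle and therefore a new stage; toDebugString prints the RDD lineage with its shuffle boundaries.

import org.apache.spark.{SparkConf, SparkContext}

object StageSketch {
  def main(args: Array[String]): Unit = {
    // With a local master, every component runs inside this one JVM,
    // as in the self-contained application mode described above.
    val conf = new SparkConf().setAppName("StageSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val words = sc.parallelize(Seq("spark", "scala", "breeze", "spark", "dataframe"))

    // map and filter work on each element independently and need no data from
    // other partitions, so the DAG scheduler pipelines them into the same stage.
    val pairs = words
      .map(word => (word, 1))
      .filter { case (word, _) => word.length > 4 }

    // reduceByKey needs all the values for a key in one place, so it introduces
    // a shuffle and, with it, a new stage.
    val counts = pairs.reduceByKey(_ + _)

    // Print the lineage; the indentation shows where the stage (shuffle) boundary falls.
    println(counts.toDebugString)
    counts.collect().foreach(println)

    sc.stop()
  }
}

Packaged into a JAR, the same program could also be handed to spark-submit; with a local master, no separate cluster needs to be running.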
