Apache Spark Machine Learning Blueprints

You're reading from Apache Spark Machine Learning Blueprints: develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide.

Product type: Paperback
Published in: May 2016
Publisher: Packt
ISBN-13: 9781785880391
Length: 252 pages
Edition: 1st Edition
Author: Alex Liu
Table of Contents (13 chapters)

Preface
1. Spark for Machine Learning
2. Data Preparation for Spark ML
3. A Holistic View on Spark
4. Fraud Detection on Spark
5. Risk Scoring on Spark
6. Churn Prediction on Spark
7. Recommendations on Spark
8. Learning Analytics on Spark
9. City Analytics on Spark
10. Learning Telco Data on Spark
11. Modeling Open Data on Spark
Index

Spark computing for machine learning

With its innovations in RDDs and in-memory processing, Apache Spark has made distributed computing readily accessible to data scientists and machine learning professionals. According to the Apache Spark team, Spark runs on the Mesos cluster manager, which lets it share resources with Hadoop and other applications, and it can read from any Hadoop input source, such as HDFS.
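
As a rough illustration of the point above, the following minimal Scala sketch creates a SparkContext against a Mesos master and loads a text file from HDFS into an RDD. The master URL and the HDFS path are placeholders for this sketch, not values taken from the book.

    import org.apache.spark.{SparkConf, SparkContext}

    object HdfsReadSketch {
      def main(args: Array[String]): Unit = {
        // Placeholder master URL: a Mesos cluster shared with other applications.
        // Use "local[*]" instead to try this on a single machine.
        val conf = new SparkConf()
          .setAppName("HdfsReadSketch")
          .setMaster("mesos://zk://mesos-master:2181/mesos")
        val sc = new SparkContext(conf)

        // Read a text file straight from HDFS into an RDD (hypothetical path).
        val lines = sc.textFile("hdfs://namenode:8020/data/sample.txt")

        // Keep the filtered RDD in memory so repeated actions do not re-read from disk.
        val nonEmpty = lines.filter(_.nonEmpty).cache()
        println("Non-empty lines: " + nonEmpty.count())

        sc.stop()
      }
    }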

For these reasons, the Apache Spark computing model is well suited to distributed machine learning, and it is a particularly good fit for rapid interactive machine learning, parallel computing, and complex modeling at scale.

According to the Spark development team, Spark's philosophy is to make life easy and productive for data scientists and machine learning professionals. To that end, Apache Spark offers the following (a brief sketch follows this list):

  • Well-documented, expressive APIs
  • Powerful domain-specific libraries
  • Easy integration with storage systems
  • Caching to avoid data movement
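
To make these points concrete, here is a minimal, self-contained Scala sketch that uses MLlib, Spark's domain-specific machine learning library, together with RDD caching. The tiny hand-made dataset and the parameter values are illustrative assumptions, not examples from the book.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint

    object MllibSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("MllibSketch").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // A tiny, hand-made binary classification dataset (purely illustrative).
        val training = sc.parallelize(Seq(
          LabeledPoint(1.0, Vectors.dense(0.0, 1.1, 0.1)),
          LabeledPoint(0.0, Vectors.dense(2.0, 1.0, -1.0)),
          LabeledPoint(0.0, Vectors.dense(2.0, 1.3, 1.0)),
          LabeledPoint(1.0, Vectors.dense(0.0, 1.2, -0.5))
        )).cache() // caching avoids recomputing the RDD on each solver iteration

        // Fit a logistic regression model with MLlib's L-BFGS-based trainer.
        val model = new LogisticRegressionWithLBFGS()
          .setNumClasses(2)
          .run(training)

        println("Learned weights: " + model.weights)
        sc.stop()
      }
    }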

According to Patrick Wendell, co-founder of Databricks, Spark is designed especially for large-scale data processing. It supports agile data science with rapid iteration, and it can be integrated easily with IBM and other solutions.
