Large Scale Machine Learning with Python

Learn to build powerful machine learning models quickly and deploy large-scale predictive applications

Product type: Paperback
Published: August 2016
Publisher: Packt
ISBN-13: 9781785887215
Length: 420 pages
Edition: 1st
Authors (3): Alberto Boschetti, Bastiaan Sjardin, Luca Massaron
Table of Contents (12 chapters)

Preface
1. First Steps to Scalability
2. Scalable Learning in Scikit-learn
3. Fast SVM Implementations
4. Neural Networks and Deep Learning
5. Deep Learning with TensorFlow
6. Classification and Regression Trees at Scale
7. Unsupervised Learning at Scale
8. Distributed Environments – Hadoop and Spark
9. Practical Machine Learning with Spark
A. Introduction to GPUs and Theano
Index

Spark


Apache Spark is an evolution of Hadoop and has become very popular in the last few years. In contrast to Hadoop, with its Java-centric, batch-oriented design, Spark makes it fast and easy to implement iterative algorithms. Furthermore, it offers a rich suite of APIs for multiple programming languages and natively supports many different types of data processing (machine learning, streaming, graph analysis, SQL, and so on).
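
To make that concrete, here is a minimal sketch (not code from this book) of that unified API, assuming a local installation with the pyspark package: a single SparkSession gives access to DataFrames and Spark SQL, and the same entry point serves the MLlib and streaming APIs. The application name, the local[*] master, and the toy data are all invented for the example.

from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session; local[*] uses all local cores.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("spark-intro")
         .getOrCreate())

# Build a tiny DataFrame and query it with Spark SQL through the same session.
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "label"])
df.createOrReplaceTempView("items")
spark.sql("SELECT COUNT(*) AS n FROM items WHERE id > 1").show()

spark.stop()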

Apache Spark is a cluster framework designed for quick, general-purpose processing of big data. Part of its speed comes from the fact that, after every job, data is kept in memory rather than written to the filesystem (unless you choose to persist it), as would happen with Hadoop, MapReduce, and HDFS. This makes iterative jobs (such as the K-means clustering algorithm) much faster, since memory offers far better latency and bandwidth than a physical disk. Clusters running Spark therefore need a large amount of RAM for...
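
As an illustration of why that caching helps (a toy sketch, not the book's implementation; the dataset, starting centers, and iteration count are all made up), the RDD below is cached once, so each pass of this K-means-style loop reads the points from memory instead of rebuilding them:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")
         .appName("cache-demo")
         .getOrCreate())
sc = spark.sparkContext

# Toy 2-D points; cache() keeps the partitions in executor memory after the
# first action materializes them.
points = sc.parallelize(
    [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.1), (7.9, 8.3), (8.2, 7.8)]
).cache()

centers = [(0.0, 0.0), (10.0, 10.0)]  # arbitrary starting centers

def nearest(p, cs):
    # Index of the center closest to p by squared Euclidean distance.
    return min(range(len(cs)),
               key=lambda i: (p[0] - cs[i][0]) ** 2 + (p[1] - cs[i][1]) ** 2)

for _ in range(10):
    # Each iteration is a full pass over the cached data: assign every point
    # to its nearest center, then average each cluster's points.
    sums = (points
            .map(lambda p: (nearest(p, centers), (p[0], p[1], 1)))
            .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2]))
            .collectAsMap())
    centers = [(sx / n, sy / n) for sx, sy, n in (sums[i] for i in sorted(sums))]

print(centers)  # approaches the two true cluster means
spark.stop()

On a real cluster, where the points would be read from HDFS or another store, removing the cache() call would force every pass to re-read the data from disk, which is exactly the round trip the paragraph above describes.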
