Chapter 24. Scaling Data Science
So far we have covered a lot of material about data science: we learned how to do both supervised and unsupervised learning in Java, how to perform text mining, how to use XGBoost, and how to train deep neural networks. However, most of the methods and techniques we have used so far were designed to run on a single machine, under the assumption that all the data fits into memory. As you probably know, this is often not the case: there are very large datasets that cannot be processed with traditional techniques on typical hardware.
In this chapter, we will see how to process such datasets: we will look at tools that allow us to process data across several machines. We will cover two use cases: large-scale HTML processing of Common Crawl, a copy of the Web, and link prediction for a social network.
We will cover the following topics:
- Apache Hadoop MapReduce
- Common Crawl processing
- Apache Spark
- Link prediction
- Spark GraphFrames and MLlib libraries
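
To give a first flavor of what distributed processing looks like before we dive into the details, here is a minimal sketch of a word count job in Spark's Java API, the classic "hello world" of cluster computing. This is an illustration only: the class name `WordCount`, the input path `input.txt`, the output directory `counts`, and the `local[*]` master URL are all placeholder assumptions, not code from this chapter's use cases.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        // Run locally with all available cores; on a real cluster the
        // master URL would point at the cluster manager instead
        SparkConf conf = new SparkConf()
                .setAppName("word-count")
                .setMaster("local[*]");

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // "input.txt" is a placeholder path; Spark splits the file
            // into partitions and distributes them across executors
            JavaRDD<String> lines = sc.textFile("input.txt");

            JavaPairRDD<String, Integer> counts = lines
                    // split each line into words
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    // pair each word with the count 1
                    .mapToPair(word -> new Tuple2<>(word, 1))
                    // sum the counts for each word across all partitions
                    .reduceByKey(Integer::sum);

            // write one output file per partition into the "counts" directory
            counts.saveAsTextFile("counts");
        }
    }
}
```

The same program runs unchanged whether the data sits on a laptop or is spread over a cluster; only the master URL and the input location change. This is the property that makes tools such as Hadoop MapReduce and Spark attractive for the use cases in this chapter.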