Summary
In this chapter, we looked at running jobs on big data. By most standards, our dataset is actually quite small—only a few hundred megabytes. Many industrial datasets are much bigger, so extra processing power is needed to perform the computation. In addition, the algorithms we used can be optimized for different tasks to further increase scalability.
Our approach extracted word frequencies from blog posts in order to predict the gender of a document's author. We extracted the blogs and computed the word frequencies using MapReduce-based jobs written with mrjob. With those frequencies extracted, we then performed a Naive Bayes-esque computation to predict the gender of a new document.
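The core map and reduce pattern behind the word-frequency extraction can be sketched in plain Python. This is a toy, single-process simulation of the two phases; in the chapter the same logic runs distributed via mrjob, and the function names here are illustrative rather than the chapter's exact code:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit (word, 1) for every word in one line of text.
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    # Reduce phase: sum all the counts emitted for one word.
    return word, sum(counts)

def word_frequencies(lines):
    # Shuffle step: group the mapper's output by key,
    # then apply the reducer to each group.
    grouped = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            grouped[word].append(count)
    return dict(reducer(word, counts) for word, counts in grouped.items())

lines = ["the cat sat", "the dog sat"]
print(word_frequencies(lines))  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

In a real mrjob job the shuffle and grouping are handled by the framework, so only the mapper and reducer need to be written.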
We only scratched the surface of what you can do with MapReduce, and we did not even use it to its full potential for this application. To take the lessons further, convert the prediction function to a MapReduce job. That is, you train with MapReduce to obtain a model, and you then run that model as a MapReduce job to get...
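As a starting point for that exercise, the prediction step itself is small enough to sketch. A Naive Bayes-style classifier picks the gender whose per-word log probabilities sum highest; the model dictionary and names below are illustrative assumptions, not the chapter's exact format:

```python
import math

def predict_gender(document, model):
    # model maps gender -> {word: P(word | gender)}.
    # Score each gender by summing log probabilities of the document's
    # words, skipping unseen words (a simplification; a real version
    # would apply smoothing instead of skipping).
    scores = {}
    for gender, word_probs in model.items():
        scores[gender] = sum(
            math.log(word_probs[word])
            for word in document.lower().split()
            if word in word_probs
        )
    return max(scores, key=scores.get)

model = {
    "male": {"football": 0.05, "game": 0.04, "shopping": 0.01},
    "female": {"football": 0.01, "game": 0.02, "shopping": 0.05},
}
print(predict_gender("Football game tonight", model))  # male
```

Because each document is scored independently, this function drops naturally into a mapper: each mapper task loads the trained model, scores the documents in its split, and emits the predictions.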