Chapter 1, Data Science Using Java, provides an overview of the existing tools available in Java and introduces CRISP-DM, the methodology we use for approaching Data Science projects. In this chapter, we also introduce our running example: building a search engine.
Chapter 2, Data Processing Toolbox, reviews the standard Java library: the Collections API for storing data in memory, the IO API for reading and writing data, and the Streams API for organizing data processing pipelines in a convenient way. We will look at extensions to the standard library such as Apache Commons Lang, Apache Commons IO, Google Guava, and AOL Cyclops React. Then, we will cover the most common ways of storing data, including text and CSV files, HTML, JSON, and SQL databases, and discuss how we can get data from these sources. We will finish this chapter by talking about how we can collect the data for the running example, the search engine, and how we prepare that data.
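To give a quick taste of what this chapter works with, here is a minimal sketch that combines the IO and Streams APIs to read a CSV file; the file data/urls.csv and its url,position layout are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CsvExample {
    public static void main(String[] args) throws IOException {
        // read a hypothetical "url,position" CSV file line by line
        try (Stream<String> lines = Files.lines(Paths.get("data/urls.csv"))) {
            List<String> urls = lines
                    .skip(1)                            // skip the header row
                    .map(line -> line.split(","))
                    .filter(cells -> cells.length >= 2) // drop malformed rows
                    .map(cells -> cells[0])             // keep only the URL column
                    .collect(Collectors.toList());
            urls.forEach(System.out::println);
        }
    }
}
```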
Chapter 3, Exploratory Data Analysis, performs the initial analysis of data with Java: we look at how to calculate common statistics such as the minimum and maximum values, the average, and the standard deviation. We also talk a bit about interactive analysis and see which tools allow us to visually inspect the data before building models. For illustration in this chapter, we use the data we collect for the search engine.
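The standard library alone already gets us quite far here. A minimal sketch, using made-up toy values:

```java
import java.util.DoubleSummaryStatistics;
import java.util.stream.DoubleStream;

public class SummaryStatsExample {
    public static void main(String[] args) {
        double[] values = {12.0, 7.5, 9.1, 15.3, 11.0}; // toy data

        // min, max, and average in a single pass
        DoubleSummaryStatistics stats = DoubleStream.of(values).summaryStatistics();
        double mean = stats.getAverage();

        // DoubleSummaryStatistics does not track the standard deviation,
        // so we compute it with a second pass over the data
        double variance = DoubleStream.of(values)
                .map(v -> (v - mean) * (v - mean))
                .sum() / values.length;

        System.out.printf("min=%.2f max=%.2f mean=%.2f stddev=%.2f%n",
                stats.getMin(), stats.getMax(), mean, Math.sqrt(variance));
    }
}
```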
Chapter 4, Supervised Learning – Classification and Regression, starts with Machine Learning and then looks at models for performing supervised learning in Java. Among others, we look at how to use the Smile, JSAT, LIBSVM, LIBLINEAR, and Encog libraries, and we see how we can use them to solve classification and regression problems. We use two examples here. First, to illustrate classification, we use the search engine data to predict whether a URL will appear on the first page of results. Second, to illustrate regression, we predict how much time it takes to multiply two matrices on certain hardware, given its characteristics.
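To give a flavor of these libraries, here is a minimal classification sketch with LIBLINEAR (the liblinear-java port); the tiny dataset is made up, and in the chapter we work with real data:

```java
import de.bwaldvogel.liblinear.Feature;
import de.bwaldvogel.liblinear.FeatureNode;
import de.bwaldvogel.liblinear.Linear;
import de.bwaldvogel.liblinear.Model;
import de.bwaldvogel.liblinear.Parameter;
import de.bwaldvogel.liblinear.Problem;
import de.bwaldvogel.liblinear.SolverType;

public class LiblinearExample {
    public static void main(String[] args) {
        // toy training data: two features per example, binary labels
        double[][] x = {{0.1, 1.2}, {0.8, 0.3}, {0.2, 1.0}, {0.9, 0.1}};
        double[] y = {0, 1, 0, 1};

        Problem problem = new Problem();
        problem.l = x.length; // number of training examples
        problem.n = 2;        // number of features
        problem.y = y;
        problem.x = new Feature[x.length][];
        for (int i = 0; i < x.length; i++) {
            // LIBLINEAR feature indices are 1-based
            problem.x[i] = new Feature[]{
                    new FeatureNode(1, x[i][0]),
                    new FeatureNode(2, x[i][1])};
        }

        // L2-regularized logistic regression with C = 1.0 and tolerance 0.01
        Parameter parameter = new Parameter(SolverType.L2R_LR, 1.0, 0.01);
        Model model = Linear.train(problem, parameter);

        Feature[] unseen = {new FeatureNode(1, 0.15), new FeatureNode(2, 1.1)};
        System.out.println("predicted label: " + Linear.predict(model, unseen));
    }
}
```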
Chapter 5, Unsupervised Learning – Clustering and Dimensionality Reduction, explores the methods for Dimensionality Reduction available in Java, and we learn how to apply PCA and Random Projection to reduce the dimensionality of data. This is illustrated with the hardware performance dataset from the previous chapter. We also look at different ways to cluster data, including Agglomerative Clustering, K-Means, and DBSCAN, and we use a dataset of customer complaints as an example.
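As a quick preview of clustering, here is a minimal K-Means sketch; it uses Apache Commons Math rather than the libraries covered in the chapter, simply because it is compact, and the points are made up:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.DoublePoint;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

public class KMeansExample {
    public static void main(String[] args) {
        // toy 2-D points forming two loose groups
        List<DoublePoint> points = Arrays.asList(
                new DoublePoint(new double[]{1.0, 1.1}),
                new DoublePoint(new double[]{0.9, 1.0}),
                new DoublePoint(new double[]{8.0, 8.2}),
                new DoublePoint(new double[]{8.1, 7.9}));

        // cluster the points into k = 2 groups
        KMeansPlusPlusClusterer<DoublePoint> clusterer = new KMeansPlusPlusClusterer<>(2);
        List<CentroidCluster<DoublePoint>> clusters = clusterer.cluster(points);

        for (CentroidCluster<DoublePoint> cluster : clusters) {
            System.out.println("centroid: " + Arrays.toString(cluster.getCenter().getPoint())
                    + ", size: " + cluster.getPoints().size());
        }
    }
}
```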
Chapter 6, Working with Text – Natural Language Processing and Information Retrieval, looks at how to use text in Data Science applications, and we learn how to extract more useful features for our search engine. We also look at Apache Lucene, a library for full-text indexing and searching, and Stanford CoreNLP, a library for performing Natural Language Processing. Next, we look at how we can represent words as vectors, and we learn how to build such embeddings from co-occurrence matrices and how to use existing ones such as GloVe. We also look at how we can apply machine learning to texts, and we illustrate it with a sentiment analysis problem, where we apply LIBLINEAR to classify whether a review is positive or negative.
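To preview Apache Lucene, here is a minimal index-and-search sketch; the two example documents are made up, and some class names vary between Lucene versions (this sketch uses RAMDirectory for an in-memory index, as in the Lucene versions current at the time of writing):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class LuceneExample {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory directory = new RAMDirectory();

        // index two tiny documents
        try (IndexWriter writer = new IndexWriter(directory, new IndexWriterConfig(analyzer))) {
            for (String text : new String[]{"java data science", "search engines rank pages"}) {
                Document doc = new Document();
                doc.add(new TextField("body", text, Store.YES));
                writer.addDocument(doc);
            }
        }

        // search the index for documents matching "search"
        try (DirectoryReader reader = DirectoryReader.open(directory)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            for (ScoreDoc hit : searcher.search(new QueryParser("body", analyzer).parse("search"), 10).scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("body"));
            }
        }
    }
}
```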
Chapter 7, Extreme Gradient Boosting, covers how to use XGBoost in Java and applies it to two problems from earlier chapters: classifying whether a URL appears on the first page of results and predicting the time it takes to multiply two matrices. Additionally, we look at how to solve the learning-to-rank problem with XGBoost, again using our search engine example as an illustration.
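Here is a minimal taste of the XGBoost4J API, assuming training and test sets already saved in LIBSVM format under hypothetical paths:

```java
import java.util.HashMap;
import java.util.Map;
import ml.dmlc.xgboost4j.java.Booster;
import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.XGBoost;

public class XgbExample {
    public static void main(String[] args) throws Exception {
        // hypothetical paths to data in LIBSVM format
        DMatrix train = new DMatrix("data/train.libsvm");
        DMatrix test = new DMatrix("data/test.libsvm");

        Map<String, Object> params = new HashMap<>();
        params.put("objective", "binary:logistic"); // binary classification
        params.put("eta", 0.1);                     // learning rate
        params.put("max_depth", 6);

        Map<String, DMatrix> watches = new HashMap<>();
        watches.put("train", train); // report the train error after each round

        int rounds = 50;
        Booster booster = XGBoost.train(train, params, rounds, watches, null, null);

        float[][] predictions = booster.predict(test); // one probability per row
        System.out.println("first prediction: " + predictions[0][0]);
    }
}
```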
Chapter 8, Deep Learning with DeepLearning4j, covers Deep Neural Networks and DeepLearning4j, a library for building and training these networks in Java. In particular, we talk about Convolutional Neural Networks and see how we can use them for image recognition: predicting whether a picture shows a dog or a cat. Additionally, we discuss data augmentation, a way to generate more training data, and mention how we can speed up training using GPUs. We finish the chapter by describing how to rent a GPU server on Amazon AWS.
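As a preview of what DeepLearning4j configuration looks like, here is a minimal convolutional network sketch for two-class images; the layer sizes and image dimensions are arbitrary, and the builder API differs slightly between DL4J versions:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CnnExample {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .list()
                // 5x5 convolutions over 3-channel (RGB) input images
                .layer(0, new ConvolutionLayer.Builder(5, 5)
                        .nOut(16).activation(Activation.RELU).build())
                // 2x2 max pooling
                .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                        .kernelSize(2, 2).stride(2, 2).build())
                // two output classes: dog and cat
                .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(2).activation(Activation.SOFTMAX).build())
                // 64x64 RGB input; lets DL4J infer the nIn of each layer
                .setInputType(InputType.convolutional(64, 64, 3))
                .build();

        MultiLayerNetwork network = new MultiLayerNetwork(conf);
        network.init();
    }
}
```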
Chapter 9, Scaling Data Science, talks about the big data tools available in Java: Apache Hadoop and Apache Spark. We illustrate them by processing Common Crawl, a copy of the Internet, and calculating the TF-IDF of each document there. Additionally, we look at the graph processing tools available in Apache Spark and build a recommendation system for scientists, which suggests a coauthor for their next possible paper.
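As a small preview of the Spark Java API (in the Spark 2.x style), here is a word-count-style sketch that computes term frequencies, the building block of TF-IDF; the input path is hypothetical:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class TermFrequencies {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("term-frequencies").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // hypothetical input: one document per line
            JavaRDD<String> docs = sc.textFile("data/documents.txt");

            JavaPairRDD<String, Integer> termCounts = docs
                    .flatMap(line -> Arrays.asList(line.toLowerCase().split("\\s+")).iterator())
                    .mapToPair(term -> new Tuple2<>(term, 1))
                    .reduceByKey(Integer::sum);

            termCounts.take(10).forEach(t -> System.out.println(t._1() + ": " + t._2()));
        }
    }
}
```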
Chapter 10, Deploying Data Science Models, looks at how we can expose our models to the rest of the world in such a way that they are usable. Here, we cover Spring Boot and talk about how we can use the search engine model we developed to rank articles from Common Crawl. We finish by discussing ways to evaluate the performance of models in online settings and talk about A/B tests and Multi-Armed Bandits.
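As a preview of Spring Boot, here is a minimal sketch of a web service wrapping a model; the /rank endpoint and the stub scoring method are hypothetical stand-ins for the real ranking model from the chapter:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class RankingService {

    public static void main(String[] args) {
        SpringApplication.run(RankingService.class, args);
    }

    // GET /rank?query=...&url=... returns a relevance score for the page
    @GetMapping("/rank")
    public double rank(@RequestParam String query, @RequestParam String url) {
        return score(query, url);
    }

    // stub: a real service would load the trained model and apply it here
    private double score(String query, String url) {
        return 0.5;
    }
}
```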