Chapter 1, The Fundamentals of Machine Learning, defines machine learning as the study and design of programs that improve their performance at a task by learning from experience. This definition guides the other chapters; in each, we will examine a machine learning model, apply it to a task, and measure its performance.
Chapter 2, Simple Linear Regression, discusses a model that relates a single feature to a continuous response variable. We will learn about cost functions and use the normal equation to optimize the model.
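As a brief preview of the sort of model Chapter 2 builds, the following minimal sketch fits a simple linear regression with the normal equation; the toy feature values, responses, and variable names are illustrative assumptions, not drawn from the chapter.

import numpy as np

# Design matrix with a column of ones for the intercept, and a toy
# continuous response. The coefficients that minimize the residual sum
# of squares are given by the normal equation: beta = (X^T X)^-1 X^T y.
X = np.array([[1.0, 6], [1, 8], [1, 10], [1, 14], [1, 18]])
y = np.array([7.0, 9, 13, 17.5, 18])

beta = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta)  # [intercept, slope]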
Chapter 3, Classification and Regression with K-Nearest Neighbors, introduces a simple, nonlinear model for classification and regression tasks.
Chapter 4, Feature Extraction, describes methods for representing text, images, and categorical variables as features that can be used in machine learning models.
Chapter 5, From Simple Linear Regression to Multiple Linear Regression, discusses a generalization of simple linear regression that regresses a continuous response variable onto multiple features.
Chapter 6, From Linear Regression to Logistic Regression, further generalizes multiple linear regression and introduces a model for binary classification tasks.
Chapter 7, Naive Bayes, discusses Bayes’ theorem and the Naive Bayes family of classifiers, and compares generative and discriminative models.
Chapter 8, Nonlinear Classification and Regression with Decision Trees, introduces the decision tree, a simple, nonlinear model for classification and regression tasks.
Chapter 9, From Decision Trees to Random Forests and Other Ensemble Methods, discusses three methods for combining models: bagging, boosting, and stacking.
Chapter 10, The Perceptron, introduces a simple online model for binary classification.
Chapter 11, From the Perceptron to Support Vector Machines, discusses a powerful discriminative model for classification and regression called the support vector machine, together with the kernel trick, a technique for efficiently projecting features into higher-dimensional spaces.
Chapter 12, From the Perceptron to Artificial Neural Networks, introduces powerful nonlinear models for classification and regression built from graphs of artificial neurons.
Chapter 13, K-means, discusses a clustering algorithm that can be used to find structure in unlabeled data.
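To make the idea concrete, here is a minimal sketch of the iteration behind K-means (Lloyd's algorithm); the function name, toy data, and fixed iteration count are illustrative assumptions, and practical implementations add smarter initialization and convergence checks.

import numpy as np

def kmeans(X, k, n_iters=10, seed=0):
    # Start from k observations chosen at random as the initial centroids.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each observation to its nearest centroid.
        distances = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = distances.argmin(axis=1)
        # Move each centroid to the mean of its assigned observations,
        # keeping the old centroid if a cluster becomes empty.
        centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                              else centroids[j] for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(np.array([[1.0, 1], [1.5, 2], [8, 8], [8.5, 9]]), k=2)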
Chapter 14, Dimensionality Reduction with Principal Component Analysis, describes a method for reducing the dimensionality of data that can mitigate the curse of dimensionality.