Mastering Machine Learning Algorithms

You're reading from Mastering Machine Learning Algorithms: Expert techniques for implementing popular machine learning algorithms, fine-tuning your models, and understanding how they work.

Product type: Paperback
Published: January 2020
Publisher: Packt
ISBN-13: 9781838820299
Length: 798 pages
Edition: 2nd
Author: Giuseppe Bonaccorso

Table of Contents (28 chapters)

Preface
1. Machine Learning Model Fundamentals
2. Loss Functions and Regularization
3. Introduction to Semi-Supervised Learning
4. Advanced Semi-Supervised Classification
5. Graph-Based Semi-Supervised Learning
6. Clustering and Unsupervised Models
7. Advanced Clustering and Unsupervised Models
8. Clustering and Unsupervised Models for Marketing
9. Generalized Linear Models and Regression
10. Introduction to Time-Series Analysis
11. Bayesian Networks and Hidden Markov Models
12. The EM Algorithm
13. Component Analysis and Dimensionality Reduction
14. Hebbian Learning
15. Fundamentals of Ensemble Learning
16. Advanced Boosting Algorithms
17. Modeling Neural Networks
18. Optimizing Neural Networks
19. Deep Convolutional Networks
20. Recurrent Neural Networks
21. Autoencoders
22. Introduction to Generative Adversarial Networks
23. Deep Belief Networks
24. Introduction to Reinforcement Learning
25. Advanced Policy Estimation Algorithms
26. Other Books You May Enjoy
27. Index

What this book covers

Chapter 1, Machine Learning Model Fundamentals, explains the most important theoretical concepts regarding machine learning models, including bias, variance, overfitting, underfitting, data normalization, and scaling.
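
For instance, the scaling step can be sketched in a few lines with scikit-learn's StandardScaler; the tiny array below is made up purely for illustration:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: 4 samples, 2 features on very different scales
X = np.array([[1.0, 1000.0],
              [2.0, 1500.0],
              [3.0, 2000.0],
              [4.0, 2500.0]])

# Rescale each feature to zero mean and unit variance
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled.mean(axis=0))  # approximately [0, 0]
print(X_scaled.std(axis=0))   # approximately [1, 1]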

Chapter 2, Loss Functions and Regularization, continues the exploration of fundamental concepts focusing on loss functions and discussing their properties and applications. The chapter also introduces the reader to the concept of regularization, which plays a fundamental role in the majority of supervised methods.
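
As a minimal sketch of L2 regularization in practice (scikit-learn's Ridge on synthetic data; all values are arbitrary, not an example from the book):

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

# Synthetic regression problem; alpha controls the strength of the L2 penalty
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)
ridge = Ridge(alpha=1.0).fit(X, y)
print(ridge.coef_)  # coefficients shrunk toward zero by the penalty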

Chapter 3, Introduction to Semi-Supervised Learning, introduces the reader to the main elements of semi-supervised learning, discussing the main assumptions and focusing on generative algorithms, self-training, and co-training.
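
A minimal self-training sketch, assuming a scikit-learn version that ships SelfTrainingClassifier (0.24 or later); the synthetic data and the 80% unlabeled fraction are arbitrary:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

# Hide 80% of the labels: -1 marks an unlabeled sample
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) < 0.8] = -1

# The base classifier must expose predict_proba
model = SelfTrainingClassifier(SVC(probability=True, gamma='scale'))
model.fit(X, y_semi)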

Chapter 4, Advanced Semi-Supervised Classification, discusses the most important inductive and transductive semi-supervised classification methods, which overcome the limitations of simpler algorithms analyzed in Chapter 3.

Chapter 5, Graph-Based Semi-Supervised Learning, continues the exploration of semi-supervised learning algorithms belonging to the families of graph-based and manifold learning models. Label propagation and non-linear dimensionality reduction are analyzed in different contexts, providing some effective solutions that can be immediately exploited using scikit-learn functionalities.
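
For example, label propagation is available out of the box; a minimal sketch on synthetic two-moons data (all parameter values are arbitrary):

import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelPropagation

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
y_semi = np.full_like(y, -1)  # -1 marks unlabeled points
y_semi[:10] = y[:10]          # keep only the first 10 labels

lp = LabelPropagation(kernel='rbf', gamma=20.0).fit(X, y_semi)
print((lp.transduction_ == y).mean())  # transductive accuracy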

Chapter 6, Clustering and Unsupervised Models, introduces some common and important unsupervised algorithms, such as k-Nearest Neighbors (based on K-d trees and Ball Trees) and K-means (with K-means++ initialization). Moreover, the chapter discusses the most important metrics that can be employed to evaluate a clustering result.
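
A minimal K-means sketch paired with one such metric, the silhouette score (synthetic blobs, arbitrary parameters):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
km = KMeans(n_clusters=4, init='k-means++', n_init=10, random_state=0)
labels = km.fit_predict(X)
print(silhouette_score(X, labels))  # close to 1: compact, well-separated clusters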

Chapter 7, Advanced Clustering and Unsupervised Models, continues the discussion of more complex clustering algorithms, like spectral clustering, DBSCAN, and fuzzy clustering, which can solve problems that simpler methods fail to properly manage.
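
For instance, DBSCAN handles the non-convex two-moons dataset that K-means cannot separate; a sketch with arbitrary eps and min_samples values:

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)
db = DBSCAN(eps=0.2, min_samples=5).fit(X)
print(set(db.labels_))  # cluster ids; -1 marks points classified as noise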

Chapter 8, Clustering and Unsupervised Models for Marketing, introduces the reader to the concept of biclustering, which can be employed in marketing contexts to create recommender systems. The chapter also presents the Apriori algorithm, which allows us to perform Market Basket Analysis on extremely large transaction databases.
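
A minimal Market Basket Analysis sketch; the book's own implementation may differ, and the third-party mlxtend library is assumed here, with made-up transactions:

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical transaction database
transactions = [['bread', 'milk'],
                ['bread', 'diapers', 'beer'],
                ['milk', 'diapers', 'beer'],
                ['bread', 'milk', 'diapers']]

te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)
frequent = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric='confidence', min_threshold=0.7)
print(rules[['antecedents', 'consequents', 'confidence']])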

Chapter 9, Generalized Linear Models and Regression, discusses the main concept of generalized linear models and how to perform different kinds of regression analysis (including regularized, isotonic, polynomial, and logistic regressions).
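
As a quick illustration, logistic regression on a built-in scikit-learn dataset (the dataset choice and split are arbitrary):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print(lr.score(X_te, y_te))  # classification accuracy on the held-out set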

Chapter 10, Introduction to Time-Series Analysis, introduces the reader to the main concepts of time-series analysis, focusing on the properties of stochastic processes and on the fundamental models (AR, MA, ARMA, and ARIMA) that can be employed to perform effective forecasts.
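
A minimal forecasting sketch, assuming the statsmodels library and a synthetic AR(1) series:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a synthetic AR(1) process: y_t = 0.8 * y_{t-1} + noise
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal()

result = ARIMA(y, order=(1, 0, 0)).fit()  # ARIMA(p=1, d=0, q=0)
print(result.forecast(steps=5))           # 5-step-ahead forecast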

Chapter 11, Bayesian Networks and Hidden Markov Models, introduces the concepts of probabilistic modeling using directed acyclic graphs, Markov chains, and sequential processes. The chapter focuses on tools like PyStan and algorithms like HMM, which can be employed to model temporal sequences.
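
For a quick taste of HMM fitting, here is a sketch that assumes the third-party hmmlearn library (the chapter's own tooling may differ) and synthetic observations drawn from two regimes:

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic 1-D observations alternating between two regimes
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, 100),
                    rng.normal(5.0, 1.0, 100)]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, n_iter=50, random_state=0).fit(X)
states = hmm.predict(X)  # most likely hidden state sequence (Viterbi)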

Chapter 12, The EM Algorithm, explains the generic structure of the Expectation-Maximization (EM) algorithm. We discuss some common applications, such as generic parameter estimation, MAP and MLE approaches, and Gaussian mixtures.
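
scikit-learn's GaussianMixture is fitted precisely with EM, so it makes a compact illustration (synthetic blobs, arbitrary parameters):

from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
gm = GaussianMixture(n_components=3, random_state=0).fit(X)  # fitted via EM
print(gm.means_)                # estimated component means
print(gm.predict_proba(X[:5]))  # soft responsibilities from the E-step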

Chapter 13, Component Analysis and Dimensionality Reduction, introduces the reader to the main concepts of Principal Component Analysis, Factor Analysis, and Independent Component Analysis. These tools allow us to perform effective component analysis with different kinds of datasets and, if necessary, also a dimensionality reduction with controlled information loss.
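
A minimal PCA sketch on a built-in dataset, asking scikit-learn to keep however many components preserve 95% of the variance (the threshold is arbitrary):

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)
pca = PCA(n_components=0.95)       # keep 95% of the explained variance
X_red = pca.fit_transform(X)
print(X.shape, '->', X_red.shape)  # dimensionality reduced, ~5% information loss
print(pca.explained_variance_ratio_.sum())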

Chapter 14, Hebbian Learning, introduces Hebb's rule, which is one of the oldest neuro-scientific concepts and whose applications are incredibly powerful. The chapter explains how a single neuron works and presents two complex models (Sanger networks and Rubner-Tavan networks) that can perform a Principal Component Analysis without the input covariance matrix.
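
As a single-neuron preview of these models, here is a sketch of Oja's rule (a normalized Hebbian update, the simplest relative of the Sanger network) on made-up zero-mean data:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic zero-mean data with a dominant direction along the first axis
X = rng.normal(size=(1000, 2)) * np.array([3.0, 0.5])
X -= X.mean(axis=0)

w = rng.normal(size=2)
eta = 0.001
for _ in range(10):                 # a few epochs over the data
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)  # Hebbian term plus Oja's normalization term

w /= np.linalg.norm(w)
print(w)  # converges (up to sign) to the first principal component, ~[1, 0]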

Chapter 15, Fundamentals of Ensemble Learning, explains the main concepts of ensemble learning (bagging, boosting, and stacking), focusing on Random Forests and AdaBoost (with its variants both for classification and for regression).
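
Both families are a one-liner in scikit-learn; a minimal sketch on synthetic data:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)  # bagging of trees
ada = AdaBoostClassifier(n_estimators=100, random_state=0)     # boosting of stumps
print(cross_val_score(rf, X, y, cv=5).mean())
print(cross_val_score(ada, X, y, cv=5).mean())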

Chapter 16, Advanced Boosting Algorithms, continues the discussion of the most important ensemble learning models focusing on Gradient Boosting (with an XGBoost example), and voting classifiers.
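
A minimal sketch of both ingredients using scikit-learn's estimators (XGBoost exposes a compatible XGBClassifier that can be dropped in; all hyperparameters here are arbitrary):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
vote = VotingClassifier(
    estimators=[('gb', gb), ('lr', LogisticRegression(max_iter=1000))],
    voting='soft')  # average the predicted class probabilities
vote.fit(X, y)
print(vote.score(X, y))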

Chapter 17, Modeling Neural Networks, introduces the concepts of neural computation, starting with the behavior of a perceptron and continuing the analysis of the multi-layer perceptron, activation functions, back-propagation, stochastic gradient descent, dropout, and batch normalization.
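
Those building blocks map one-to-one onto Keras layers; a minimal sketch for a hypothetical 10-class problem with 64-dimensional inputs:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd',  # plain stochastic gradient descent
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()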

Chapter 18, Optimizing Neural Networks, analyzes the most important optimization algorithms that can improve the performance of stochastic gradient descent (including Momentum, RMSProp, and Adam) and how to apply regularization techniques to the layers of a deep network.
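
In Keras terms, swapping the optimizer and adding a per-layer penalty looks like this (all hyperparameter values are arbitrary):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation='relu',
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3,
                                       beta_1=0.9, beta_2=0.999),
    loss='mse')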

Chapter 19, Deep Convolutional Networks, explains the concept of convolution and discusses how to build and train an effective deep convolutional network for image processing. All the examples are based on Keras/TensorFlow 2.
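
A minimal convolutional architecture in the same Keras/TensorFlow 2 style, sized for hypothetical 28x28 grayscale images:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])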

Chapter 20, Recurrent Neural Networks, introduces the concept of recurrent neural networks to manage time-series and discusses the structure of LSTM and GRU cells, showing some practical examples of time-series modeling and prediction.
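
A minimal recurrent model in Keras that predicts the next value of a univariate series from a window of 20 past steps (the window size is arbitrary):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 1)),  # 20 time steps, 1 feature
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),              # next-value regression
])
model.compile(optimizer='adam', loss='mse')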

Chapter 21, Autoencoders, explains the main concepts of an autoencoder, discussing its applications in dimensionality reduction, denoising, and data generation (variational autoencoders).
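
A minimal Keras autoencoder that compresses hypothetical 784-dimensional inputs into a 32-dimensional code and reconstructs them:

import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
code = tf.keras.layers.Dense(32, activation='relu')(inputs)       # encoder
outputs = tf.keras.layers.Dense(784, activation='sigmoid')(code)  # decoder
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')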

Chapter 22, Introduction to Generative Adversarial Networks, explains the concept of adversarial training. We focus on Deep Convolutional GANs and Wasserstein GANs. Both techniques are extremely powerful generative models that can learn the structure of an input data distribution and generate brand new samples without any additional information.

Chapter 23, Deep Belief Networks, introduces the concepts of Markov random fields, Restricted Boltzmann Machines, and Deep Belief Networks. These models can be employed both in supervised and unsupervised scenarios with excellent performance.
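
scikit-learn offers the single building block, BernoulliRBM; a sketch on made-up binary data (stacking such layers is what yields a Deep Belief Network):

import numpy as np
from sklearn.neural_network import BernoulliRBM

# Hypothetical binary data: 200 samples, 64 visible units
rng = np.random.default_rng(0)
X = (rng.random((200, 64)) > 0.5).astype(float)

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
H = rbm.fit_transform(X)  # latent (hidden-unit) representation
print(H.shape)            # (200, 32)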

Chapter 24, Introduction to Reinforcement Learning, explains the main concepts of Reinforcement Learning (agent, policy, environment, reward, and value) and applies them to introduce policy and value iteration algorithms and Temporal-Difference Learning (TD(0)). The examples are based on a custom checkerboard environment.
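
A minimal value iteration sketch on a randomly generated toy MDP (the book uses a checkerboard environment instead; everything here is made up for illustration):

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)      # normalize into transition probabilities
R = rng.random((n_states, n_actions))  # random rewards

V = np.zeros(n_states)
for _ in range(200):
    Q = R + gamma * (P @ V)  # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V = Q.max(axis=1)        # greedy Bellman backup

policy = Q.argmax(axis=1)    # greedy policy w.r.t. the converged values
print(V, policy)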

Chapter 25, Advanced Policy Estimation Algorithms, extends the concepts defined in the previous chapter, discussing the TD(λ) algorithm, TD(0) Actor-Critic, SARSA, and Q-Learning. A basic example of Deep Q-Learning is also presented to allow the reader to immediately apply these concepts to more complex environments. Moreover, the OpenAI Gym environment is introduced and a policy gradient example is shown and analyzed.
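
The core of tabular Q-learning fits in a few lines; this sketch assumes a hypothetical step(s, a) function returning (next_state, reward, done):

import numpy as np

def q_learning(step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, eps=0.1, start_state=0):
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            # epsilon-greedy exploration
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next, r, done = step(s, a)
            # off-policy TD update toward the greedy target
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q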
