What this book covers
Chapter 1, Up and Running with MXNet, To start working with MXNet, we first need to install the library. Several versions of MXNet are available, and in this chapter we will cover how to choose the right one. The most important factor is the hardware available, because performance is optimized when we make full use of it. We will compare a well-known linear algebra library, NumPy, with the similar operations that MXNet provides, and then compare the performance of the different MXNet versions against NumPy.
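As a taste of the NumPy-to-MXNet comparison mentioned above, here is a minimal sketch. The MXNet equivalents shown in the comments use the NDArray (mx.nd) API; treat the exact operations and any benchmarking details as assumptions, since the chapter covers them properly:

```python
# NumPy operations with their (assumed) MXNet NDArray counterparts in comments.
import numpy as np

a = np.ones((3, 3))            # MXNet: mx.nd.ones((3, 3))
b = np.full((3, 3), 2.0)       # MXNet: mx.nd.full((3, 3), 2.0)
c = a @ b                      # MXNet: mx.nd.dot(a, b)
print(c[0, 0])                 # each entry is 1*2 summed over 3 terms -> 6.0
```

The appeal of the MXNet API is precisely this familiarity: array creation and linear algebra calls mirror NumPy, while execution can be dispatched to a GPU.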
Chapter 2, Working with MXNet and Visualizing Datasets: Gluon and DataLoader, In this chapter, we will start using MXNet to analyze some toy datasets in the domains of numerical regression, data classification, image classification, and text classification. To manage these tasks efficiently, we will introduce new MXNet libraries and classes such as Gluon and DataLoader.
Chapter 3, Solving Regression Problems, In this chapter, we will learn how to use the MXNet and Gluon libraries to apply supervised learning to regression problems. We will explore and understand a house prices dataset and learn how to predict the price of a house. To achieve this objective, we will train neural networks and study the effect of different hyperparameters.
Chapter 4, Solving Classification Problems, In this chapter, we will learn how to use the MXNet and Gluon libraries to apply supervised learning to classification problems. We will explore and understand a flowers dataset and learn how to predict the type of a flower from its measurements. To achieve this objective, we will train neural networks and study the effect of different hyperparameters.
Chapter 5, Analyzing Images with Computer Vision, In this chapter, the reader will understand the different architectures and operations available in MXNet/GluonCV for working with images. Furthermore, the reader will be introduced to classic Computer Vision problems: Image Classification, Object Detection, and Semantic Segmentation. They will then learn how to leverage the MXNet GluonCV Model Zoo to apply pre-existing models to these problems.
Chapter 6, Understanding Text with Natural Language Processing, In this chapter, the reader will understand the different architectures and operations available in MXNet/GluonNLP for working with text datasets. Furthermore, the reader will be introduced to classic Natural Language Processing problems: Word Embeddings, Text Classification, Sentiment Analysis, and Translation. They will then learn how to leverage the GluonNLP Model Zoo to apply pre-existing models to these problems.
Chapter 7, Optimizing Models with Transfer Learning and Fine-Tuning, In this chapter, the reader will understand how to optimize pre-trained models for specific tasks using Transfer Learning and Fine-Tuning techniques. Furthermore, the reader will compare the performance of these techniques against training a model from scratch and examine the trade-offs involved. The reader will apply these techniques to problems such as image classification, image segmentation, and translating text from English to German.
Chapter 8, Improving Training Performance with MXNet, In this chapter, the reader will learn how to leverage the MXNet and Gluon libraries to optimize deep learning training loops. The reader will learn how MXNet and Gluon can take advantage of computational paradigms such as Lazy Evaluation and Automatic Parallelization. Furthermore, the reader will also learn to optimize Gluon DataLoaders for CPUs and GPUs, apply Automatic Mixed Precision (AMP), and train with multiple GPUs.
Chapter 9, Improving Inference Performance with MXNet, In this chapter, the reader will learn how to leverage the MXNet and Gluon libraries to optimize deep learning inference. The reader will learn how MXNet and Gluon can take advantage of hybridizing Machine Learning models (combining imperative and symbolic programming). Furthermore, the reader will also learn to optimize inference time by applying the Float16 data type combined with AMP, quantizing models, and profiling to find further gains.
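As a preview of the hybridization idea mentioned above, the sketch below builds a small Gluon network imperatively and then hybridizes it so subsequent calls run through a compiled symbolic graph. The layer sizes are illustrative assumptions, and the import guard simply lets the snippet degrade gracefully where MXNet is not installed:

```python
# A minimal sketch of Gluon hybridization (Chapter 9); layer sizes are arbitrary.
try:
    import mxnet as mx
    from mxnet.gluon import nn

    net = nn.HybridSequential()
    net.add(nn.Dense(64, activation="relu"), nn.Dense(10))
    net.initialize()
    net.hybridize()                           # first forward pass builds the symbolic graph
    shape = net(mx.nd.ones((1, 32))).shape    # expected: (1, 10)
except ImportError:
    shape = None                              # MXNet not available in this environment
```

After hybridization, the same network object behaves identically from Python but avoids per-operation interpreter overhead, which is one of the inference optimizations the chapter explores.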