What this book covers
Chapter 1, Introduction to ML Engineering, explains the core concepts of machine learning engineering and machine learning operations (MLOps). Roles within ML teams are discussed in detail, and the challenges of ML engineering and MLOps are laid out.
Chapter 2, The Machine Learning Development Process, explores how to organize and successfully execute an ML engineering project. This includes a discussion of development methodologies like Agile, Scrum and CRISP-DM, before sharing a project methodology developed by the author that is referred to throughout the book. This chapter also introduces continuous integration/continuous deployment (CI/CD) and developer tools.
Chapter 3, From Model to Model Factory, shows how to standardize, systematize and automate the process of training and deploying machine learning models. This is done using the author’s concept of the model factory, a methodology for repeatable model creation and validation. This chapter also discusses key theoretical concepts important for understanding machine learning models and covers different types of drift detection and the criteria for triggering model retraining.
Chapter 4, Packaging Up, discusses best practices for coding in Python and how this relates to building your own packages, libraries and components for reuse in multiple projects. This chapter covers fundamental Python programming concepts before moving on to more advanced concepts, and then discusses package and environment management, testing, logging, error handling and security.
Chapter 5, Deployment Patterns and Tools, teaches you some of the standard ways you can design your ML system and then get it into production. This chapter focuses on architectures, system design and deployment patterns first, before moving on to using more advanced tools to deploy microservices, including containerization and AWS Lambda. The popular ZenML and Kubeflow pipelining and deployment platforms are then reviewed in detail with examples.
Chapter 6, Scaling Up, is all about developing with large datasets in mind. For this, the Apache Spark and Ray frameworks are discussed in detail with worked examples. The focus for this chapter is on scaling up batch workloads where massive compute is required.
Chapter 7, Deep Learning, Generative AI and LLMOps, covers the latest concepts and techniques for training and deploying deep learning models for production use cases. This chapter includes material discussing the new wave of generative models, with a particular focus on Large Language Models (LLMs) and the challenges for ML engineers looking to productionize these models. This leads us on to defining the core elements of LLM Operations (LLMOps).
Chapter 8, Building an Example ML Microservice, walks through the building of a machine learning microservice that serves a forecasting solution using FastAPI, Docker and Kubernetes. This pulls together many of the concepts developed throughout the previous chapters of the book.
Chapter 9, Building an Extract, Transform, Machine Learning Use Case, builds out an example of a batch-processing ML system that leverages standard ML algorithms and augments them with the use of LLMs. This shows a concrete application of LLMs and LLMOps, as well as providing a more advanced discussion of Airflow DAGs.