Machine Learning Infrastructure and Best Practices for Software Engineers

Take your machine learning software from a prototype to a fully fledged software system

By Miroslaw Staron. Packt, January 2024. 1st edition, paperback, 346 pages. ISBN-13: 9781837634064.
Table of Contents

Preface
1. Part 1: Machine Learning Landscape in Software Engineering
2. Machine Learning Compared to Traditional Software
3. Elements of a Machine Learning System
4. Data in Software Systems – Text, Images, Code, and Their Annotations
5. Data Acquisition, Data Quality, and Noise
6. Quantifying and Improving Data Properties
7. Part 2: Data Acquisition and Management
8. Processing Data in Machine Learning Systems
9. Feature Engineering for Numerical and Image Data
10. Feature Engineering for Natural Language Data
11. Part 3: Design and Development of ML Systems
12. Types of Machine Learning Systems – Feature-Based and Raw Data-Based (Deep Learning)
13. Training and Evaluating Classical Machine Learning Systems and Neural Networks
14. Training and Evaluation of Advanced ML Algorithms – GPT and Autoencoders
15. Designing Machine Learning Pipelines (MLOps) and Their Testing
16. Designing and Implementing Large-Scale, Robust ML Software
17. Part 4: Ethical Aspects of Data Management and ML System Development
18. Ethics in Data Acquisition and Management
19. Ethics in Machine Learning Systems
20. Integrating ML Systems in Ecosystems
21. Summary and Where to Go Next
22. Index
23. Other Books You May Enjoy

Monitoring ML systems at runtime

Monitoring pipelines in production is a critical aspect of MLOps, as it ensures the performance, reliability, and accuracy of deployed ML models. It involves several practices.

The first practice is logging and collecting metrics. This activity includes instrumenting the ML code with logging statements to capture relevant information during model training and inference. Key metrics to monitor are model accuracy, data drift, latency, and throughput. Popular logging and monitoring frameworks include Prometheus, Grafana, and the Elasticsearch, Logstash, and Kibana (ELK) stack.
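
As a minimal sketch (not the book's code), the following Python snippet shows what such instrumentation could look like, using the standard logging module together with the prometheus_client library. The metric names, the predict_fn stand-in, and the port are assumptions made for the example.

import logging
import time

from prometheus_client import Counter, Histogram, start_http_server

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml-inference")

# Metrics exposed for scraping by Prometheus (names are illustrative)
PREDICTIONS_TOTAL = Counter(
    "model_predictions_total", "Number of predictions served"
)
PREDICTION_LATENCY = Histogram(
    "model_prediction_latency_seconds", "Latency of a single prediction"
)

def predict_fn(features):
    # Stand-in for the real model call
    return sum(features)

def predict(features):
    """Serve one prediction while logging it and recording metrics."""
    start = time.time()
    prediction = predict_fn(features)
    latency = time.time() - start

    PREDICTIONS_TOTAL.inc()
    PREDICTION_LATENCY.observe(latency)
    logger.info("prediction=%s latency=%.4fs", prediction, latency)
    return prediction

start_http_server(8000)  # exposes /metrics for Prometheus to scrape (port is an example)
predict([0.1, 0.2, 0.3])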

The second practice is alerting: setting up alerts based on predefined thresholds for key metrics. This helps to proactively identify issues or anomalies in the production pipeline. When an alert is triggered, the appropriate team members can be notified to investigate and address the problem promptly.
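
In a Prometheus-based setup, such thresholds are typically expressed as alerting rules evaluated by the monitoring system itself; the sketch below illustrates the same idea directly in Python. The thresholds and the notify_on_call helper, which stands in for paging or chat integration, are assumptions made for the example.

# Threshold-based alerting sketch (thresholds and notification are illustrative)
THRESHOLDS = {
    "accuracy_min": 0.90,      # alert if accuracy drops below this value
    "latency_p95_max": 0.250,  # alert if p95 latency exceeds this value, in seconds
}

def notify_on_call(message):
    # In production this would page the on-call team, e.g., via a chat or incident webhook
    print(f"ALERT: {message}")

def check_metrics(current):
    """Compare current metric values against thresholds and alert on breaches."""
    if current["accuracy"] < THRESHOLDS["accuracy_min"]:
        notify_on_call(f"Model accuracy dropped to {current['accuracy']:.3f}")
    if current["latency_p95"] > THRESHOLDS["latency_p95_max"]:
        notify_on_call(f"p95 latency is {current['latency_p95']:.3f}s")

check_metrics({"accuracy": 0.87, "latency_p95": 0.310})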

The third practice is data drift detection, which includes monitoring the distribution...
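
As a rough illustration of monitoring a feature's distribution, the following sketch compares a reference (training-time) sample against recent production values using SciPy's two-sample Kolmogorov-Smirnov test. The per-feature approach, the synthetic data, and the 0.05 significance threshold are assumptions for the example, not the book's method.

# Illustrative drift check on one numerical feature
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=1_000)   # e.g., values seen during training
production = rng.normal(loc=0.4, scale=1.0, size=1_000)  # e.g., recent inference inputs

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution differs from the reference distribution
statistic, p_value = ks_2samp(reference, production)
if p_value < 0.05:  # example significance threshold
    print(f"Possible data drift (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected for this feature")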
