Summary
In this chapter, we explored various methods for evaluating and interpreting ML models. We learned about production testing methods, why and how to package models, and the practicalities and tools involved in packaging models for ML model inference in production. Lastly, to understand how packaging and de-packaging serialized models works, we performed a hands-on implementation of ML model inference using serialized models on test data.
In the next chapter, we will learn more about deploying your ML models. Fasten your seatbelts and get ready to deploy your models to production!