Summary
In this chapter, we described techniques for evaluating AI models, showing you how to measure a model's performance and how to judge when it is good enough for your application. We introduced the concept and methods of model monitoring, both during training and in production, and walked through an example of experiment tracking and monitoring with MLflow. Together, these methods and tools form a basis for building a performant, stable, and resilient ML system. Finally, we described human-in-the-loop approaches to AI, introduced the idea of active learning, and covered the scenarios and strategies applied in the active learning paradigm. These strategies are often needed in cybersecurity because prelabeled data is scarce.
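As a quick reminder of what MLflow experiment tracking looks like in practice, here is a minimal sketch; the experiment name, parameter names, and metric values are illustrative placeholders, not the exact setup used earlier in the chapter.

```python
import mlflow

# Hypothetical experiment name -- a placeholder, not the chapter's exact setup
mlflow.set_experiment("malware-classifier-eval")

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters used for this training run (placeholder values)
    mlflow.log_param("max_depth", 8)
    mlflow.log_param("n_estimators", 200)

    # ... train and evaluate the model here ...

    # Log evaluation metrics so runs can be compared in the MLflow UI (placeholder values)
    mlflow.log_metric("precision", 0.91)
    mlflow.log_metric("recall", 0.87)
```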
In the next chapter, we’ll take the ideas of model monitoring and workflows and dive into the scenarios where...