Evaluation, Monitoring, and Feedback Loop
In the previous parts of the book, we covered AI models and their applications: how to find an appropriate model for a given cybersecurity use case, understand it, and train it on the available data. In this chapter, you will learn how to evaluate a trained model to confirm that it performs as required.
Before an AI model becomes part of an IT system, its functionality and performance must be properly tested and evaluated, and that evaluation, testing, and monitoring should continue throughout the model's life to maintain its performance and the overall quality of service. Monitoring covers not only the quality of predictions but also parameters such as latency, availability, and bias. We'll introduce tools that enable evaluation and performance tracking. Lastly, we'll consider that models often need a human in the loop to maintain and improve their quality over time.
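To make the two most common monitoring dimensions concrete before diving in, here is a minimal sketch that measures both prediction quality and latency for a classifier. It uses scikit-learn with a synthetic, imbalanced dataset as a stand-in for real security data; the class names ("benign"/"malicious") and all parameters are illustrative assumptions, not a prescription for your pipeline.

import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a cybersecurity dataset (90% benign, 10% malicious).
X, y = make_classification(n_samples=5_000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Latency: time the batch prediction and report per-sample cost.
start = time.perf_counter()
y_pred = model.predict(X_test)
elapsed = time.perf_counter() - start

# Quality of predictions: per-class precision, recall, and F1.
print(classification_report(y_test, y_pred,
                            target_names=["benign", "malicious"]))
print(f"Mean prediction latency: {elapsed / len(X_test) * 1e6:.1f} µs per sample")

In production, numbers like these would be computed continuously on fresh traffic and tracked over time, rather than once on a held-out test set; the tools introduced later in this chapter automate exactly that kind of tracking.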