With either online or offline learning, we should always institute systems and safety checks that tell us when our model's predictions, or even its critical deployment architecture, are out of whack. By testing, we are referring to the hard-coded checking of inputs, outputs, and errors to ensure that our model is performing as intended. In standard software testing, every input has a defined expected output. This becomes difficult in machine learning, where a model's outputs vary depending on a host of factors - not great for standard testing procedures, is it? In this section, we'll walk through the process of testing machine learning code and discuss best practices.
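Since we can't pin an exact expected output to every input, one common workaround is to test *properties* of the model's behavior instead: output shapes, valid value ranges, and how malformed inputs are handled. The following is a minimal sketch of such checks written as pytest tests; the `N_FEATURES` and `LABELS` constants, the synthetic training data, and the scikit-learn `LogisticRegression` model are all hypothetical stand-ins for your own pipeline.

```python
# A minimal sketch of hard-coded checks on a model's inputs and outputs,
# expressed as pytest tests. The model and constants below are assumptions
# standing in for a real pipeline.
import numpy as np
import pytest
from sklearn.linear_model import LogisticRegression

N_FEATURES = 4   # assumed input width for this example
LABELS = {0, 1}  # assumed set of valid class labels

@pytest.fixture(scope="module")
def model():
    # Train on tiny synthetic data so the test suite is self-contained.
    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(100, N_FEATURES))
    y = (X[:, 0] > 0).astype(int)
    return LogisticRegression().fit(X, y)

def test_output_shape_and_labels(model):
    # One prediction per row, and every prediction is a known label.
    X = np.zeros((5, N_FEATURES))
    preds = model.predict(X)
    assert preds.shape == (5,)
    assert set(preds).issubset(LABELS)

def test_probabilities_are_valid(model):
    # Probabilities must lie in [0, 1] and each row must sum to 1.
    X = np.ones((3, N_FEATURES))
    probs = model.predict_proba(X)
    assert np.all((probs >= 0) & (probs <= 1))
    assert np.allclose(probs.sum(axis=1), 1.0)

def test_rejects_malformed_input(model):
    # A wrong feature count should raise, not silently mispredict.
    with pytest.raises(ValueError):
        model.predict(np.zeros((2, N_FEATURES + 1)))
```

Notice that none of these tests asserts an exact predicted value; they assert invariants that should hold no matter how the model was trained, which makes them stable enough to run in an automated pipeline.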
Once deployed, AI applications also have to be maintained. DevOps tools like Jenkins can help ensure that tests pass before...