Testing to ensure stability and improve accuracy
With the initial development of the use case complete, we can test how well our automation performs. Testing any automated workflow before deployment is crucial to ensure the automation works as expected. In this section, we will test with sample data and add retraining functionality to our automation so that it can retrain the ML skill.
Testing with sample emails
To test the functionality of our use case, we must prepare our inbox with sample email messages for the automation to run through. In Chapter 7, Testing and Refining Development Efforts, we discussed that when training cognitive automation, we should have multiple datasets to test our ML skill, as outlined here (a sketch of preparing such a split follows the list):
- Training dataset: The initial dataset used to train the ML model
- Evaluation dataset: A smaller set of data used to evaluate the training of the ML model
- Testing dataset: An unbiased dataset used to expose the trained ML model to previously unseen data
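As a rough illustration of this three-way split (outside of UiPath, where AI Center typically handles this during training), here is one way it might be prepared in Python. The file name and column names are assumptions for illustration only, not part of the use case.

```python
# Minimal sketch: splitting a labeled email dataset into the three sets
# described above. "sample_emails.csv" and its columns ("body", "label")
# are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split

emails = pd.read_csv("sample_emails.csv")  # columns: body, label

# Carve out 70% for training, leaving 30% for evaluation and testing.
train_set, remainder = train_test_split(
    emails, test_size=0.30, stratify=emails["label"], random_state=42
)

# Split the remainder evenly: 15% evaluation, 15% testing overall.
eval_set, test_set = train_test_split(
    remainder, test_size=0.50, stratify=remainder["label"], random_state=42
)

print(len(train_set), len(eval_set), len(test_set))
```

Stratifying on the label keeps the class balance consistent across all three sets, so evaluation and testing results remain representative of the training distribution.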