Testing to ensure stability and improve accuracy
With the initial development of the use case complete, we can now test how well the automation performs with the ML Classifier and ML Extractor. Testing any automated workflow before deployment is crucial to ensure that it works as expected. In this section, we will enable the Validation Station for the ML Classifier and ML Extractor, and then begin testing with sample data.
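Before we do, it helps to be concrete about what "works as expected" means for this use case. The short sketch below is purely illustrative: the field names, sample values, and the field_accuracy helper are assumptions made for the example, not output produced by the UiPath activities we have configured. It shows one simple way to think about accuracy, by comparing the fields an extractor returns for sample documents against the values a human has confirmed, and reporting the share of fields that match:

# Illustrative sketch (assumed data shapes, not a UiPath API): compare the
# extractor's output against human-confirmed values to measure field-level
# accuracy on a set of sample receipts.

# Fields extracted automatically for each test document.
predicted = [
    {"vendor": "Acme Stores", "total": "23.50", "date": "2021-04-12"},
    {"vendor": "Corner Cafe", "total": "9.99", "date": "2021-04-13"},
]

# The same fields after a human confirmed or corrected them.
validated = [
    {"vendor": "Acme Stores", "total": "23.50", "date": "2021-04-12"},
    {"vendor": "Corner Café", "total": "9.99", "date": "2021-04-13"},
]

def field_accuracy(predicted, validated):
    """Return the share of fields whose extracted value matches the validated value."""
    total_fields = 0
    correct_fields = 0
    for pred_doc, valid_doc in zip(predicted, validated):
        for field, valid_value in valid_doc.items():
            total_fields += 1
            if pred_doc.get(field) == valid_value:
                correct_fields += 1
    return correct_fields / total_fields if total_fields else 0.0

print(f"Field-level accuracy: {field_accuracy(predicted, validated):.0%}")

In this sketch, one corrected vendor name out of six fields gives roughly 83% field-level accuracy; the human corrections are exactly the kind of feedback the Validation Station captures so that the ML models can be retrained on them.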
Enabling the Validation Station
Earlier, during the development of the use case, we deployed the DocumentUnderstanding classifier and the Receipts ML skill to act as our classifier and extractor, respectively. One of the reasons we deployed these skills to AI Center was so that we could retrain them using the Validation Station. This allows us to manually validate the automation's performance and retrain the ML models, something we call Closing the Feedback Loop (Figure 8.36):