Closing the feedback loop
As cognitive automation moves from UAT into production, it can encounter data that was not present in the training, evaluation, and test sets. We expect the deployed model to handle new data based on its training, but there will be times when the model returns an unsatisfactory result, or when there are opportunities to further improve its performance based on the data it encounters.
This is where closing the feedback loop can play a large role in the performance of an ML model. By closing the feedback loop on a Document Understanding or AI Center ML skill, we can capture unseen data points, have a human send feedback to the ML skill, and continuously retrain the skill with the new data. You can see a representation of closing the feedback loop in the following screenshot:
With UiPath, developers can use a confidence threshold to allow automation to continue unattended when the model's confidence is high, while routing low-confidence results to a human for validation. The validated results can then be sent back to the ML skill as new training data, closing the loop.
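To make the routing concrete, here is a minimal Python sketch of threshold-based routing with human-in-the-loop feedback capture, under stated assumptions: the threshold value, the `Extraction` type, `validate_with_human`, and `feedback_store` are illustrative names, not UiPath APIs. In a real workflow, the validation step would typically be an Action Center task, and the captured data would feed an AI Center retraining pipeline.

```python
from dataclasses import dataclass

# Assumed threshold; in practice this is tuned per document type and use case.
CONFIDENCE_THRESHOLD = 0.85


@dataclass
class Extraction:
    """One extracted field with the model's confidence in [0, 1]."""
    field: str
    value: str
    confidence: float


# Stands in for a labeled dataset that accumulates human-verified examples.
feedback_store = []


def validate_with_human(extraction: Extraction) -> str:
    # Placeholder for a human-in-the-loop step such as an Action Center task.
    print(f"Review needed for {extraction.field!r}: "
          f"{extraction.value!r} (confidence {extraction.confidence:.2f})")
    return input("Corrected value: ")


def process(extraction: Extraction) -> str:
    """Route a prediction: straight through if confident, else to a human."""
    if extraction.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: automation continues unattended.
        return extraction.value
    # Low confidence: a human reviews and corrects the value.
    corrected = validate_with_human(extraction)
    # Capture the human-verified data point for the next training run,
    # closing the feedback loop.
    feedback_store.append(Extraction(extraction.field, corrected, 1.0))
    return corrected
```

Periodically exporting the contents of `feedback_store` as a dataset and triggering a retraining pipeline, for example on a schedule in AI Center, is what completes the loop in this sketch.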