Using a trained model for inference
Now that you have trained some models using Elastic, we will look at how to use them for prediction through a process called inference: applying a trained ML model to incoming data in a continuous way. In the Elastic Stack, this typically happens through an inference processor in an ingest pipeline, or through the inference pipeline aggregation.
In this recipe, we'll build upon the classification model we trained in the previous recipe, configure it in an ingest pipeline's inference processor, and use it for prediction.
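To give a feel for what this configuration looks like, here is a minimal sketch of an ingest pipeline with an inference processor, following the Elasticsearch ingest pipeline API. The pipeline name `my_inference_pipeline` and the `model_id` value are placeholders; you would substitute the ID of the model you trained in the previous recipe.

```json
PUT _ingest/pipeline/my_inference_pipeline
{
  "description": "Runs a trained classification model on incoming documents",
  "processors": [
    {
      "inference": {
        "model_id": "my-classification-model",
        "target_field": "ml.inference",
        "inference_config": {
          "classification": {
            "num_top_classes": 2,
            "results_field": "prediction"
          }
        }
      }
    }
  ]
}
```

Documents indexed through this pipeline (for example, by setting `?pipeline=my_inference_pipeline` on an index request) would then carry the model's prediction under the `ml.inference` field.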
Getting ready
Make sure you have worked through the following recipes:
- Creating a Logstash pipeline in Chapter 5
- Creating visualizations from runtime fields in Chapter 6
- Building a model to perform regression analysis in this chapter
The snippets for this recipe can be found at https://github.com/PacktPublishing/Elastic-Stack-8.x-Cookbook/blob/main/Chapter8/snippets.md#using-trained-model-for...