Creating an API process for inference
The code required for this section is in the pystock-inference-api folder. The MLflow infrastructure is provided in the Docker image that accompanies the code.
Setting up an API system is straightforward if we rely on the MLflow built-in REST API environment. We will use an artifact store on the local filesystem to test the APIs.
With the following set of commands, which at its core consists of invoking the models serve command of the MLflow CLI, we can serve our model:
cd /gradflow/
export MLFLOW_TRACKING_URI=http://localhost:5000
mlflow models serve -m "models:/training-model-psystock/Production" -p 6000
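Once the server is up, we can sanity-check the endpoint with a request to /invocations. The following is a minimal sketch only: the feature names and values are placeholders, and the exact payload format depends on your MLflow version (1.x accepts the pandas-split orientation shown here, while newer versions expect the body wrapped in a dataframe_split key).

# Query the locally served model (payload schema is an assumption;
# adjust the columns and values to your model's input signature).
curl -X POST http://localhost:6000/invocations \
  -H 'Content-Type: application/json; format=pandas-split' \
  -d '{"columns": ["feature_1", "feature_2"], "data": [[0.1, 0.2]]}'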
Next, we will package the preceding commands into a Docker image so that it can be deployed in any environment. The steps to achieve this are as follows (a build-and-run sketch appears after the list):
- Generate a Docker image specifying the work directory and the...
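As an illustration of where these steps lead, the commands below build and run such an image. This is a sketch under stated assumptions: the image tag pystock-inference-api is a hypothetical name, and the tracking URI must be replaced with an address the container can actually reach (localhost inside a container does not point at the host on Linux).

# Build the inference image from the folder holding the Dockerfile
# (the tag is a hypothetical name, not one shipped with the code).
docker build -t pystock-inference-api .
# Run the image, exposing the serving port used above; the tracking
# URI is an assumption, so replace it with one reachable from inside
# the container.
docker run -p 6000:6000 \
  -e MLFLOW_TRACKING_URI=http://localhost:5000 \
  pystock-inference-api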