Implementing an inference pipeline as a new entry point in the main MLproject
Now that we have successfully implemented a multi-step inference pipeline as a new custom MLflow model, we can go one step further and incorporate it as a new entry point in the main MLproject file, so that the entire pipeline can be run end to end (Figure 7.8). Check out this chapter's code from GitHub to follow along and run the pipeline in your local environment.
We can add a new entry point, inference_pipeline_model, to the MLproject file. You can check out this file in the GitHub repository (https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow/blob/main/chapter07/MLproject):
inference_pipeline_model:
  parameters:
    finetuned_model_run_id: { type: str, default: None }
  command: "python pipeline...
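With this entry point defined, the inference pipeline can be launched through the standard MLflow CLI. The following is a minimal sketch of such an invocation, run from the chapter's root folder; the run ID value is a placeholder that you would replace with the actual run ID of your fine-tuned model from the tracking server:

# run the inference_pipeline_model entry point of the local MLproject,
# passing the finetuned_model_run_id parameter defined above
mlflow run . -e inference_pipeline_model -P finetuned_model_run_id=<your_finetuned_model_run_id>

MLflow substitutes the -P value into the command template declared under the entry point, so the pipeline script receives the run ID it needs to locate the fine-tuned model before executing the multi-step inference logic.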