Implementing preprocessing and postprocessing steps in a DL inference pipeline
Now that we have a basic generic MLflow Python model that can take an input pandas DataFrame and produce predictions in an output pandas DataFrame, we are ready to tackle the multi-step inference scenario described earlier. While the initial implementation in the previous section may not look earth-shattering, it opens the door to preprocessing and postprocessing logic that was not possible before, while still letting us use the generic mlflow.pyfunc.log_model and mlflow.pyfunc.load_model APIs to treat the entire inference pipeline as a single pyfunc model, regardless of how complex the original DL model is or how many additional preprocessing and postprocessing steps there are. Let's see how we can do this in this section. You may want to check out the VS Code notebook for multistep_inference_model.py from GitHub (https://github.com/PacktPublishing/Practical-Deep...
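Before walking through the notebook, here is a minimal sketch of the pattern this section builds on: a custom mlflow.pyfunc.PythonModel whose predict method chains a preprocessing step, the wrapped DL model, and a postprocessing step. The class name InferencePipeline, the text input column, and the placeholder preprocess and postprocess bodies are illustrative assumptions for this sketch, not the exact code in multistep_inference_model.py:

# A minimal sketch (illustrative, not the notebook's exact implementation) of
# wrapping a DL model plus pre/postprocessing as one generic pyfunc model.
import mlflow
import pandas as pd

class InferencePipeline(mlflow.pyfunc.PythonModel):
    """Treats a fine-tuned DL model and its pre/postprocessing as one pyfunc model."""

    def __init__(self, finetuned_model_uri):
        # URI of the already-logged DL model; illustrative parameter name
        self.finetuned_model_uri = finetuned_model_uri

    def load_context(self, context):
        # Load the underlying DL model once, when the pyfunc model itself is loaded
        self.model = mlflow.pyfunc.load_model(self.finetuned_model_uri)

    def preprocess(self, text):
        # Placeholder preprocessing step (for example, cleaning the raw input text)
        return text

    def postprocess(self, prediction):
        # Placeholder postprocessing step (for example, formatting the response)
        return str(prediction)

    def predict(self, context, model_input: pd.DataFrame) -> pd.DataFrame:
        # Input and output are both pandas DataFrames, just like the basic model
        results = []
        for row in model_input["text"]:
            cleaned = self.preprocess(row)
            pred = self.model.predict(pd.DataFrame({"text": [cleaned]}))
            results.append(self.postprocess(pred))
        return pd.DataFrame({"prediction": results})

# Logging and loading the whole pipeline then works like any other pyfunc model:
# with mlflow.start_run():
#     mlflow.pyfunc.log_model(
#         artifact_path="inference_pipeline",
#         python_model=InferencePipeline("models:/dl_model/1"),  # hypothetical URI
#     )

Because the preprocessing and postprocessing live inside the same PythonModel subclass, callers that load this model with mlflow.pyfunc.load_model see only the familiar DataFrame-in, DataFrame-out interface, no matter how many steps run internally.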