Deploying inference pipelines
Real-life machine learning scenarios often involve more than one model. For example, you may need to run preprocessing steps on incoming data, or reduce its dimensionality with the PCA algorithm.
Of course, you could deploy each model to a dedicated endpoint. However, you would then need orchestration code to pass prediction requests to each model in sequence, and multiplying endpoints would also introduce additional costs.
Instead, an inference pipeline lets you deploy up to five models on the same endpoint, or use them for batch transform, and it automatically handles the prediction sequence.
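Conceptually, an inference pipeline chains model invocations so that each model's output becomes the next model's input. The following minimal Python sketch illustrates that sequencing logic; the functions are hypothetical stand-ins for the deployed containers, not SageMaker API calls:

```python
# Conceptual sketch of an inference pipeline: each stage's output
# is fed to the next stage, exactly once per prediction request.
# These functions are hypothetical stand-ins for model containers.

def preprocess(record):
    # Stand-in for a preprocessing container: scale the features.
    return [x / 10.0 for x in record]

def pca_stub(record):
    # Stand-in for PCA: reduce dimensionality by keeping two components.
    return record[:2]

def linear_learner_stub(record):
    # Stand-in for Linear Learner: compute a fixed linear score.
    return 0.5 * record[0] + 1.5 * record[1]

def invoke_pipeline(models, record):
    """Pass a prediction request through each model in sequence."""
    for model in models:
        record = model(record)
    return record

pipeline = [preprocess, pca_stub, linear_learner_stub]
print(invoke_pipeline(pipeline, [10, 20, 30, 40]))  # 0.5*1.0 + 1.5*2.0 = 3.5
```

With a real inference pipeline, this sequencing happens inside the endpoint itself, so the client sends a single request and receives only the final model's prediction.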
Let's say that we wanted to run PCA first and then Linear Learner. Building the inference pipeline would look like this:
- Train the PCA model on the input dataset.
- Process the training and validation sets with PCA and store the results in S3. Batch Transform is a good way to do this.
- Train the Linear Learner model on the datasets processed by PCA.
- Use the...