Creating the BYOM local inference model
With BYOM local inference, a machine learning model and its dependencies are packaged into a set of files and deployed to Amazon Redshift, where the data is stored, allowing you to make predictions directly on that data. The model artifacts and their dependencies are produced when the model is trained on the Amazon SageMaker platform. Because the model runs inside Redshift itself, the data is never moved over the network to another service. Local inference is therefore useful when the data is sensitive or when low-latency predictions are required.
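To make the deployment step concrete, the following is a minimal sketch of importing a trained SageMaker model into Redshift with Redshift ML's CREATE MODEL statement, executed here through the Redshift Data API. The cluster identifier, database, user, model and function names, training job name, IAM role ARN, and S3 bucket are all placeholder values; substitute your own.

```python
import boto3

# Redshift Data API client (assumes AWS credentials are configured).
client = boto3.client("redshift-data")

# CREATE MODEL imports the SageMaker model artifacts into Redshift and
# exposes a SQL function for local inference. All identifiers below are
# hypothetical placeholders.
create_model_sql = """
CREATE MODEL demo_byom_local
FROM 'xgboost-regression-job-2023-01-01-00-00-00'  -- SageMaker training job name
FUNCTION predict_value(float, float, float)        -- input column types
RETURNS float                                      -- prediction type
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
"""

response = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=create_model_sql,
)
print(response["Id"])  # statement ID, useful for polling execution status
```

Once the model is created, predictions run entirely inside Redshift; for example, a query such as SELECT predict_value(col1, col2, col3) FROM my_table would invoke the imported model locally.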
Let’s start creating the BYOM local inference model.
Creating a local inference model
To create the BYOM local inference model, the first step is to train and validate a model on Amazon SageMaker. For this purpose, we will train and validate an XGBoost regression model on Amazon SageMaker. Follow the instructions found here...
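As a rough sketch of that training step, the following uses the SageMaker Python SDK to train the built-in XGBoost algorithm with a regression objective. The bucket, prefix, instance type, and hyperparameter values are assumptions, and the training and validation CSV files are assumed to already exist in S3 (with the target in the first column, as the built-in XGBoost container expects).

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside SageMaker; pass a role ARN elsewhere
region = session.boto_region_name

# Hypothetical bucket and prefix holding the prepared CSV data.
bucket = "my-redshift-ml-bucket"
prefix = "byom/xgboost-regression"

# Resolve the built-in SageMaker XGBoost container image for this region.
container = image_uris.retrieve("xgboost", region, version="1.5-1")

xgb = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{bucket}/{prefix}/output",
    sagemaker_session=session,
)

# Regression objective; num_round is a placeholder value.
xgb.set_hyperparameters(objective="reg:squarederror", num_round=100)

train_input = TrainingInput(f"s3://{bucket}/{prefix}/train/", content_type="text/csv")
validation_input = TrainingInput(f"s3://{bucket}/{prefix}/validation/", content_type="text/csv")

# Train and validate; SageMaker reports validation RMSE for this objective.
xgb.fit({"train": train_input, "validation": validation_input})

# The training job name is what CREATE MODEL's FROM clause references.
print(xgb.latest_training_job.name)
```

The printed training job name is what you would pass to the FROM clause of the CREATE MODEL statement shown earlier.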