Packaging, running, and monitoring a model using Seldon Core
In this section, you will package and build the container from the model file you built in Chapter 6, Machine Learning Engineering. You will then use a Seldon Deployment to deploy and access the model. Later in this book, you will automate this process, but doing it manually, as you will in this section, will further strengthen your understanding of the components and how they work together.
Before you start this exercise, please make sure that you have created an account with a public Docker registry. We will use the free quay.io as our registry, but you are free to use your preferred one:
- Let's first verify that MLflow and Minio (our S3 server) are running in our cluster:
kubectl get pods -n ml-workshop | grep -iE 'mlflow|minio'
You should see both the MLflow and Minio pods listed in the response.
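The `-iE` flags make the match case-insensitive and enable extended regular expressions, so the pattern `mlflow|minio` matches lines containing either pod name. As a rough sketch (the pod names and listing below are made up for illustration, not taken from a real cluster), you can see the filter in action on sample output:

```shell
# Made-up sample of a `kubectl get pods` listing, piped through the
# same filter used above. Only the mlflow and minio lines survive.
printf '%s\n' \
  'mlflow-abc123    1/1   Running   0   2d' \
  'minio-def456     1/1   Running   0   2d' \
  'jupyter-ghi789   1/1   Running   0   2d' \
  | grep -iE 'mlflow|minio'
```

If either pod is missing from the filtered output, revisit the earlier installation steps before continuing.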
- Get the ingress list for...