Like a website, a trained model needs to be served so that it can be used to make predictions that support business goals. Web and software serving is already mature, with sophisticated, widely agreed-upon tools and strategies. ML model serving, by contrast, is still growing, and new ideas and tools appear almost every day.
Model serving can be defined as bringing a model into production by deploying it to a serving location and exposing access points through which users can send data for prediction and receive the prediction results.
Model serving usually involves the following steps:
- Saving the trained model: The format in which the model needs to be saved can differ depending on the serving tool. Therefore, serving tools usually provide a save function that ensures the model is stored in the format the library expects.
Let’s use BentoML as an example. We’ll cover BentoML in more detail in Chapter 14, but in the following code snippet, taken from the BentoML official site, https://docs.bentoml.org/en/latest/tutorial.html, we can see that this popular model-serving library provides a save function for each of the ML frameworks it supports. When saving a model with BentoML, we have to call the save method of the appropriate framework. For example, if we have developed a model using sklearn, we need to call the bentoml.sklearn.save_model(<MODEL_NAME>, model) method to convert the model from sklearn format and save it in BentoML format:
import bentoml
from sklearn import svm
from sklearn import datasets
# Load training data set
iris = datasets.load_iris()
X, y = iris.data, iris.target
# Train the model
clf = svm.SVC(gamma='scale')
clf.fit(X, y)
# Save model to the BentoML local model store
saved_model = bentoml.sklearn.save_model("iris_clf", clf)
print(f"Model saved: {saved_model}")
# Model saved: Model(tag="iris_clf:zy3dfgxzqkjrlgxi")
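Once the model is saved, it can be loaded back from the local model store and used directly, which is a quick way to verify that the save step worked. The following is a minimal sketch; the sample input values are placeholders:
import bentoml
# Load the latest saved version back from the BentoML local model store
loaded_clf = bentoml.sklearn.load_model("iris_clf:latest")
# Run a quick sanity-check prediction on a single (placeholder) iris sample
print(loaded_clf.predict([[5.9, 3.0, 5.1, 1.8]]))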
We can see the list of ML frameworks BentoML currently supports in its GitHub code repository: https://github.com/bentoml/BentoML. At the time of writing, the supported frameworks are those shown in Figure 1.3.
Figure 1.3 – BentoML-supported frameworks. Some of these are still in the experimental phase
- Annotate the access points: In this stage, we usually create a service module containing a function that will be executed when a user requests a prediction. This function is annotated so that, after deployment to the model-serving tool, it is exposed via a REST API. BentoML uses a special file, called service.py, to define and annotate this function. For example, let’s look at the classify(..) method in the service.py file. It is annotated with svc.api(), and the input and output formats are also specified. The following service.py code is annotated with the service access point:
import numpy as np
import bentoml
from bentoml.io import NumpyNdarray
iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])
@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series: np.ndarray) -> np.ndarray:
    result = iris_clf_runner.predict.run(input_series)
    return result
- Deploy the saved model to a model-serving tool: In this stage, the model is stored or uploaded to a location needed by the library. Usually, the library takes care of this process behind the scenes and you just need to start the deployment by triggering a command. Sometimes, library-specific packaging is needed before deployment. For example, BentoML creates a special deployable package called a Bento. To build a Bento, you first need to create a bentofile.yaml file in the project directory, which provides the different parameters of the Bento. A sample bentofile.yaml file is shown in Figure 1.4, followed by a sketch of its typical contents.
Figure 1.4 – A sample bentofile.yaml file that needs to be created before building a Bento
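The exact contents depend on your project, but a minimal bentofile.yaml typically names the service to serve, the files to include, and the Python dependencies. The following sketch is illustrative only; the label and package entries are placeholders:
service: "service:svc"      # the Service object defined in service.py
labels:
  owner: ml-team            # placeholder metadata label
include:
  - "*.py"                  # project files to package into the Bento
python:
  packages:
    - scikit-learn          # dependencies installed inside the Bento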
After that, we can create a Bento using the bentoml build command from the command line. The command will build the Bento and you will see some messages in the console, as in Figure 1.5.
Figure 1.5 – Sample bentoml build output
Please keep in mind that running the bentoml build command in a directory containing a venv virtual environment can take a long time, because BentoML scans the whole directory before building. This example was run without creating a virtual environment.
Bentos are saved in a local directory. We can list them using the bentoml list command, as in Figure 1.6.
Figure 1.6 – All the Bentos can be seen using the bentoml list command
Then, from the console, we can run the bentoml serve <MODEL_NAME:TAG> --production command to serve the model. <TAG> should be replaced with the appropriate tag shown in Figure 1.6. A sketch of querying the running service follows.
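Once the model is being served, clients can send prediction requests over HTTP. The following sketch assumes the service from the earlier service.py is running locally on BentoML’s default port, 3000; the endpoint path matches the name of the annotated classify() function, and the sample input values are placeholders:
import requests
# Send one (placeholder) iris sample as a JSON array to the locally served model
response = requests.post(
    "http://127.0.0.1:3000/classify",
    headers={"content-type": "application/json"},
    data="[[5.9, 3.0, 5.1, 1.8]]",
)
print(response.text)  # the predicted class returned by the service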
- Version controlling of the model: The model-serving tool also takes care of version control behind the scenes for you. When a new version is uploaded, the APIs exposed by the model-serving tool use the latest model. For example, BentoML uses a tag to refer to each version. To serve the latest version, you can use <MODEL_NAME>:latest, which picks up the most recently saved version of that model. A short sketch of retrieving a model by tag follows.
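For illustration, the following sketch shows how a specific version can be retrieved by its full tag rather than the latest one; the concrete tag value is taken from the earlier save output and will differ in your environment:
import bentoml
# Retrieve the most recently saved version of the model
latest_model = bentoml.sklearn.get("iris_clf:latest")
# Or pin a specific version by its full tag (value copied from the save output above)
pinned_model = bentoml.sklearn.get("iris_clf:zy3dfgxzqkjrlgxi")
print(latest_model.tag, pinned_model.tag)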
In this section, we got a high-level understanding of model serving. In the next section, we will discuss the importance of model serving.