In this chapter, we learned how to leverage TensorFlow Serving to serve models in production environments. We also learned how to save and restore full models, or selected parts of models, using both TensorFlow and Keras. We built a Docker container and served the sample MNIST model from the official TensorFlow Serving repository inside that container. We also installed a local Kubernetes cluster and deployed the MNIST model so that it was served by TensorFlow Serving running in Kubernetes pods. We encourage the reader to build upon these examples and try serving different models. The TF Serving documentation describes various options and provides additional information that will let you explore this topic further.
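As a quick refresher, here is a minimal sketch of the save-and-serve workflow covered in the chapter. The model architecture and paths are illustrative stand-ins, not the chapter's exact code; the key detail is that TensorFlow Serving expects the SavedModel to live under a numbered version subdirectory and automatically serves the highest-numbered version it finds:

```python
import tensorflow as tf

# A trivial stand-in for the MNIST model built earlier in the chapter.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Export in the SavedModel format that TensorFlow Serving expects;
# the trailing "1" is the model version directory.
model.save('/tmp/mnist_model/1')

# The full model can be restored later in the same way.
restored = tf.keras.models.load_model('/tmp/mnist_model/1')

# The exported directory can then be mounted into the official
# tensorflow/serving Docker image, for example:
#   docker run -p 8501:8501 \
#     --mount type=bind,source=/tmp/mnist_model,target=/models/mnist \
#     -e MODEL_NAME=mnist -t tensorflow/serving
```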
In the coming chapters, we will continue our journey with advanced models using transfer learning. The pre-trained models available in the TensorFlow repository are...