Chapter 9: Serving a TensorFlow Model
By now, having worked through the previous chapters, you have seen many facets of the model building process in TensorFlow Enterprise (TFE). It is time to wrap up what we have done and look at how we can serve the model we have built. In this chapter, we are going to look at the fundamentals of serving a TensorFlow model through a RESTful API on localhost.

The easiest way to get started is with TensorFlow Serving (TFS). Out of the box, TFS is a system for serving machine learning models built with TensorFlow. Although it is not yet officially supported by TFE, you will see that it works with models built by TFE 2. TFS can run either as a standalone server or as a Docker container. For convenience, we are going to use a Docker container, as that is the easiest way to start using TFS regardless of your local environment, as long as you have a Docker engine available.

In this chapter, we will cover the following topics:
- Running Local...
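To preview where we are headed, the following is a minimal sketch of the kind of request a client sends to a model served over TFS's REST predict API. The model name (`my_model`) and the input shape are purely illustrative, and the default REST port 8501 is assumed; the snippet only builds and prints the URL and JSON body rather than sending them, since sending requires a running TFS container.

```python
import json

# Illustrative model name; TFS would serve it at the default REST port 8501.
MODEL_NAME = "my_model"
url = f"http://localhost:8501/v1/models/{MODEL_NAME}:predict"

# The v1 REST predict API expects a JSON body with an "instances" key
# holding a batch of inputs (here, one row of four hypothetical features).
payload = json.dumps({"instances": [[1.0, 2.0, 3.0, 4.0]]})

print(url)
print(payload)

# With a TFS container running, the request could be sent like this:
# import urllib.request
# req = urllib.request.Request(
#     url, data=payload.encode("utf-8"),
#     headers={"Content-Type": "application/json"})
# response = urllib.request.urlopen(req)
```

We will build up to this step by step: saving the model in the format TFS expects, launching the Docker container, and then scoring inputs through the endpoint.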