Introducing TensorFlow Serving
In this section, we will provide a high-level introduction to the TensorFlow Serving architecture and its key concepts. TensorFlow Serving is designed to provide a high-performance production environment for serving machine learning models. It offers default integration with TensorFlow models, but it can be extended to serve models built with other frameworks, such as scikit-learn. To learn more about integrating other libraries with TensorFlow Serving, please go to https://www.tensorflow.org/tfx/guide/non_tf. TensorFlow Serving makes it easy to deploy new models while keeping the server architecture and APIs unchanged, which simplifies managing model versions in production.
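As a minimal sketch of what this versioning workflow looks like, the following Python snippet exports a toy Keras model into the versioned directory layout that TensorFlow Serving watches. The model itself, the base path /tmp/my_model, and the version number are illustrative assumptions, not details taken from the text above.

import tensorflow as tf

# A small stand-in model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# TensorFlow Serving expects a versioned directory layout:
#   <model_base_path>/<version_number>/saved_model.pb
# Exporting to .../my_model/1 now and .../my_model/2 later lets the server
# pick up the new version without any change to the client-facing API.
tf.saved_model.save(model, "/tmp/my_model/1")

With this layout, a model server whose --model_base_path flag points at /tmp/my_model loads version 1 and automatically switches to version 2 once that directory appears, which is what makes rolling out new model versions in production straightforward.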
To understand the architecture of TensorFlow Serving, you first need to understand the following key concepts.
Servable
A servable is the abstraction for the underlying object that clients use to perform computation or inference. For example, if a client makes a prediction request...