In this chapter, we took a detailed look at the inference stage. We began by building a basic understanding of the end-to-end machine learning workflow and the main steps involved in each of its stages. We then examined the abstractions that come into play when transferring models from the training phase to the inference phase. Taking a closer look at the SavedModel format and the underlying dataflow model, we explored the different options available for building and exporting models. We also learned about useful features such as tf.function and tf.autograph, which enable us to build TensorFlow graphs using native Python code. In the latter half of the chapter, we learned how to build inference pipelines for running TensorFlow models in different environments, such as backend servers, web browsers, and even edge devices.
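As a brief reminder of how tf.function and AutoGraph fit together, the following minimal sketch (the function name and input values are illustrative, not from the chapter) shows a native Python function, complete with an `if` statement, being traced into a TensorFlow graph:

```python
import tensorflow as tf

# tf.function traces this Python function into a TensorFlow graph;
# AutoGraph rewrites the native Python `if` into graph control flow
# (a tf.cond) so the whole function can run as a single graph.
@tf.function
def scaled_relu(x):
    if tf.reduce_sum(x) > 0:
        return x * 2.0
    return tf.zeros_like(x)

result = scaled_relu(tf.constant([1.0, 2.0]))
print(result.numpy())  # [2. 4.]
```

A function decorated this way can then be exported in the SavedModel format and served from any of the environments discussed in this chapter.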
In...