Summary
In this chapter, you have seen how the three major sources of reusable model elements can integrate with a scalable data pipeline. Through the TensorFlow Datasets and TensorFlow I/O APIs, training data is streamed into the model training process. This means a model can be trained without the entire dataset having to fit into the compute node's memory.
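As a minimal sketch of what that streaming looks like in practice, the snippet below builds a tf.data pipeline that feeds batches to Keras training one step at a time; the TFRecord file pattern, feature names, and shapes are placeholder assumptions, not values from this chapter:

```python
import tensorflow as tf

# Placeholder file pattern: training data stored as sharded TFRecord files on disk.
files = tf.data.Dataset.list_files("data/train-*.tfrecord")
dataset = tf.data.TFRecordDataset(files)

# Assumed example schema: a 10-element feature vector and a scalar label.
feature_spec = {
    "features": tf.io.FixedLenFeature([10], tf.float32),
    "label": tf.io.FixedLenFeature([1], tf.float32),
}

def parse(example_proto):
    parsed = tf.io.parse_single_example(example_proto, feature_spec)
    return parsed["features"], parsed["label"]

train_ds = (
    dataset
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(1_000)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)  # overlap preprocessing with training
)

# model.fit consumes the dataset batch by batch, so only one batch at a time
# needs to reside in memory:
# model.fit(train_ds, epochs=5)
```

Because `prefetch` and `num_parallel_calls` are set to `tf.data.AUTOTUNE`, input preparation overlaps with the training step rather than blocking it.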
TensorFlow Hub sits at the highest level of model reusability. There you will find many open source models already built for consumption through a technique known as transfer learning. In this chapter, we built a regression model using the tf.keras API. Building a custom model this way is not a straightforward task: you will often spend a great deal of time experimenting with different model parameters and architectures. If your need can be met by a pre-built open source model, TensorFlow Hub is the place to look. However, even with these pre-built models, you still need to investigate the data structure required for the input layer.
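For illustration, the following sketch shows one way a pre-built TensorFlow Hub model can be reused with a small task-specific head; the MobileNetV2 feature-vector handle and the 224 x 224 x 3 input shape are example choices, and the input structure any given module actually expects must be confirmed in its documentation on tfhub.dev:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Example handle: an ImageNet-trained MobileNetV2 feature extractor.
handle = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5"

model = tf.keras.Sequential([
    # The input layer must match the structure the Hub module expects
    # (here: 224 x 224 RGB images, as documented for this module).
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    hub.KerasLayer(handle, trainable=False),  # reuse the pre-trained weights
    tf.keras.layers.Dense(1),                 # task-specific head (regression)
])

model.compile(optimizer="adam", loss="mse")
model.summary()
```

Freezing the Hub layer with `trainable=False` keeps the pre-trained weights fixed so that only the small Dense head is learned; setting it to `True` would fine-tune the whole model at a higher compute cost.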