Once the dataset objects have been created, transformed, shuffled, and batched, they need to be fed into a model (remember the L, for Load, in ETL from the beginning of this chapter). This step has changed significantly in TF 2.0.
One primary difference in input data pipeline creation in TF 2.0 is its simplicity. TF 1.x requires an iterator to feed a dataset to a model, and it provides several iterator types for iterating over batches of data through the tf.data.Iterator API: one-shot, initializable, re-initializable, and feedable iterators. While these iterators are powerful, they add a good amount of complexity, both in terms of understanding and of coding. Fortunately, TF 2.0 onward has simplified this to a great extent by making dataset objects Python-iterable...
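As a minimal sketch of this simplification, the snippet below builds a small pipeline (create, transform, shuffle, batch) and then iterates over it with a plain Python `for` loop, which is all TF 2.x requires; the specific dataset contents and transformations are illustrative choices, not from the text.

```python
import tensorflow as tf

# Create a dataset from in-memory values, then apply the
# transform -> shuffle -> batch steps described above.
dataset = tf.data.Dataset.from_tensor_slices(tf.range(10))
dataset = dataset.map(lambda x: x * 2)          # transform each element
dataset = dataset.shuffle(buffer_size=10)       # shuffle with a buffer
dataset = dataset.batch(4)                      # group elements into batches

# In TF 2.x the dataset object is directly Python-iterable:
# no tf.data.Iterator, no initializer ops, no session runs.
for batch in dataset:
    print(batch.numpy())
```

In TF 1.x the same loop would have required creating an iterator (for example, a one-shot iterator), fetching its `get_next()` op, and evaluating that op inside a session; in TF 2.x, eager execution lets the `for` loop yield concrete tensor batches directly.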