More generally, with the rise of big data some years ago, a wealth of literature, frameworks, and best practices has emerged, offering new solutions for processing and serving huge amounts of data in all kinds of applications. The tf.data API was built by TensorFlow developers with those frameworks and practices in mind, to provide a clear and efficient way to feed data to neural networks. More precisely, the goal of this API is to define input pipelines that can deliver the data for the next step before the current step has finished (refer to the official API guide at https://www.tensorflow.org/guide/performance/datasets). A minimal sketch of such a pipeline follows.
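As a hedged illustration (assuming TensorFlow 2.x; the dummy data and the doubling step are placeholders, not part of the original text), the final prefetch() call is what lets the pipeline prepare the next batch while the current one is still being consumed by the training step:

```python
import tensorflow as tf

# Placeholder data source; in practice this would read files or records.
dataset = tf.data.Dataset.from_tensor_slices(tf.range(1000))

dataset = (dataset
           # Placeholder preprocessing, applied in parallel across CPU cores:
           .map(lambda x: x * 2,
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
           # Group samples into batches for training:
           .batch(32)
           # Prepare the next batch(es) while the current step runs:
           .prefetch(tf.data.experimental.AUTOTUNE))

for batch in dataset.take(2):
    print(batch.shape)  # (32,)
```

With prefetching in place, data preparation and model execution overlap instead of alternating, which is precisely the "deliver data for the next step before the current step has finished" behavior the guide describes.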
As explained in several online presentations by Derek Murray, one of the Google engineers working on TensorFlow (one of his presentations was recorded on video and is available at https://www.youtube.com/watch?v=uIcqeP7MFH0), pipelines built with the tf.data API are comparable to lazy lists in functional languages...
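This laziness can be observed directly: building a Dataset triggers no computation, and elements are only produced when they are actually requested during iteration. A small sketch of this behavior (the gen function below is a hypothetical example, not from the original text):

```python
import tensorflow as tf

def gen():
    for i in range(5):
        print(f"producing {i}")  # runs only when the element is pulled
        yield i

# Constructing the dataset computes nothing yet, like building a lazy list:
dataset = tf.data.Dataset.from_generator(gen, output_types=tf.int32)

# Only the requested elements are ever produced:
for element in dataset.take(2):
    print(element.numpy())
```

Nothing is printed until the for loop starts pulling elements, and the remaining elements of the generator are never produced at all.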