It is also useful to know that tf.data offers functions that combine some key operations for greater performance or more reliable results.
For example, tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed) fuses together the shuffling and repeating operations, making it easy to have datasets shuffled differently at each epoch (refer to the documentation at https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/experimental/shuffle_and_repeat).
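As a minimal sketch, such fused transformations are passed to a dataset's .apply() method. Here, a toy integer dataset (the values and buffer size are illustrative placeholders) is shuffled differently for each of three epochs; fixing the seed keeps the run reproducible:

```python
import tensorflow as tf

# Toy dataset of six elements (placeholder for real training samples):
dataset = tf.data.Dataset.range(6)

# Fuse shuffling and repeating; each of the 3 epochs is reshuffled.
# With buffer_size equal to the dataset size, every epoch is a full
# permutation of the elements.
dataset = dataset.apply(
    tf.data.experimental.shuffle_and_repeat(buffer_size=6, count=3, seed=42))

for epoch_batch in dataset.batch(6):
    print(epoch_batch.numpy())  # one shuffled epoch per line
```

Note that because shuffling is fused with repetition, the dataset yields buffer_size x count elements in total before exhaustion.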
Back to optimization matters, tf.data.experimental.map_and_batch(map_func, batch_size, num_parallel_batches, ...) (refer to the documentation at https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/experimental/map_and_batch) applies the map_func function and then batches the results together. By fusing these two operations, this solution avoids some computational overhead and should therefore be preferred over chaining .map() and .batch() separately.
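A short sketch of this fused transformation follows; the preprocessing function here is a toy placeholder (real pipelines would, for instance, decode and augment images):

```python
import tensorflow as tf

def preprocess(x):
    # Placeholder preprocessing: cast to float and rescale.
    return tf.cast(x, tf.float32) / 6.0

dataset = tf.data.Dataset.range(6)

# Fuse mapping and batching into a single operation, letting tf.data
# tune the number of parallel calls automatically:
dataset = dataset.apply(
    tf.data.experimental.map_and_batch(
        preprocess, batch_size=3,
        num_parallel_calls=tf.data.experimental.AUTOTUNE))

for batch in dataset:
    print(batch.numpy())  # batches of 3 preprocessed values
```

Only one of num_parallel_batches and num_parallel_calls may be set; the latter accepts tf.data.experimental.AUTOTUNE to delegate the choice to the runtime.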