While measuring the total inference time of a model tells you whether an application is feasible, you may sometimes need a more detailed performance report. TensorFlow offers several tools for this purpose. In this section, we will discuss the trace tool, which is part of the TensorFlow summary package.
In Chapter 7, Training on Complex and Scarce Datasets, we described how to analyze the performance of input pipelines. Refer to that chapter to monitor preprocessing and data ingestion performance.
To use it, call tf.summary.trace_on with profiler set to True. You can then run TensorFlow or Keras operations and export the trace to a folder:
logdir = './logs/model'
writer = tf.summary.create_file_writer(logdir)
# Start collecting profiling information
tf.summary.trace_on(profiler=True)
# Run the operations to profile
model.predict(train_images)
# Stop tracing and write the collected data to the log folder
with writer.as_default():
    tf.summary.trace_export('trace-model', profiler_outdir=logdir)
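Once the trace has been exported, it can be inspected in TensorBoard. A minimal sketch, assuming the same './logs/model' folder as in the preceding code, would be:

```shell
# Launch TensorBoard pointing at the log directory used above,
# then open the profiling view in the browser UI:
tensorboard --logdir ./logs/model
```
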
Omitting the call to create_file_writer and with...