Another key idea in TensorFlow is deferred execution. During the building phase you can compose very complex expressions in the computational graph (we say it is highly compositional); when you evaluate them during the running session phase, TensorFlow schedules the computation in the most efficient manner, for example by executing independent parts of the graph in parallel on the GPU.
In this way, a graph helps to distribute the computational load when dealing with complex models that contain a large number of nodes and layers.
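The build-then-run model can be sketched in plain Python. The `Node` class below is a toy analogue, not TensorFlow's actual API: building a `Node` only records the expression, and nothing is computed until `run()` is called, at which point a real engine could schedule independent subgraphs in parallel.

```python
import operator

class Node:
    """A toy deferred-execution node: stores an operation and its inputs."""
    def __init__(self, op, *inputs):
        self.op = op          # callable applied at run time
        self.inputs = inputs  # upstream nodes or plain constants

    def run(self):
        # Evaluate inputs first (sequentially here; an engine could
        # evaluate independent inputs in parallel, as TensorFlow does)
        args = [i.run() if isinstance(i, Node) else i for i in self.inputs]
        return self.op(*args)

# Build phase: compose the expression (a + b) * (a - b); nothing executes yet
a, b = 5, 3
add = Node(operator.add, a, b)   # independent of sub
sub = Node(operator.sub, a, b)   # independent of add
mul = Node(operator.mul, add, sub)

# Run phase: the whole graph is evaluated on demand
print(mul.run())  # (5 + 3) * (5 - 3) = 16
```

Note that `add` and `sub` share no dependencies, so a scheduler is free to evaluate them concurrently; `mul` must wait for both.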
Finally, a neural network can be viewed as a composite function, where each network layer is itself a function.
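The layer-as-function view can be made concrete with a minimal sketch (the layer names, weights, and activation below are illustrative, not taken from any particular network):

```python
import math

def layer1(x):
    # First "layer": an affine transform followed by a nonlinearity
    return math.tanh(2.0 * x + 1.0)

def layer2(h):
    # Second "layer": another affine transform
    return 0.5 * h - 0.25

def network(x):
    # The whole network is just the composition layer2(layer1(x))
    return layer2(layer1(x))

print(network(0.0))  # equals 0.5 * tanh(1.0) - 0.25
```

Stacking more layers simply deepens the composition, which is exactly the nested structure a computational graph captures.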
This consideration leads us to the next section, where the role of the computational graph in implementing a neural network is explained.