As we introduced in Chapter 3, TensorFlow Graph Architecture, TensorFlow works by building a computational graph first and then executing it. In TensorFlow 2.0, this graph definition is hidden and simplified: definition and execution can be mixed, and the flow of execution always matches the order of the statements in the source code, so there is no need to worry about the order of execution in 2.0.
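As a minimal sketch of what this means in practice (the values here are illustrative), in TensorFlow 2.0 an operation runs as soon as it is written, with no separate graph-construction and session-execution phases:

```python
import tensorflow as tf  # TensorFlow 2.x

# Eager execution: each operation runs as soon as it is defined,
# in the same order it appears in the source code.
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
product = tf.matmul(a, b)  # executed immediately, no session needed
print(product.numpy())     # [[11.]]
```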
Prior to the 2.0 release, developers had to design both the graph and the source code by reasoning through questions such as the following:
- How can I define the graph? Is my graph composed of multiple layers that are logically separated? If so, I have to define every logical block inside a different tf.variable_scope.
- During the training or inference phase, do I have to use a part of the graph more than once in the same execution step? If so, I have to define this part inside a tf.variable_scope, creating the variables the first time and reusing them on every subsequent use, as in the sketch after this list.
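To make the contrast concrete, here is a minimal sketch of that 1.x pattern, assuming the TensorFlow 1.x API (available as tf.compat.v1 in 2.x); the dense_block helper and its shapes are illustrative assumptions, not code from the book:

```python
import tensorflow as tf  # TensorFlow 1.x API (tf.compat.v1 in 2.x)

def dense_block(x):
    # tf.get_variable creates the variable on the first call; when
    # reuse is enabled on the enclosing scope, it returns the same
    # variable instead of creating a new one.
    w = tf.get_variable(
        "w", shape=(2, 2), initializer=tf.glorot_uniform_initializer())
    return tf.matmul(x, w)

x1 = tf.placeholder(tf.float32, shape=(None, 2))
x2 = tf.placeholder(tf.float32, shape=(None, 2))

with tf.variable_scope("dense"):
    out1 = dense_block(x1)  # first use: the variable dense/w is created
with tf.variable_scope("dense", reuse=True):
    out2 = dense_block(x2)  # second use: dense/w is shared, not duplicated
```

Forgetting reuse=True on the second scope raises an error, which is exactly the kind of graph-design bookkeeping that eager execution in 2.0 removes.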