TensorFlow models were originally defined using the tf1.layers module. As this module has been deprecated in TensorFlow 2, the replacement of choice is tf.keras.layers. To train a model with TensorFlow 1, a training operation has to be defined from an optimizer and a loss. For instance, if y holds the logits output by a fully connected layer and y_true is a placeholder for the ground-truth labels, we would define the training operation with the following commands:
cross_entropy = tf1.reduce_mean(tf1.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=y))
train_step = tf1.train.AdamOptimizer(1e-3).minimize(cross_entropy)
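For reference, a minimal sketch of the graph these two lines assume could look as follows; the placeholder names (x, y_true), the input and label shapes, and the single dense layer are illustrative assumptions rather than part of the original listing:

import tensorflow.compat.v1 as tf1
tf1.disable_v2_behavior()  # keep the graph/session workflow when running on TensorFlow 2

# Placeholders for a batch of flattened 28x28 images and their one-hot labels:
x = tf1.placeholder(tf1.float32, shape=[None, 784])
y_true = tf1.placeholder(tf1.float32, shape=[None, 10])

# Logits produced by a single fully connected layer, defined with the deprecated
# tf1.layers API (tf.keras.layers.Dense is its TensorFlow 2 replacement):
y = tf1.layers.dense(x, units=10)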
Every time we run this operation in a session, a batch of images is fed to the network and a single step of backpropagation is performed. We then run a loop to execute multiple training steps:
num_steps = 10**7
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    for i in range(num_steps):
        batch_x, batch_y = next(batch_generator)
        sess.run(train_step, feed_dict={x: batch_x, y_true: batch_y})
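This loop assumes a batch_generator that yields (images, labels) pairs indefinitely. A minimal sketch of such a generator, built on hypothetical NumPy arrays train_images and train_labels holding the dataset, could be:

import numpy as np

def make_batch_generator(images, labels, batch_size=64):
    # Yield (batch_x, batch_y) pairs forever, reshuffling the dataset after each epoch.
    num_samples = images.shape[0]
    while True:
        indices = np.random.permutation(num_samples)
        for start in range(0, num_samples - batch_size + 1, batch_size):
            batch = indices[start:start + batch_size]
            yield images[batch], labels[batch]

# train_images / train_labels are hypothetical arrays of shape [N, 784] and [N, 10]:
batch_generator = make_batch_generator(train_images, train_labels)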
When...