In this section, we will design our model architecture. We are going to use the architecture proposed by NVIDIA for behavioral cloning.
- The code for building the NVIDIA architecture for behavioral cloning is shown in the following code block. We use Adam as the optimizer and mean squared error (MSE) as the loss, since the output is a steering angle and this is a regression problem. The Exponential Linear Unit (ELU) is used as the activation function: unlike ReLU, ELU takes on small negative values for negative inputs, which pushes mean activations closer to zero, tends to speed up convergence, and helps mitigate the vanishing gradient problem. You can read more about ELU at https://arxiv.org/abs/1511.07289. Let's get started:
import tensorflow as tf

def nvidia_model():
    model = tf.keras.Sequential()
    # Strided 5x5 convolutions from the NVIDIA architecture
    model.add(tf.keras.layers.Conv2D(24, (5, 5), strides=(2, 2), input_shape=(66, 200, 3), activation='elu'))
    model.add(tf.keras.layers.Conv2D(36, (5, 5), strides=(2, 2), activation='elu'))
    model.add(tf.keras.layers.Conv2D(48, (5, 5), strides=(2, 2), activation='elu'))
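    # NOTE: the excerpt above is truncated; the layers below are a sketch of the
    # remainder of the NVIDIA end-to-end driving architecture (two 3x3 convolutions,
    # a flatten, and four fully connected layers), compiled with the Adam optimizer
    # and MSE loss as described above. The learning rate of 1e-3 is an assumption.
    model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
    model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(100, activation='elu'))
    model.add(tf.keras.layers.Dense(50, activation='elu'))
    model.add(tf.keras.layers.Dense(10, activation='elu'))
    # Single output neuron: the predicted steering angle (regression output)
    model.add(tf.keras.layers.Dense(1))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='mse')
    return model

We can then build the model and inspect the layer output shapes:

model = nvidia_model()
model.summary()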