An example of a DCNN: LeNet
Yann LeCun, a Turing Award winner, proposed [1] a family of ConvNets named LeNet, trained to recognize MNIST handwritten characters with robustness to simple geometric transformations and distortions. The core idea of LeNet is to alternate convolution and max-pooling operations in the lower layers. The convolutions use carefully chosen local receptive fields with weights shared across multiple feature maps. The higher layers are then fully connected, forming a traditional MLP with hidden layers and a softmax output layer.
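Before walking through the code in detail, the alternating convolution/pooling structure described above can be sketched end to end. This is a minimal LeNet-style sketch; the specific layer sizes (20 and 50 filters, 500 hidden units) are illustrative assumptions, not necessarily the exact values used in the full listing:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lenet(input_shape=(28, 28, 1), num_classes=10):
    """A minimal LeNet-style sketch: conv/pool lower layers, MLP on top.
    Layer sizes here are illustrative assumptions."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Lower layers: convolution alternating with max-pooling
        layers.Conv2D(20, (5, 5), activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(50, (5, 5), activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        # Higher layers: a traditional fully connected MLP
        layers.Flatten(),
        layers.Dense(500, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),
    ])

model = build_lenet()
model.summary()
```

The sections that follow build up exactly this kind of model one layer at a time.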
LeNet code in TF
To define a LeNet in code, we use a convolutional 2D module (note that tf.keras.layers.Conv2D is an alias of tf.keras.layers.Convolution2D, so the two can be used interchangeably – see https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D):
layers.Convolution2D(20, (5, 5), activation='relu', input_shape=input_shape)
where the first...