CNNs offer simple solutions to these shortcomings. They work in the same way as the networks introduced previously (with feed-forward computation and backpropagation training), but their architecture includes a few clever changes.
First of all, CNNs can handle multidimensional data. For images, a CNN takes three-dimensional data (height × width × depth) as input and arranges its own neurons in a similar volume (refer to Figure 3.1). This leads to the second novelty of CNNs: unlike in fully connected networks, where each neuron is connected to every element of the previous layer, each neuron in a CNN is connected only to the elements in a neighboring region of the previous layer. This region (usually square and spanning all channels) is called the receptive field of the neuron (or the filter size):
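To make the notion of a receptive field concrete, here is a minimal NumPy sketch (not the book's code) of a single convolutional filter sliding over an H × W × D input. Each output value depends only on the k × k × D region directly beneath the filter, spanning all channels; the function and variable names are illustrative:

```python
import numpy as np

def conv2d_single_filter(image, kernel):
    """Slide one k x k x D filter over an H x W x D image
    (no padding, stride 1), returning a 2D response map."""
    h, w, d = image.shape
    k = kernel.shape[0]
    assert kernel.shape == (k, k, d)
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            # The receptive field: a local k x k patch, all D channels.
            receptive_field = image[i:i + k, j:j + k, :]
            out[i, j] = np.sum(receptive_field * kernel)
    return out

# Hypothetical example: a 5 x 5 RGB image and a 3 x 3 averaging filter.
image = np.arange(5 * 5 * 3, dtype=float).reshape(5, 5, 3)
kernel = np.ones((3, 3, 3)) / 27.0
response = conv2d_single_filter(image, kernel)
print(response.shape)  # (3, 3)
```

Note how the output is smaller than the input (no padding) and how each neuron's computation touches only 3 × 3 × 3 = 27 input values, rather than all 75, which is exactly the local connectivity described above.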