Densely connected convolutional networks (DenseNet)
![](https://static.packt-cdn.com/products/9781788629416/graphics/B08956_02_11.jpg)
Figure 2.4.1: A 4-layer Dense block in DenseNet. The input to each layer is made of all the previous feature maps.
DenseNet attacks the problem of vanishing gradients using a different approach. Instead of using shortcut connections, all of the previous feature maps become the input of the next layer. The preceding figure shows an example of dense interconnection in one Dense block.
For simplicity, this figure shows only four layers. Notice that the input to layer l is the concatenation of all previous feature maps. If we designate BN-ReLU-Conv2D as the operation H(x), then the output of layer l is:
$x_l = H(x_0, x_1, x_2, \dots, x_{l-1})$ (Equation 2.4.1)
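To make Equation 2.4.1 concrete, the following is a minimal sketch of a Dense block, assuming the tf.keras functional API; the function names H and dense_block are illustrative, not from the original text, and k is the growth rate defined just below:

```python
# Minimal sketch of a 4-layer Dense block (Equation 2.4.1), assuming
# the tf.keras functional API. Names H and dense_block are illustrative.
from tensorflow.keras.layers import (Activation, BatchNormalization,
                                     Concatenate, Conv2D)

def H(x, k):
    """The composite operation BN-ReLU-Conv2D producing k feature maps."""
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # 'same' padding preserves spatial size so feature maps can be concatenated
    x = Conv2D(k, kernel_size=3, padding='same')(x)
    return x

def dense_block(x, num_layers=4, k=12):
    """Each new layer sees the concatenation of all preceding feature maps."""
    for _ in range(num_layers):
        y = H(x, k)                # x_l = H(x_0, x_1, ..., x_{l-1})
        x = Concatenate()([x, y])  # grow the stack of feature maps by k
    return x
```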
Conv2D uses a kernel size of 3. The number of feature maps generated per layer is called the growth rate, k. Normally, k = 12, but k = 24 is also used in the paper, Densely Connected Convolutional Networks by Huang and others, 2017 [5]. Therefore, if the number of feature maps of the input $x_0$ is $k_0$, then the total number of feature maps at the end of the 4-layer Dense block in Figure 2.4.1 will be $4 \times k + k_0$.
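As a quick sanity check of this count, the hypothetical dense_block sketch above can be wired into a model and its output channels inspected; the input shape here is arbitrary:

```python
# Verify the feature-map arithmetic: with k_0 = 24 input maps and k = 12,
# a 4-layer Dense block ends with 4 * k + k_0 = 72 feature maps.
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

inputs = Input(shape=(32, 32, 24))             # k_0 = 24 input feature maps
outputs = dense_block(inputs, num_layers=4, k=12)
print(Model(inputs, outputs).output_shape)     # (None, 32, 32, 72)
```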