Coding the blocks of FC-DenseNet
DenseNet is very flexible, so you can easily configure it in many ways. However, depending on your hardware, you might hit your GPU's memory limits. The following are the values that I used on my machine, but feel free to change them to achieve better accuracy, to reduce memory consumption, or to shorten training time:
- Input and output resolution: 160 × 160
- Growth rate (number of channels added by each convolutional layer in a dense block): 12
- Number of dense blocks: 11 (5 in the downsampling path, 1 transition between the downsampling and upsampling paths, and 5 in the upsampling path)
- Number of convolutional blocks in each dense block: 4
- Batch size: 4
- Bottleneck layer in the dense blocks: No
- Compression factor: 0.6
- Dropout: Yes, 0.2
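The settings above can be collected into a small configuration block so that the rest of the code refers to them by name. This is only a sketch: the variable names here are illustrative, not the ones used in the book's repository.

```python
# Hyperparameters for FC-DenseNet (variable names are illustrative).
IMG_SIZE = 160            # input/output resolution: 160 x 160
GROWTH_RATE = 12          # channels added by each conv layer in a dense block
NUM_DENSE_BLOCKS = 11     # 5 down + 1 transition + 5 up
LAYERS_PER_BLOCK = 4      # convolutional blocks in each dense block
BATCH_SIZE = 4
USE_BOTTLENECK = False    # no 1x1 bottleneck convolution inside dense blocks
COMPRESSION = 0.6         # channel compression factor in transition layers
DROPOUT_RATE = 0.2
```

Keeping these in one place makes it easy to scale the network up or down when you run out of GPU memory.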
We will define some functions that you can use to build FC-DenseNet; as usual, you are invited to check out the full code on GitHub.
The first function just defines a convolution with batch normalization:
def dn_conv...
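The truncated signature above might be completed along the following lines. This is a sketch, assuming the Keras functional API (`tensorflow.keras`); it follows the usual FC-DenseNet layer ordering of batch normalization, ReLU, convolution, and optional dropout, but the argument names and exact structure of the book's `dn_conv` may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers


def dn_conv(inputs, num_filters, kernel_size, dropout_rate=0.2):
    """Batch normalization followed by a convolution (illustrative sketch).

    Applies BN -> ReLU -> Conv2D -> Dropout, the layer ordering commonly
    used inside FC-DenseNet dense blocks.
    """
    net = layers.BatchNormalization()(inputs)
    net = layers.ReLU()(net)
    # 'same' padding preserves the spatial resolution; the bias is
    # redundant right after batch normalization, so it is disabled.
    net = layers.Conv2D(num_filters, kernel_size,
                        padding='same', use_bias=False)(net)
    if dropout_rate:
        net = layers.Dropout(dropout_rate)(net)
    return net
```

With a growth rate of 12, each call to this function inside a dense block would add 12 channels to the block's concatenated output.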