Introduction to CNNs
Just to refresh your memory, FNNs are fully connected neural networks in which every neuron in one layer is connected to every neuron in the next layer, and so on (Figure 5.1). Each edge, or connection, has a weight that is either initialized randomly or derived from domain knowledge, and is ultimately learned during model training. Each neuron multiplies its incoming values by the corresponding weights, sums the results, and adds a bias; an activation function then determines whether, and how strongly, that output is passed on to the next layer. This process repeats layer by layer until the final output layer, which has one or more neurons, depending on the type of learning and on whether the model is generating a prediction or a classification. FNNs work well for structured data, where the input consists of features and samples:
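To make the forward pass concrete, here is a minimal sketch of a small FNN in NumPy. The layer sizes, random initialization, and ReLU activation are illustrative assumptions, not values from the text; the point is the weighted sum, bias, and activation at each layer.

```python
import numpy as np

def relu(x):
    # Activation function: keeps positive values, zeroes out negatives
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A small FNN: 3 input features -> 4 hidden neurons -> 1 output neuron.
# Weights start random (they would be learned during training); biases start at zero.
W1 = rng.normal(size=(3, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

x = np.array([0.5, -1.2, 3.0])   # one sample of structured data (3 features)

h = relu(x @ W1 + b1)            # hidden layer: weighted sum + bias, then activation
y = relu(h @ W2 + b2)            # output layer repeats the same computation
print(y.shape)                   # a single output neuron -> shape (1,)
```

Because every neuron in one layer feeds every neuron in the next, each layer's computation reduces to one matrix multiplication plus a bias vector, which is why FNNs map so naturally onto tabular, feature-based input.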
Figure...