Implementing an MLP from scratch
Today, the process of creating a neural network and its layers, along with backpropagation, is encapsulated in deep learning frameworks. Differentiation is automated, so there is no need to define the derivative formulas manually. Removing the abstraction layer provided by deep learning libraries will help solidify your understanding of neural network internals. So, let's build this neural network manually and explicitly, writing the logic for the forward pass and the backward pass ourselves instead of relying on a deep learning library:
- We’ll start by importing `numpy` and the methods from the scikit-learn library that we’ll use to load sample datasets and perform data partitioning:

```python
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
```
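As a quick illustration of how these imports might be used, here is a minimal sketch that loads a scikit-learn sample dataset and splits it into training and test sets; the choice of the Iris dataset and the 80/20 split are assumptions for illustration, not prescribed by this recipe:

```python
# Illustrative sketch (assumed dataset and split ratio)
iris = datasets.load_iris()
X, y = iris.data, iris.target  # features and integer class labels

# Hold out 20% of the samples for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # e.g. (120, 4) (30, 4)
```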
- Next, we define ReLU, the activation function that makes the MLP non-linear:
```python
def ReLU(x):
    return np.maximum(x, 0)
```
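Since the backward pass will also need the derivative of each activation, it can help to keep ReLU's gradient in mind. The helper below is only an illustrative sketch; the name `ReLU_derivative` is not from the original text:

```python
def ReLU_derivative(x):
    # Gradient of ReLU: 1 where the input is positive, 0 elsewhere
    return np.where(x > 0, 1.0, 0.0)
```

During backpropagation, this mask is multiplied element-wise with the gradient flowing back from the layer above.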
- Now, let’s define...