Now that you know how to build a basic neural network, let's go through the purpose of some of the elements of your model. One of those elements was the sigmoid, which is an activation function. Activation functions are sometimes also called transfer functions.
As you have learned previously, a given layer can be defined simply as weights applied to inputs, plus a bias, followed by an activation. An activation function decides whether a neuron fires. We also put it into the network to help create more complex relationships between input and output. At the same time, it needs to be a function that works with our backpropagation, so that we can easily optimize our weights via an optimization method (that is, gradient descent). This means that the function needs to be differentiable.
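As a minimal sketch of these ideas, the following code computes a single neuron's output (weights applied to inputs, plus a bias, passed through the sigmoid) along with the sigmoid's derivative, which is what backpropagation uses. The specific input, weight, and bias values here are illustrative only:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # The derivative can be written in terms of the sigmoid itself:
    # sigma'(x) = sigma(x) * (1 - sigma(x)).
    # This differentiability is what lets gradient descent
    # propagate error signals back through the layer.
    s = sigmoid(x)
    return s * (1.0 - s)

# A single neuron: weights applied to inputs, plus a bias,
# then an activation decides the output.
inputs = np.array([0.5, -1.2, 3.0])   # example inputs
weights = np.array([0.4, 0.7, -0.2])  # example weights
bias = 0.1

z = np.dot(weights, inputs) + bias  # weighted sum plus bias
output = sigmoid(z)                 # activation of the neuron

print(output)                # always strictly between 0 and 1
print(sigmoid_derivative(z)) # gradient used during backpropagation
```

Because the sigmoid's derivative is expressed in terms of its own output, a framework only needs the forward-pass value to compute the gradient, which keeps backpropagation cheap.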
There are a few things to consider when choosing an...