It is important to note that neural nets use non-linear activation functions, that is, functions applied to the weighted sum of a neuron's inputs. With purely linear activations, a stack of layers collapses into a single linear transformation, so such a network can only learn input-to-output mappings for trivial (linear) problems.
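The short sketch below (using NumPy and arbitrary random weights, chosen only for illustration) demonstrates this collapse: two stacked linear layers with no activation between them compute exactly the same function as one linear layer.

```python
import numpy as np

# Two linear layers with no non-linearity between them collapse into one.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # batch of 4 inputs, 3 features each
W1 = rng.normal(size=(3, 5))       # first layer weights
W2 = rng.normal(size=(5, 2))       # second layer weights

two_layer = (x @ W1) @ W2          # layer 1 then layer 2, no activation
one_layer = x @ (W1 @ W2)          # a single equivalent linear layer

print(np.allclose(two_layer, one_layer))  # True: no added expressive power
```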
There are a number of activation functions in common use, including sigmoid, tanh, ReLU, and leaky ReLU. A good summary, together with diagrams of these functions, may be found here: https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6.
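As a quick reference, here is a minimal NumPy sketch of these four activation functions, each applied element-wise to the weighted input z (the leaky ReLU slope of 0.01 is the conventional default, not a required value).

```python
import numpy as np

def sigmoid(z):
    # Squashes z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes z into (-1, 1)
    return np.tanh(z)

def relu(z):
    # Zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Like ReLU, but with a small slope alpha for negative inputs
    return np.where(z > 0, z, alpha * z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, f(z))
```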