The group of models that we call artificial NNs are universal approximators; in other words, they can approximate the behavior of any other function of interest to arbitrary precision. Here, I mean functions in the mathematical sense, as opposed to the computer science one: functions that take a real-valued input vector and return a real-valued output vector. This property holds for the feed-forward NNs we'll discuss in this chapter. In the following chapters, we'll see networks that map an input tensor (multidimensional array) to an output tensor, as well as networks that take their own outputs as input.
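To make this concrete, here is a minimal sketch (my own illustration, not a definitive implementation) of a feed-forward NN viewed as a mathematical function: it maps a real-valued input vector to a real-valued output vector. The layer sizes and random weights are arbitrary choices for the example.

```python
import numpy as np

def feed_forward(x, W1, b1, W2, b2):
    """A tiny one-hidden-layer network: a function from R^3 to R^2."""
    h = np.tanh(W1 @ x + b1)  # hidden layer: affine transform + nonlinearity
    return W2 @ h + b2        # output layer: affine transform

# Arbitrary parameters, just to show the mapping between vector spaces
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])       # real-valued input vector in R^3
y = feed_forward(x, W1, b1, W2, b2)  # real-valued output vector in R^2
print(y.shape)  # (2,)
```

With enough hidden units, such a network can approximate a wide class of target functions, which is what the universal approximation property refers to.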
We can think of a feed-forward NN as a directed acyclic graph, where each neuron is a node. Each such node takes some input and produces some output. Modern NNs are only loosely inspired by the biological brain. If you want...
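As a hedged sketch of what a single node in this graph computes (the function and weights here are illustrative assumptions, not taken from the chapter), a neuron forms a weighted sum of its inputs and passes it through an activation function:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One node: weighted sum of inputs plus bias, then an activation.
    tanh is just one common choice of activation function."""
    return np.tanh(np.dot(weights, inputs) + bias)

# Example: two inputs feeding into a single node
out = neuron(np.array([1.0, 2.0]), np.array([0.5, -0.25]), 0.1)
print(out)  # tanh(0.5 - 0.5 + 0.1) = tanh(0.1)
```

Connecting many such nodes, with the output of one serving as the input of the next and no cycles allowed, yields the directed acyclic graph structure of a feed-forward network.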