The classic multilayer perceptron (MLP), or artificial neural network (ANN), without any hidden units in its topology is only capable of correctly solving linearly separable problems. As a result, such ANN configurations cannot be used for pattern recognition or for control and optimization tasks. However, with more complex MLP architectures that include hidden units with a non-linear activation function (such as the sigmoid), it is possible to approximate any function to a given accuracy. Thus, a non-linearly separable problem can be used to study whether a neuroevolution process can grow the necessary hidden units in the ANN of the solver phenotype.
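To make the role of the hidden layer concrete, the following minimal sketch (not part of the experiment code; the weights are hand-picked for illustration rather than learned or evolved) shows a 2-2-1 MLP with sigmoid activations that computes XOR, something a perceptron without hidden units cannot do:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def xor_mlp(x1, x2):
    # Hidden unit 1 approximates OR, hidden unit 2 approximates AND;
    # the large weights push the sigmoid towards a step-like response.
    h1 = sigmoid(20.0 * x1 + 20.0 * x2 - 10.0)   # ~ x1 OR x2
    h2 = sigmoid(20.0 * x1 + 20.0 * x2 - 30.0)   # ~ x1 AND x2
    # The output unit combines them as (OR) AND NOT (AND), which is XOR.
    return sigmoid(20.0 * h1 - 20.0 * h2 - 10.0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{a} XOR {b} -> {xor_mlp(a, b):.3f}")
```

Running the sketch prints values close to 0 for the (0, 0) and (1, 1) inputs and close to 1 for the mixed inputs, illustrating that a single non-linear hidden layer is sufficient to separate the XOR classes.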
The XOR problem is a classic computer science experiment in the field of reinforcement learning that cannot be solved without introducing non-linear execution into the solver algorithm...