In this chapter, we began by discussing the different methods used to train artificial neural networks and considered how traditional gradient descent-based methods differ from neuroevolution-based ones. Then, we presented one of the most popular neuroevolution algorithms, NEAT, along with two of its extensions, HyperNEAT and ES-HyperNEAT. Finally, we described the Novelty Search optimization method, which can find solutions to a variety of deceptive problems that cannot be solved by conventional objective-based search methods. Now, you are ready to put this knowledge into practice after setting up the necessary environment, which we will discuss in the next chapter.
In the next chapter, we will cover the libraries available for experimenting with neuroevolution in Python. We will also demonstrate how to set up a working environment...