Deep learning is a field undergoing intense research activity. Among other efforts, researchers are devoted to inventing new neural network architectures that can either tackle new problems or improve the performance of existing ones. In this section, we study both established and newer architectures.
Established architectures have been applied to a wide array of problems and are generally the right choice when starting a new project. Newer architectures have shown great success on specific problems but are harder to generalize; they are interesting as references for what to explore next, yet they are rarely a good first choice for a new project.