Introducing catastrophic forgetting
Catastrophic forgetting was initially identified as a problem in (deep) neural networks. Deep neural networks are highly complex machine learning models that, thanks to this complexity, are able to learn very intricate patterns. Of course, this is the case only when there is enough data.
Neural networks have been studied for multiple decades. They used to be mathematically interesting but practically infeasible to train due to a lack of computing power. Recent progress in computing power has made it possible for neural networks to reach the popularity they enjoy today.
The complexity of neural networks also makes them sensitive to the problem of catastrophic forgetting. At a high level, a neural network learns by making many update passes over its coefficients (weights); with every update, the model should fit the data a little better. A schematic overview of a neural network...
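The mechanism behind catastrophic forgetting can be illustrated even with a very simple model. The sketch below (a hypothetical, minimal example; the helper names `make_task`, `train`, and `accuracy` are illustrative, not from any library) trains a logistic-regression classifier with plain gradient-descent updates on task A, then continues those same coefficient updates on task B, whose labels conflict with task A. Because the updates only ever pull the coefficients toward the current data, performance on task A collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs around +center and -center; label = which blob."""
    X = np.concatenate([rng.normal(center, 0.5, (100, 2)),
                        rng.normal(-center, 0.5, (100, 2))])
    y = np.concatenate([np.ones(100), np.zeros(100)])
    return X, y

def train(w, b, X, y, lr=0.1, epochs=200):
    """Gradient-descent update passes: each pass nudges the coefficients
    so the model fits the *current* data a little better."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid prediction
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on weights
        b -= lr * np.mean(p - y)             # gradient step on bias
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task A and task B assign opposite labels to the same regions of space.
XA, yA = make_task(np.array([2.0, 2.0]))
XB, yB = make_task(np.array([-2.0, -2.0]))

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
acc_A_before = accuracy(w, b, XA, yA)   # high: the model has learned task A

w, b = train(w, b, XB, yB)              # continue updating on task B only
acc_A_after = accuracy(w, b, XA, yA)    # task A is now "forgotten"
```

A real deep network behaves the same way, only with millions of coefficients: nothing in the plain update rule preserves what was learned from data the model no longer sees.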