Adaptive learning methods
Adaptive learning refers to incremental learning methods that adjust to drift: predictive models are updated online so that they react to concept drifts as they occur. By taking drift into account, the models stay consistent with the current data distribution.
Ensemble methods can be coupled with drift detectors to trigger the retraining of their base models. The performance of each base model is monitored (often with ADWIN), and an underperforming model is replaced by a freshly retrained one if the new model turns out to be more accurate.
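To make the mechanism concrete, here is a minimal, library-agnostic sketch (not any particular library's implementation): each member's error rate is tracked over a sliding window, and a member that clearly lags behind the best one is replaced by a model retrained on a buffer of recent data, provided the retrained model is more accurate on that buffer. The window size, the tolerance threshold, and the choice of SGDClassifier as the incremental base learner are illustrative assumptions; in practice the window statistics would typically be maintained by a detector such as ADWIN.

from collections import deque

import numpy as np
from sklearn.linear_model import SGDClassifier  # stands in for any partial_fit-capable model


class DriftAwareEnsemble:
    """Toy drift-aware ensemble: members whose recent accuracy degrades are replaced."""

    def __init__(self, classes, n_members=5, window=200, tolerance=0.05):
        self.classes = np.asarray(classes)
        self.members = [SGDClassifier() for _ in range(n_members)]
        self.errors = [deque(maxlen=window) for _ in range(n_members)]  # per-member error window
        self.recent = deque(maxlen=window)   # buffer of recent samples used for retraining
        self.tolerance = tolerance           # accuracy gap that triggers a replacement

    def predict_one(self, x):
        x = np.asarray(x).reshape(1, -1)
        votes = [m.predict(x)[0] for m in self.members if hasattr(m, "classes_")]
        return max(set(votes), key=votes.count) if votes else None  # majority vote

    def learn_one(self, x, y):
        x = np.asarray(x).reshape(1, -1)
        self.recent.append((x[0], y))
        for i, model in enumerate(self.members):
            if hasattr(model, "classes_"):   # score the member before updating it (prequential)
                self.errors[i].append(int(model.predict(x)[0] != y))
            model.partial_fit(x, [y], classes=self.classes)
        self._replace_underperformers()

    def _replace_underperformers(self):
        if len(self.recent) < 20:
            return
        rates = [np.mean(e) if e else 0.0 for e in self.errors]
        best = min(rates)
        X = np.array([xi for xi, _ in self.recent])
        Y = np.array([yi for _, yi in self.recent])
        for i, rate in enumerate(rates):
            if rate - best <= self.tolerance:
                continue                      # this member is still competitive
            candidate = SGDClassifier().partial_fit(X, Y, classes=self.classes)
            # Keep the retrained model only if it beats the old one on recent data.
            if (candidate.predict(X) == Y).mean() > 1.0 - rate:
                self.members[i] = candidate
                self.errors[i] = deque(maxlen=self.errors[i].maxlen)

A production-quality implementation would typically also weight the members' votes by their recent accuracy, but simple majority voting keeps the sketch short.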
As a case in point, the Adaptive XGBoost algorithm (AXGB; Montiel et al., 2020) adapts XGBoost to evolving data streams: new trees are created from mini-batches as new data becomes available. The maximum ensemble size is fixed, and once this size is reached, the ensemble is updated with trees trained on the new data.
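The following simplified sketch conveys the mini-batch idea rather than the authors' implementation: an independent small booster is trained on each mini-batch and appended to a fixed-capacity ensemble, with the oldest member dropped once the capacity is reached (AXGB itself boosts new trees against the existing ensemble and offers more than one update strategy). The batch size, per-batch tree count, probability-averaging vote, and oldest-first replacement are illustrative assumptions, and integer class labels 0..k-1 are assumed to appear in every mini-batch.

from collections import deque

import numpy as np
from xgboost import XGBClassifier


class MiniBatchBoostedEnsemble:
    """Simplified AXGB-style ensemble: one small booster per mini-batch, fixed capacity."""

    def __init__(self, max_members=10, batch_size=500, trees_per_batch=20):
        self.members = deque(maxlen=max_members)  # oldest member is dropped when full
        self.batch_size = batch_size
        self.trees_per_batch = trees_per_batch
        self._buffer_X, self._buffer_y = [], []

    def learn_one(self, x, y):
        # Accumulate samples until a full mini-batch is available.
        self._buffer_X.append(x)
        self._buffer_y.append(y)
        if len(self._buffer_X) >= self.batch_size:
            self._train_on_batch()

    def _train_on_batch(self):
        X = np.asarray(self._buffer_X)
        y = np.asarray(self._buffer_y)
        booster = XGBClassifier(n_estimators=self.trees_per_batch, max_depth=3)
        booster.fit(X, y)
        self.members.append(booster)              # replaces the oldest member once full
        self._buffer_X, self._buffer_y = [], []

    def predict_one(self, x):
        if not self.members:
            return None
        x = np.asarray(x).reshape(1, -1)
        # Average the members' class probabilities (assumes identical class sets).
        proba = np.mean([m.predict_proba(x) for m in self.members], axis=0)
        return self.members[0].classes_[np.argmax(proba)]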
In the scikit-multiflow and River libraries, there are several methods that couple machine learning models with drift detection mechanisms.
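For example, a minimal coupling in River might look like the sketch below: a Hoeffding tree is evaluated prequentially, its 0/1 errors are fed to an ADWIN detector, and the model is reset whenever a drift is signalled. The ConceptDriftStream generator and the detector's update()/drift_detected interface follow recent River versions and are assumptions here; older releases expose a slightly different drift-detection API.

from river import drift, metrics, tree
from river.datasets import synth

model = tree.HoeffdingTreeClassifier()
detector = drift.ADWIN()
accuracy = metrics.Accuracy()

# Prequential loop over a synthetic drifting stream: predict first, then learn.
for x, y in synth.ConceptDriftStream(seed=42).take(20_000):
    y_pred = model.predict_one(x)
    if y_pred is not None:
        accuracy.update(y, y_pred)
        detector.update(int(y_pred != y))           # feed the 0/1 prediction error to ADWIN
        if detector.drift_detected:                 # drift signalled on the error stream
            model = tree.HoeffdingTreeClassifier()  # naive reaction: start from scratch
    model.learn_one(x, y)

print(accuracy)

Resetting the whole model is the crudest possible reaction; the adaptive methods shipped with these libraries instead replace or adapt parts of the model so that knowledge from before the drift is not discarded wholesale.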