Building a baseline model
These days, nearly everyone builds a baseline model by at least fine-tuning a Transformer architecture. Since the 2017 paper Attention Is All You Need (Reference 14), the performance of Transformer-based solutions has improved continuously, and for competitions like Jigsaw Unintended Bias in Toxicity Classification, a recent Transformer-based solution will probably be enough to take you into the gold zone.
In this exercise, we will start with a more classical baseline. The core of this solution is based on contributions from Christof Henkel (Kaggle nickname: Dieter), Ane Berasategi (Kaggle nickname: Ane), Andrew Lukyanenko (Kaggle nickname: Artgor), Thousandvoices, and Tanrei; see References 12, 13, 15, 16, 17, and 18.
The solution comprises four steps. In the first step, we load the train and test data as pandas DataFrames and then preprocess both of them. The preprocessing is largely based on the preprocessing steps...
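The first step might be sketched as follows. This is a minimal, self-contained illustration, not the competition solution itself: the file names (`train.csv`, `test.csv`), the `comment_text` column, and the `preprocess` function are assumptions for demonstration, and a tiny in-memory sample stands in for the real data so the snippet runs on its own.

```python
import re
import pandas as pd

# In the competition environment, the data would be loaded roughly like this
# (paths and file names are assumptions for this sketch):
#   train = pd.read_csv("train.csv")
#   test = pd.read_csv("test.csv")
# Here we substitute a tiny in-memory sample so the example is self-contained.
train = pd.DataFrame({"comment_text": ["This is GREAT!!!", "I   hate\tthis..."]})

def preprocess(text: str) -> str:
    """Hypothetical minimal clean-up: lowercase, strip punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)      # replace punctuation with spaces
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text

# Apply the same preprocessing to every comment in the DataFrame.
train["comment_text"] = train["comment_text"].apply(preprocess)
print(train["comment_text"].tolist())  # → ['this is great', 'i hate this']
```

The same `preprocess` function would then be applied to the test DataFrame, so that the model sees identically cleaned text at training and inference time.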