Fine-Tuning Language Models for Token Classification
In this chapter, we will learn about fine-tuning language models for token classification tasks such as Named Entity Recognition (NER), Part-of-Speech (POS) tagging, and Question Answering (QA). We will focus on BERT more than other language models and learn how to apply it to POS, NER, and QA. You will also become familiar with the theoretical details of these tasks, including their typical datasets and how they are performed. After finishing this chapter, you will be able to perform any token classification task using Transformers.
Specifically, we will fine-tune BERT for token classification problems such as NER and POS tagging, and we will see how the QA problem can be framed as start/end token classification.
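To make the start/end framing of QA concrete, here is a minimal, self-contained sketch. It uses made-up tokens and hypothetical per-token logits rather than a real BERT model: the model would produce a start score and an end score for every token, and the predicted answer is the span whose combined score is highest, subject to the start not coming after the end.

```python
# Illustrative sketch: QA framed as start/end token classification.
# The start/end logits below are invented for demonstration; a fine-tuned
# BERT QA head would produce them, one pair of scores per token.

def best_span(start_logits, end_logits, max_len=15):
    """Return the (start, end) index pair with the highest combined score."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        # Only consider spans of bounded length that end at or after the start.
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

tokens = ["BERT", "was", "released", "by", "Google", "in", "2018"]
start_logits = [0.1, 0.0, 0.2, 0.1, 0.3, 0.2, 2.5]  # hypothetical scores
end_logits   = [0.0, 0.1, 0.1, 0.0, 0.4, 0.1, 2.8]

s, e = best_span(start_logits, end_logits)
print(tokens[s:e + 1])  # highest-scoring answer span
```

Here the span collapses to the single token "2018", since that position dominates both the start and end scores; in practice, the same argmax-over-spans logic is applied to the logits a fine-tuned QA model outputs over the real token sequence.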
The following topics will be covered in this chapter: