Method 0: Trial and error
Question-answering seems very easy. Is that true? Let's find out.
Open QA.ipynb, the Google Colab notebook we will be using in this chapter. We will run the notebook cell by cell.
Run the first cell to install Hugging Face's transformers, the framework we will be using in this chapter:
!pip install -q transformers==4.0.0
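If you want to confirm that the pinned version was installed, an optional sanity check (not part of the notebook) is to print the library's version string:

import transformers
print(transformers.__version__)  # should display 4.0.0 with the pin above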
We will now import Hugging Face's pipeline, which gives access to a vast range of ready-to-use transformer resources. Pipelines are high-level abstraction functions over the Hugging Face library that perform a wide range of NLP tasks through a simple API.
The pipeline is imported with one line of code:
from transformers import pipeline
Once that is done, we have one-line options to instantiate transformer models and tasks:
- Perform an NLP task with the default model and default tokenizer:
  pipeline("<task-name>")
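For example, this one-line syntax can instantiate a question-answering pipeline with the default model and tokenizer and then run it directly on a question and a context. With no model specified, the pipeline downloads a default pretrained checkpoint for the task. The following is a minimal sketch; the question and context strings are illustrative placeholders, not taken from the notebook:

from transformers import pipeline

# Instantiate a question-answering pipeline with the default model and tokenizer
nlp_qa = pipeline("question-answering")

# Illustrative inputs (placeholders for this sketch)
context = "Hugging Face is a company based in New York City that develops tools for machine learning."
question = "Where is Hugging Face based?"

# The pipeline returns a dictionary with the answer span, a confidence score, and character offsets
result = nlp_qa(question=question, context=context)
print(result["answer"])  # for this sketch, the expected answer span is "New York City"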