Zero-shot inference with pre-trained models
The NLP field has seen major advances in the last few years, and many efficient pre-trained models are now freely available for reuse. These models let us approach some NLP tasks with zero-shot inference, without training a model of our own. We'll try this approach in this recipe.
Note
The terms zero-shot inference (or zero-shot learning) and few-shot learning come up often. Zero-shot learning means performing a task without any training for that specific task; few-shot learning means performing a task after training on only a few samples.
Zero-shot inference is the act of reusing pre-trained models without any fine-tuning. Many very powerful, free-to-use models are available that can perform as well as a model we train ourselves. Since these models are trained on huge datasets with massive computational power, it is sometimes hard to compete with an in-house model that...
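As a minimal sketch of this idea, the snippet below reuses a pre-trained model for zero-shot text classification, assuming the Hugging Face transformers library is installed (`pip install transformers`); the model name used here is one common choice of NLI-based zero-shot classifier, not the only option.

```python
# Zero-shot classification with a pre-trained model: no fine-tuning,
# we simply reuse a model trained on a large dataset by someone else.
from transformers import pipeline

# Assumption: this NLI model is a commonly used zero-shot classifier;
# any compatible model name could be substituted.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

text = "The new graphics card renders 4K games at a smooth frame rate."
candidate_labels = ["technology", "sports", "politics"]

result = classifier(text, candidate_labels)

# result["labels"] is sorted by descending score, and
# result["scores"] holds the matching probabilities.
print(result["labels"][0], result["scores"][0])
```

Note that the model was never trained on our label set; the labels are supplied at inference time, which is exactly what makes this zero-shot.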