Summary
In this chapter, we have learned how to mitigate the burden of running large models under limited computational capacity. We first discussed how to derive efficient models from trained ones using distillation, pruning, and quantization. One such approach is to pre-train a small general-purpose language model such as DistilBERT. This light model can then be fine-tuned on a wide variety of problems with performance close to that of its non-distilled counterpart.
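As a brief reminder of how these techniques fit together in code, the following is a minimal sketch using the transformers and PyTorch APIs: it loads a distilled checkpoint, prunes the smallest-magnitude weights, and applies dynamic quantization. The 30% pruning fraction and the binary-classification head are illustrative choices, not values from the chapter.

```python
import torch
from torch.nn.utils import prune
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a distilled general-purpose model; it is fine-tuned like any other
# transformers model while being much smaller and faster than BERT-base.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # illustrative task setup
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Pruning: zero out the 30% smallest-magnitude weights of each linear layer.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Dynamic quantization: store linear-layer weights in int8 for CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```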
Second, we have learned about efficient sparse transformers such as Linformer, BigBird, and Performer, which replace the full self-attention matrix with a sparse or low-rank approximation. We compared them in terms of computational and memory complexity, and the examples showed that these approaches can reduce the quadratic complexity of self-attention to linear complexity without sacrificing performance.
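To illustrate where the complexity reduction comes from, here is a minimal, self-contained sketch of a Linformer-style approximation in PyTorch. Full attention builds a seq_len x seq_len score matrix, whereas projecting keys and values down to a fixed dimension k_dim makes the score matrix seq_len x k_dim, hence linear in sequence length. The projection matrix is random here for brevity; in the actual model it is learned.

```python
import torch

def full_attention(q, k, v):
    # Standard self-attention: the score matrix is (seq_len x seq_len),
    # so time and memory grow quadratically with sequence length.
    scores = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
    return scores @ v

def linformer_attention(q, k, v, proj):
    # Linformer-style approximation: keys and values are projected from
    # seq_len down to a fixed k_dim, so the score matrix is
    # (seq_len x k_dim) and complexity becomes linear in sequence length.
    k_proj = proj @ k  # (k_dim, d_model)
    v_proj = proj @ v  # (k_dim, d_model)
    scores = torch.softmax(q @ k_proj.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
    return scores @ v_proj

seq_len, d_model, k_dim = 4096, 64, 256
q = torch.randn(seq_len, d_model)
k = torch.randn(seq_len, d_model)
v = torch.randn(seq_len, d_model)
proj = torch.randn(k_dim, seq_len) / seq_len ** 0.5  # learned in practice

out = linformer_attention(q, k, v, proj)
print(out.shape)  # torch.Size([4096, 64])
```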
As we gather more data over time, we aim for our models to operate more quickly. In...