Summary
In this chapter, we learned how to mitigate the burden of running large models under limited computational capacity. We first discussed and implemented how to make efficient models out of trained models using distillation, pruning, and quantization. Pre-training a smaller general-purpose language model such as DistilBERT is one such approach. Such lightweight models can then be fine-tuned on a wide variety of problems, achieving performance close to that of their non-distilled counterparts; a minimal sketch combining a distilled model with quantization follows below.
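As a brief illustration of how these ideas fit together, the following sketch loads a distilled model and applies post-training dynamic quantization with PyTorch. It assumes the transformers and torch libraries are installed; the checkpoint name is only an illustrative example, not a specific model from this chapter.

```python
# A minimal sketch: load a distilled sentiment model and quantize it
# for lighter inference. The checkpoint name is an illustrative example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Dynamic quantization replaces Linear layers with int8 versions at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Efficient transformers are great!", return_tensors="pt")
with torch.no_grad():
    logits = quantized_model(**inputs).logits
print(logits)
```

Distillation shrinks the number of parameters, while quantization shrinks the size and cost of each parameter, so the two techniques compose naturally.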
Second, we learned about efficient sparse transformers such as Linformer, BigBird, and Performer, which replace the full self-attention matrix with a sparse one using approximation techniques. We compared them in terms of computational and memory complexity, and the examples showed that these approaches can reduce the quadratic complexity of self-attention to linear complexity in the sequence length without sacrificing performance; a short usage sketch follows below.
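As a reminder of how such models are used in practice, here is a minimal sketch that encodes a long input with BigBird's block-sparse attention, which scales linearly with sequence length. It assumes the transformers library (with sentencepiece) and the google/bigbird-roberta-base checkpoint; the input text is a placeholder.

```python
# A minimal sketch: encode a long sequence with BigBird's block-sparse attention.
import torch
from transformers import BigBirdModel, BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", attention_type="block_sparse"
)

# A long (placeholder) document that would be costly under full self-attention.
long_text = "Efficient attention lets us process long documents. " * 200
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=2048)

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

With full self-attention, a 2,048-token input would require a 2,048 x 2,048 attention matrix per head; the block-sparse pattern attends only to local, global, and random blocks, which is what brings the cost down to linear in the sequence length.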
In the next chapter...