References
This section lists the sources cited in this book; you can explore them to deepen your understanding of the subject matter:
- Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI.
- Hu, E. J., Shen, Y., Wallis, P., Li, Y., Wang, S., Wang, L., and Chen, W. (2021). LoRA: Low-Rank Adaptation of Large Language Models. arXiv:2106.09685.
- Zhang, Q., Chen, M., Bukharin, A., He, P., Cheng, Y., Chen, W., and Zhao, T. (2023). Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. arXiv:2303.10512.
- Brown, T. B., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. arXiv:2005.14165.