References
This section collects the sources cited throughout this book; consult them to deepen your understanding of the subject matter:
- Gururangan, S., Marasović, A., Swayamdipta, S., Lo, K., Beltagy, I., Downey, D., & Smith, N. A. (2020). Don’t stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964. http://arxiv.org/abs/2004.10964
- Pruksachatkun, Y., Phang, J., Liu, H., Htut, P. M., Zhang, X., Pang, R. Y., Vania, C., Kann, K., & Bowman, S. R. (2020a). Intermediate-task transfer learning with pretrained language models: When and why does it work? Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
- Xie, Q., Dai, Z., Hovy, E., Luong, M.-T., & Le, Q. V. (2019). Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. http://arxiv.org/abs/1904.12848
- Anaby-Tavor, A., Carmeli...