For more information, refer to the following resources:
- *VideoBERT: A Joint Model for Video and Language Representation Learning*, by Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid, available at https://arxiv.org/pdf/1904.01766.pdf
- *BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension*, by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer, available at https://arxiv.org/pdf/1910.13461.pdf
- *ktrain: A Low-Code Library for Augmented Machine Learning*, by Arun S. Maiya, available at https://arxiv.org/pdf/2004.10703.pdf
- The bert-as-service documentation, available at https://bert-as-service.readthedocs.io/en/latest/