Using TL with Transformers

TL is an approach in AI that aims to make trained models reusable across tasks: a model trained on a given task A can be reused (via fine-tuning) on a different task B. In NLP, this is achievable with Transformer-like architectures that capture an understanding of language itself through language modeling. Such models are called language models; they provide a model of the language they were trained on. TL is not a new technique, and it has been used in other fields such as computer vision, where ResNet, Inception, Visual Geometry Group (VGG), and EfficientNet are examples of pre-trained models that can be fine-tuned on different computer vision tasks.

Shallow TL using models such as Word2vec, GloVe, and Doc2vec is also possible in NLP. It is called shallow because no model is transferred; only the pre-trained vectors for words/tokens are. You can use these token- or document-embedding models followed by a classifier, or combine them with other models such as RNNs, instead of starting from random embeddings.
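For instance, a minimal sketch of shallow TL might look as follows; it assumes the gensim and scikit-learn packages and uses a toy sentiment dataset made up purely for illustration:

```python
# A minimal sketch of shallow TL: pre-trained GloVe vectors feed a classifier.
# Assumes gensim and scikit-learn are installed; the toy data is illustrative.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

glove = api.load("glove-wiki-gigaword-50")  # downloads pre-trained 50-d vectors

def embed(sentence):
    # Average the pre-trained vectors of known tokens (no random embeddings).
    vectors = [glove[w] for w in sentence.lower().split() if w in glove]
    return np.mean(vectors, axis=0) if vectors else np.zeros(50)

texts = ["the movie was wonderful", "a dull and boring film",
         "great acting and story", "terrible plot and awful pacing"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

X = np.stack([embed(t) for t in texts])
clf = LogisticRegression().fit(X, labels)       # only the classifier is trained
print(clf.predict([embed("wonderful story")]))  # e.g. [1]
```

Here, the only transferred component is the embedding table; everything learned for the downstream task lives in the small classifier on top.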

TL in NLP using Transformer models is possible because these models can learn a language itself without any labeled data. Language modeling is the task used to train weights that are transferable to various downstream problems, and masked language modeling is one of the methods used to learn a language this way. It resembles Word2vec's window-based prediction of center tokens, but with key differences: with some probability, each word is masked, that is, replaced with a special token such as [MASK], and the language model (a Transformer-based model, in our case) must predict the masked words. Unlike Word2vec, no fixed window is used; the whole sentence is given, and the model must output the same sentence with the masked words filled in.
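As a quick illustration of masked language modeling in action, the following sketch assumes the Hugging Face transformers library and the pre-trained bert-base-uncased checkpoint; the fill-mask pipeline fills in a [MASK] token using the whole sentence as context:

```python
# A sketch of masked language modeling, assuming the Hugging Face transformers
# library and the pre-trained bert-base-uncased checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model sees the whole sentence and predicts the masked position.
for prediction in unmasker("The weather today is very [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```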

One of the first models to use the Transformer architecture for language modeling is BERT, which is based on the encoder part of the Transformer architecture. BERT performs masked language modeling in the way just described, and once the language model is trained, it is transferable to different NLP tasks such as token classification, sequence classification, or even question answering.
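To give a concrete sense of this transferability, the following sketch (assuming the Hugging Face transformers library) loads the same pre-trained BERT body under different task-specific heads; the newly added heads are randomly initialized and still need fine-tuning:

```python
# The same pre-trained BERT encoder reused under different task heads.
# A sketch assuming the Hugging Face transformers library.
from transformers import (
    AutoModelForSequenceClassification,  # e.g., sentiment analysis
    AutoModelForTokenClassification,     # e.g., named entity recognition
    AutoModelForQuestionAnswering,       # extractive question answering
)

checkpoint = "bert-base-uncased"
seq_clf = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
tok_clf = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=9)
qa_model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)
```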

Each of these tasks is a fine-tuning task for BERT once the language model has been trained. BERT is essentially the base Transformer encoder model with a set of key characteristics, and by altering these characteristics, different versions of it (tiny, small, base, large, and extra-large) have been proposed. Contextual embedding enables the model to assign each word the correct meaning based on the context in which it appears; for example, the word cold has different meanings in cold-hearted killer and cold weather (a short sketch of this appears after the figure). The key characteristics are the number of encoder layers, the input dimension, the output embedding dimension, and the number of multi-head attention mechanisms, as illustrated in the following figure:

Figure 1.14 – Pre-training and fine-tuning procedures for BERT (image inspiration from J. Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018)
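To see contextual embeddings at work, the following sketch (assuming the Hugging Face transformers library and PyTorch) extracts the vector for the word cold in the two example sentences and compares them; the vectors differ because the surrounding context differs:

```python
# A sketch of contextual embeddings, assuming transformers and PyTorch.
# The vector for "cold" differs depending on the surrounding sentence.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def cold_vector(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("cold")]                 # embedding of "cold"

v1 = cold_vector("he is a cold hearted killer")
v2 = cold_vector("the weather is cold today")
print(torch.cosine_similarity(v1, v2, dim=0))  # well below 1: different senses
```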

As you can see in Figure 1.14, the pre-training phase also includes another objective known as next-sentence prediction (NSP). Since each document is composed of sentences that follow one another, another important part of training a model to grasp a language is understanding how sentences relate to each other; in other words, whether they are related or not. To achieve these objectives, BERT introduced special tokens such as [CLS] and [SEP]. The [CLS] token is an initially meaningless token that is prepended to every input; after training, its output aggregates information about the whole sequence. In sequence-classification tasks such as NSP, a classifier is placed on top of the output of this token (the output at position 0). It is also useful for assessing the meaning of a sentence or capturing its semantics; for example, with a Siamese BERT model, comparing the [CLS] outputs of two different sentences with a metric such as cosine similarity is very helpful. The [SEP] token, on the other hand, is used only to separate two sentences. After pre-training, if you want to fine-tune BERT on a sequence-classification task such as sentiment analysis, you place a classifier on top of the output embedding of [CLS]. Note also that during fine-tuning, the pre-trained model can either be frozen or left trainable; frozen means treating all the weights and biases inside the model as constants and not training them. In the sentiment analysis example, if the model is frozen, only the classifier is trained, not the model itself.
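A minimal sketch of this frozen fine-tuning setup might look like this; it assumes the Hugging Face transformers library and PyTorch, and uses the sequence-classification head that transformers places on top of the [CLS] output:

```python
# A sketch of frozen fine-tuning for sentiment analysis, assuming the
# Hugging Face transformers library and PyTorch. The encoder weights are
# treated as constants; only the classification head on top of [CLS] trains.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the pre-trained BERT body: its weights and biases stay constant.
for param in model.bert.parameters():
    param.requires_grad = False

# Only the (randomly initialized) classifier head remains trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["what a great movie"], return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**batch, labels=labels)  # loss computed over the [CLS] head
outputs.loss.backward()                  # gradients flow only to the head
optimizer.step()
```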

In the next section, you will learn about multimodal learning. You will also get familiar with different architectures that use this learning paradigm with respect to Transformers.
