Mastering Transformers

Mastering Transformers: Build state-of-the-art models from scratch with advanced natural language processing techniques

By Savaş Yıldırım, Meysam Asgari-Chenaghlu
4 (9 Ratings)
Paperback Sep 2021 374 pages 1st Edition
eBook: Mex$811.99 (Mex$902.99)
Paperback: Mex$1128.99
Subscription: Free Trial


Mastering Transformers

Chapter 1: From Bag-of-Words to the Transformer

In this chapter, we will discuss what has changed in Natural Language Processing (NLP) over the last two decades. We experienced different paradigms and finally entered the era of Transformer architectures. Each of these paradigms helps us gain better representations of words and documents for problem-solving. Distributional semantics describes the meaning of a word or a document with a vectorial representation, looking at distributional evidence in a collection of articles. Such vectors are used to solve many problems in both supervised and unsupervised pipelines. For language-generation problems, n-gram language models have been leveraged as a traditional approach for years. However, these traditional approaches have many weaknesses that we will discuss throughout the chapter.

We will further discuss classical Deep Learning (DL) architectures such as Recurrent Neural Networks (RNNs), Feed-Forward Neural Networks (FFNNs), and Convolutional Neural Networks (CNNs). These architectures have improved performance on the field's problems and have overcome the limitations of traditional approaches. However, these models have had their own problems too. Recently, Transformer models have gained immense interest because of their effectiveness in all NLP tasks, from text classification to text generation. However, their main success has been that Transformers effectively improve the performance of multilingual and multi-task NLP problems, as well as monolingual and single-task ones. These contributions have made Transfer Learning (TL) more feasible in NLP, which aims to make models reusable for different tasks or different languages.

Starting with the attention mechanism, we will briefly discuss the Transformer architecture and the differences from previous NLP models. In parallel with the theoretical discussions, we will show practical examples with popular NLP frameworks. For the sake of simplicity, we will choose introductory code examples that are as short as possible.

In this chapter, we will cover the following topics:

  • Evolution of NLP toward Transformers
  • Understanding distributional semantics
  • Leveraging DL
  • Overview of the Transformer architecture
  • Using TL with Transformers

Technical requirements

We will be using Jupyter notebooks to run our coding exercises. They require Python >=3.6.0, along with the following packages, which need to be installed with the pip install command (a sample command follows the list):

  • sklearn
  • nltk==3.5.0
  • gensim==3.8.3
  • fasttext
  • keras>=2.3.0
  • transformers>=4.0.0
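
For reference, the whole set can be installed with a single command along the following lines. This command is an illustration rather than text from the book; note that the sklearn package is published on PyPI as scikit-learn:

pip install scikit-learn nltk==3.5.0 gensim==3.8.3 fasttext "keras>=2.3.0" "transformers>=4.0.0"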

All notebooks with coding exercises are available at the following GitHub link: https://github.com/PacktPublishing/Advanced-Natural-Language-Processing-with-Transformers/tree/main/CH01.

Check out the following link to see the Code in Action video: https://bit.ly/2UFPuVd

Evolution of NLP toward Transformers

We have seen profound changes in NLP over the last 20 years. During this period, we experienced different paradigms and finally entered a new era dominated mostly by the Transformer architecture. This architecture did not come out of nowhere: starting from various neural-based NLP approaches, it gradually evolved into an attention-based encoder-decoder architecture, and it keeps evolving. The architecture and its variants have been successful thanks to the following developments in the last decade:

  • Contextual word embeddings
  • Better subword tokenization algorithms for handling unseen words or rare words
  • Injecting additional memory tokens into sentences, such as Paragraph ID in Doc2vec or a Classification (CLS) token in Bidirectional Encoder Representations from Transformers (BERT)
  • Attention mechanisms, which overcome the problem of forcing input sentences to encode all information into one context vector
  • Multi-head self-attention
  • Positional encoding to capture word order
  • Parallelizable architectures that make for faster training and fine-tuning
  • Model compression (distillation, quantization, and so on)
  • TL (cross-lingual, multitask learning)

For many years, we used traditional NLP approaches such as n-gram language models, TF-IDF-based information retrieval models, and one-hot encoded document-term matrices. All these approaches have contributed a lot to the solution of many NLP problems such as sequence classification, language generation, language understanding, and so forth. On the other hand, these traditional NLP methods have their own weaknesses. For instance, they fall short in handling sparsity, representing unseen or rare words, and tracking long-term dependencies, among other issues. In order to cope with these weaknesses, we developed DL-based approaches such as the following:

  • RNNs
  • CNNs
  • FFNNs
  • Several variants of RNNs, CNNs, and FFNNs

In 2013, Word2vec, a two-layer FFNN word-encoder model, sorted out the dimensionality problem by producing short, dense representations of words, called word embeddings. This early model managed to produce fast and efficient static word embeddings. It transformed unsupervised textual data into supervised data (self-supervised learning) by either predicting the target word from its context or predicting neighboring words within a sliding window. GloVe, another widely used and popular model, argued that count-based models can be better than neural models. It leverages both the global and local statistics of a corpus to learn embeddings based on word-word co-occurrence statistics. GloVe performed well on some syntactic and semantic tasks, as shown in the following figure. The figure tells us that the offsets between embeddings help to apply vector-oriented reasoning. We can learn the generalization of gender, which is a semantic relation, from the offset between man and woman (man -> woman). Then, we can arithmetically estimate the vector of actress by adding this offset to the vector of the term actor. Likewise, we can learn syntactic relations such as word plural forms. For instance, if the vectors of Actor, Actors, and Actress are given, we can estimate the vector of Actresses:

Figure 1.1 – Word embeddings offset for relation extraction
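
As a brief illustration of this vector arithmetic (added here as an example; it is not code from the book), the following sketch loads a pretrained GloVe model through gensim's downloader API. The model name and the exact neighbors returned are assumptions that depend on the vectors downloaded:

import gensim.downloader as api

# Load pretrained 100-dimensional GloVe vectors (this triggers a one-time download).
glove = api.load("glove-wiki-gigaword-100")

# actor - man + woman should land near actress, mirroring the offset reasoning above.
print(glove.most_similar(positive=["actor", "woman"], negative=["man"], topn=3))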

Recurrent and convolutional architectures such as RNNs, Long Short-Term Memory (LSTM) networks, and CNNs started to be used as encoders and decoders in sequence-to-sequence (seq2seq) problems. The main challenge with these early models was polysemy: since a single fixed representation is assigned to each word, the different senses of a word are ignored, which is an especially severe problem for polysemous words and for sentence semantics.

Later pioneering neural network models such as Universal Language Model Fine-tuning (ULMFit) and Embeddings from Language Models (ELMo) managed to encode sentence-level information and finally alleviated the polysemy problem, unlike static word embeddings. These two important approaches were based on LSTM networks, and they also introduced the concepts of pre-training and fine-tuning. They allow us to apply TL, employing pre-trained models trained on a general task with huge textual datasets. We can then easily perform fine-tuning by resuming the training of the pre-trained network on a target task with supervision. The representations differ from traditional word embeddings in that each word representation is a function of the entire input sentence. The modern Transformer architecture took advantage of this idea.

In the meantime, the idea of an attention mechanism made a strong impression in the NLP field and achieved significant success, especially in seq2seq problems. Earlier methods passed the last state (known as a context vector or thought vector) obtained from the entire input sequence to the output sequence, with no explicit link between individual input and output tokens. The attention mechanism builds a more sophisticated model by linking particular tokens in the output sequence to the relevant tokens in the input sequence. For instance, suppose you have the keyword phrase Government of Canada in the input sentence of an English-to-Turkish translation task. In the output sentence, the Kanada Hükümeti token makes strong connections with this input phrase and only weak connections with the remaining words in the input, as illustrated in the following figure:

Figure 1.2 – Sketchy visualization of an attention mechanism

So, this mechanism makes models more successful in seq2seq problems such as translation, question answering, and text summarization.

In 2017, the Transformer-based encoder-decoder model was proposed and found to be successful. The design discards RNN recurrence and relies only on attention mechanisms and feed-forward layers (Vaswani et al., Attention Is All You Need, 2017). Transformer-based models have so far overcome many of the difficulties that other approaches faced and have become a new paradigm. Throughout this book, you will explore and come to understand how Transformer-based models work.
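
At the heart of that design is scaled dot-product attention. The following minimal NumPy sketch is added here for illustration only (it is not the book's code and omits masking, multiple heads, and learned projections): attention weights come from a softmax over query-key similarities and are then used to mix the value vectors:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity of every query with every key, scaled by sqrt(d_k) to keep the softmax stable.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension turns the scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mixture of the value vectors.
    return weights @ V, weights

# Toy example: 2 query tokens attending over 3 key/value tokens of dimension 4.
rng = np.random.default_rng(42)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # Each row of weights sums to 1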

Understanding distributional semantics

Distributional semantics describes the meaning of a word with a vectorial representation, preferably looking at its distributional evidence rather than at predefined dictionary definitions. The theory suggests that words co-occurring in similar environments tend to share similar meanings. This was first formulated by the scholar Harris (Distributional Structure, 1954). For example, similar words such as dog and cat mostly co-occur in the same contexts. One of the advantages of a distributional approach is that it helps researchers understand and monitor the semantic evolution of words across time and domains, also known as the lexical semantic change problem.

Traditional approaches have applied Bag-of-Words (BoW) and n-gram language models to build the representation of words and sentences for many years. In a BoW approach, words and documents are represented with one-hot encodings, a sparse representation also known as the Vector Space Model (VSM).

Text classification, word similarity, semantic relation extraction, word-sense disambiguation, and many other NLP problems have been solved by these one-hot encoding techniques for years. On the other hand, n-gram language models assign probabilities to sequences of words so that we can either compute the probability that a sequence belongs to a corpus or generate a random sequence based on a given corpus.

BoW implementation

A BoW is a representation technique for documents by counting the words in them. The main data structure of the technique is a document-term matrix. Let's see a simple implementation of BoW with Python. The following piece of code illustrates how to build a document-term matrix with the Python sklearn library for a toy corpus of three sentences:

from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
import pandas as pd

# A toy corpus of three short documents
toy_corpus = ["the fat cat sat on the mat",
              "the big cat slept",
              "the dog chased a cat"]

# Build the TF-IDF weighted document-term matrix
vectorizer = TfidfVectorizer()
corpus_tfidf = vectorizer.fit_transform(toy_corpus)
print(f"The vocabulary size is {len(vectorizer.vocabulary_.keys())}")
print(f"The document-term matrix shape is {corpus_tfidf.shape}")

# Put the matrix into a DataFrame, using the vocabulary as column names
df = pd.DataFrame(np.round(corpus_tfidf.toarray(), 2))
df.columns = vectorizer.get_feature_names()  # use get_feature_names_out() on newer scikit-learn versions

The output of the code is a document-term matrix, as shown in the following screenshot. The size is (3 x 10), but in a realistic scenario the matrix size can grow to much larger numbers such as 10K x 10M:

Figure 1.3 – Document-term matrix

Figure 1.3 – Document-term matrix

The table indicates a count-based mathematical matrix in which the cell values are transformed by a Term Frequency-Inverse Document Frequency (TF-IDF) weighting schema. This approach does not care about the position of words. Since word order strongly determines meaning, ignoring it leads to a loss of meaning. This is a common problem of BoW methods, and it is finally solved by the recurrence mechanism in RNNs and by positional encoding in Transformers.

Each column in the matrix stands for the vector of a word in the vocabulary, and each row stands for the vector of a document. Semantic similarity metrics can be applied to compute the similarity or dissimilarity of words as well as documents. Most of the time, we use bigrams such as cat_sat and the_street to enrich the document representation. For instance, if the parameter ngram_range=(1,2) is passed to TfidfVectorizer, it builds a vector space containing both unigrams (big, cat, dog) and bigrams (big_cat, big_dog). Thus, such models are also called bag-of-n-grams, which is a natural extension of BoW.
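
The effect of ngram_range can be seen with a short sketch (added here for illustration; it assumes the earlier snippet that defines toy_corpus has already been run). Note that scikit-learn joins n-gram tokens with a space rather than an underscore:

from sklearn.feature_extraction.text import TfidfVectorizer

# Build a vocabulary that mixes unigrams and bigrams
bigram_vectorizer = TfidfVectorizer(ngram_range=(1, 2))
bigram_tfidf = bigram_vectorizer.fit_transform(toy_corpus)

# The vocabulary now contains entries such as 'cat' as well as 'fat cat' and 'cat sat'
print(sorted(bigram_vectorizer.vocabulary_.keys()))
print(f"The document-term matrix shape is {bigram_tfidf.shape}")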

If a word is commonly used in every document, such as the words the and and, it can be considered high-frequency. Conversely, some words hardly appear in documents and are called low-frequency (or rare) words. As both high-frequency and low-frequency words may prevent the model from working properly, TF-IDF, which is one of the most important and well-known weighting mechanisms, is used here as a solution.

Inverse Document Frequency (IDF) is a statistical weight to measure the importance of a word in a document—for example, while the word the has no discriminative power, chased can be highly informative and give clues about the subject of the text. This is because high-frequency words (stopwords, functional words) have little discriminating power in understanding the documents.

The discriminativeness of the terms also depends on the domain. For instance, a list of DL articles is most likely to have the word network in almost every document. IDF can scale down the weights of all terms by using their Document Frequency (DF), where the DF of a word is the number of documents in which that word appears. Term Frequency (TF) is the raw count of a term (word) in a document.
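
As a simplified sketch of this weighting idea (an illustration added here; sklearn's TfidfVectorizer uses a smoothed IDF and L2 normalization by default, so its numbers will differ), the classic formulation multiplies TF by the logarithm of the inverse document frequency:

import math

def tf_idf(term_count_in_doc, doc_length, num_docs, docs_containing_term):
    tf = term_count_in_doc / doc_length              # Term Frequency (relative count)
    idf = math.log(num_docs / docs_containing_term)  # Inverse Document Frequency
    return tf * idf

# 'the' appears in all three toy documents, so its IDF (and hence its TF-IDF weight) is 0.
print(tf_idf(term_count_in_doc=2, doc_length=7, num_docs=3, docs_containing_term=3))
# 'chased' appears in only one of the three documents, so it gets a higher weight.
print(tf_idf(term_count_in_doc=1, doc_length=5, num_docs=3, docs_containing_term=1))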

Some of the advantages and disadvantages of a TF-IDF based BoW model are listed as follows:

Table 1 – Advantages and disadvantages of a TF-IDF BoW model

Overcoming the dimensionality problem

To overcome the dimensionality problem of the BoW model, Latent Semantic Analysis (LSA) is widely used for capturing semantics in a low-dimensional space. It is a linear method that captures pairwise correlations between terms. LSA-based probabilistic methods can still be seen as having a single layer of hidden topic variables, whereas current DL models include multiple hidden layers with billions of parameters. In addition, Transformer-based models have shown that they can discover latent representations much better than such traditional models.
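
As a brief illustration of this idea (not code from the book, and assuming the TF-IDF matrix corpus_tfidf from the earlier snippet is available), scikit-learn's TruncatedSVD can project the document-term matrix into a small latent space:

from sklearn.decomposition import TruncatedSVD

# Reduce the (3 x 10) TF-IDF matrix built earlier to 2 latent dimensions
lsa = TruncatedSVD(n_components=2, random_state=42)
doc_topic = lsa.fit_transform(corpus_tfidf)
print(doc_topic.shape)  # (3, 2): each document is now a dense 2-dimensional vector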

For Natural Language Understanding (NLU) tasks, the traditional pipeline starts with some preparation steps, such as tokenization, stemming, noun phrase detection, chunking, stop-word elimination, and much more. Afterward, a document-term matrix is constructed with a weighting schema, of which TF-IDF is the most popular. Finally, the matrix serves as tabular input for Machine Learning (ML) pipelines such as sentiment analysis, document similarity, document clustering, or measuring the relevancy score between a query and a document. Likewise, terms can be represented in a tabular matrix and used as input for token classification problems, where we can apply named-entity recognition, semantic relation extraction, and so on.

The classification phase includes a straightforward implementation of supervised ML algorithms such as Support Vector Machines (SVMs), Random Forests, logistic regression, naive Bayes, and ensembles of multiple learners (boosting or bagging). Practically, the implementation of such a pipeline can simply be coded as follows:

from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# One class label per document in the toy corpus
labels = [0, 1, 0]

# Train a Support Vector Machine on the TF-IDF document-term matrix built earlier
clf = SVC()
clf.fit(df.to_numpy(), labels)

As seen in the preceding code, we can apply the fit operation easily thanks to the sklearn Application Programming Interface (API). In order to apply the learned model to the training data, the following code is executed:

clf.predict(df.to_numpy())
Output: array([0, 1, 0])

Let's move on to the next section!

Language modeling and generation

For language-generation problems, the traditional approaches are based on leveraging n-gram language models. This is also called a Markov process, which is a stochastic model in which each word (event) depends on a subset of previous words—unigram, bigram, or n-gram, outlined as follows:

  • Unigram (all words are independent, no chain): This estimates the probability of each word in the vocabulary, simply computed as its frequency divided by the total word count.
  • Bigram (first-order Markov process): This estimates P(word_i | word_i-1), the probability of word_i given the previous word word_i-1, which is simply computed as the ratio of P(word_i-1, word_i) to P(word_i-1). A short count-based sketch follows this list.
  • N-gram (N-order Markov process): This estimates P(word_i | word_i-n+1, ..., word_i-1), conditioning each word on the previous n-1 words.
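
The bigram estimate is just a ratio of counts, as the following minimal sketch (added here for illustration, reusing the toy sentence from the BoW example) shows:

from collections import Counter

tokens = "the fat cat sat on the mat".split()
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))

# P(fat | the) = count(the, fat) / count(the) = 1 / 2
print(bigram_counts[("the", "fat")] / unigram_counts["the"])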

Let's give a simple language model implementation with the Natural Language Toolkit (NLTK) library. In the following implementation, we train a Maximum Likelihood Estimator (MLE) with order n=2. We can select any n-gram order such as n=1 for unigrams, n=2 for bigrams, n=3 for trigrams, and so forth:

import nltk
from nltk.corpus import gutenberg
from nltk.lm import MLE
from nltk.lm.preprocessing import padded_everygram_pipeline

# Download the Gutenberg corpus and the punkt tokenizer models
nltk.download('gutenberg')
nltk.download('punkt')

# The sentences of Shakespeare's Macbeth, as lists of tokens
macbeth = gutenberg.sents('shakespeare-macbeth.txt')

# Prepare padded bigram training data and the vocabulary stream
model, vocab = padded_everygram_pipeline(2, macbeth)

# Train a bigram Maximum Likelihood Estimator
lm = MLE(2)
lm.fit(model, vocab)
print(list(lm.vocab)[:10])
print(f"The number of words is {len(lm.vocab)}")

The nltk package first downloads the gutenberg corpus, which includes some texts from the Project Gutenberg electronic text archive, hosted at https://www.gutenberg.org. It also downloads the punkt tokenizer models used for sentence segmentation. This tokenizer divides a raw text into a list of sentences by using an unsupervised algorithm. The nltk package already includes a pre-trained English punkt tokenizer model that handles abbreviations and collocations, and it can be trained on a list of texts in any language before use. In later chapters, we will discuss how to train different and more efficient tokenizers for Transformer models as well. The following code shows what the language model has learned so far:

print(f"The frequency of the term 'Macbeth' is {lm.counts['Macbeth']}")
print(f"The language model probability score of 'Macbeth' is {lm.score('Macbeth')}")
print(f"The number of times 'Macbeth' follows 'Enter' is {lm.counts[['Enter']]['Macbeth']} ")
print(f"P(Macbeth | Enter) is {lm.score('Macbeth', ['Enter'])}")
print(f"P(shaking | for) is {lm.score('shaking', ['for'])}")

This is the output:

The frequency of the term 'Macbeth' is 61
The language model probability score of 'Macbeth' is 0.00226
The number of times 'Macbeth' follows 'Enter' is 15 
P(Macbeth | Enter) is 0.1875
P(shaking | for) is 0.0121

The n-gram language model keeps the n-gram counts and computes conditional probabilities for sentence generation. lm=MLE(2) instantiates a bigram Maximum Likelihood Estimator, which estimates each token's conditional probability from relative counts. The following code produces a random sentence of 10 words, given the <s> starting condition:

lm.generate(10, text_seed=['<s>'], random_seed=42)

The output is shown in the following snippet:

['My', 'Bosome', 'franchis', "'", 's', 'of', 'time', ',', 'We', 'are']

We can give a specific starting condition through the text_seed parameter, which conditions the generation on the preceding context. In our example, the preceding context is <s>, a special token indicating the beginning of a sentence.

So far, we have discussed paradigms underlying traditional NLP models and provided very simple implementations with popular frameworks. We are now moving to the DL section to discuss how neural language models shaped the field of NLP and how neural models overcome the traditional model limitations.


Key benefits

  • Explore quick prototyping with up-to-date Python libraries to create effective solutions to industrial problems
  • Solve advanced NLP problems such as named-entity recognition, information extraction, language generation, and conversational AI
  • Monitor your model's performance with the help of BertViz, exBERT, and TensorBoard

Description

Transformer-based language models have dominated natural language processing (NLP) studies and have now become a new paradigm. With this book, you'll learn how to build various transformer-based NLP applications using the Python Transformers library. The book gives you an introduction to Transformers by showing you how to write your first hello-world program. You'll then learn how a tokenizer works and how to train your own tokenizer. As you advance, you'll explore the architecture of autoencoding models, such as BERT, and autoregressive models, such as GPT. You'll see how to train and fine-tune models for a variety of natural language understanding (NLU) and natural language generation (NLG) problems, including text classification, token classification, and text representation. This book also helps you to learn efficient models for challenging problems, such as long-context NLP tasks with limited computational capacity. You'll also work with multilingual and cross-lingual problems, optimize models by monitoring their performance, and discover how to deconstruct these models for interpretability and explainability. Finally, you'll be able to deploy your transformer models in a production environment. By the end of this NLP book, you'll have learned how to use Transformers to solve advanced NLP problems using advanced models.

Who is this book for?

This book is for deep learning researchers, hands-on NLP practitioners, as well as ML/NLP educators and students who want to start their journey with Transformers. Beginner-level machine learning knowledge and a good command of Python will help you get the best out of this book.

What you will learn

  • Explore state-of-the-art NLP solutions with the Transformers library
  • Train a language model in any language with any transformer architecture
  • Fine-tune a pre-trained language model to perform several downstream tasks
  • Select the right framework for the training, evaluation, and production of an end-to-end solution
  • Get hands-on experience in using TensorBoard and Weights & Biases
  • Visualize the internal representation of transformer models for interpretability

Product Details

Publication date : Sep 15, 2021
Length: 374 pages
Edition : 1st
Language : English
ISBN-13 : 9781801077651


Table of Contents

15 Chapters
Section 1: Introduction – Recent Developments in the Field, Installations, and Hello World Applications
Chapter 1: From Bag-of-Words to the Transformer
Chapter 2: A Hands-On Introduction to the Subject
Section 2: Transformer Models – From Autoencoding to Autoregressive Models
Chapter 3: Autoencoding Language Models
Chapter 4: Autoregressive and Other Language Models
Chapter 5: Fine-Tuning Language Models for Text Classification
Chapter 6: Fine-Tuning Language Models for Token Classification
Chapter 7: Text Representation
Section 3: Advanced Topics
Chapter 8: Working with Efficient Transformers
Chapter 9: Cross-Lingual and Multilingual Language Modeling
Chapter 10: Serving Transformer Models
Chapter 11: Attention Visualization and Experiment Tracking
Other Books You May Enjoy

Customer reviews

Rating distribution: 4 (9 Ratings)
5 star: 33.3%
4 star: 55.6%
3 star: 0%
2 star: 0%
1 star: 11.1%

rockstar82 Nov 14, 2021
5 stars
The first part of the book is a long introduction (around 80 pages). The book starts with a review and some code related to the early concepts that led to Transformers (e.g., PCA, bag-of-words, embeddings, RNNs, LSTMs) and jumps right into the attention mechanism and the Transformer architecture in several pages. The architecture is explained through various pictures and visualizations in order to help people understand why this model works so well. The few references included at the end of various chapters can always provide the additional details. While the first chapter is mostly theoretical, the following chapter does contain the gritty details related to programming - from installing the various libraries to various tricks needed in order to make certain models work for solving a particular problem. The authors' habit of adding good images everywhere helps a lot in this endeavour.

The second part of the book is focused on two big issues: a) understanding the architectural differences between autoencoding models (e.g., BERT) and autoregressive models (e.g., GPT); and b) applying such models for classification tasks (e.g., text or token classification). This is the part in which the authors teach you how to fine-tune such models.

The last part of the book is dedicated to advanced topics like efficient Transformers, multilingual models or production-related issues (e.g., serving Transformers, visualizing their attention layers or outputs, etc.). This is the part that links directly to advanced research, not unlike the kind of research that was submitted to top conferences in the field a couple of years ago (e.g., up to 2019, there were hardly any Transformer visualizations!).

I particularly liked the ease with which the authors jumped straight into a difficult subject and taught it up to research level. There are very few books able to do this (e.g., thinking of the Dive into Deep Learning books or Grokking Deep Learning). It is great to see publishers introducing such books extremely fast (e.g., it took 5 years to have the first D3.js book published, but only 2 years to have a book about BERT or GPT-2/3). Make no mistake, while the title is still about Transformers, the real stars are the newer models discussed here. I consider this a five star book and I would recommend you read it alongside the various examples of Transformer / BERT / GPT-2 explained source code that are available online. You will thank yourself later!
Amazon Verified review
Kelvin D. Meeks Sep 15, 2021
5 stars
A few other key words that come to mind to describe this book: Foundational, Hands-on, Practical, Crisp, Concise, Depth & Breadth, Tremendous Value.

With the continued accelerating explosion in the growth of unstructured data collected by enterprises in texts and documents, the need to be able to analyze and derive meaningful information is more critical than ever, and will be the competitive advantage that distinguishes future winners from losers in the marketplace of solutions. This book is an investment in expanding your awareness of the techniques and capabilities that will help you navigate those challenges.

From the book: "Transformer models have gained immense interest because of their effectiveness in all NLP tasks, from text classification to text generation... [and] effectively improve the performance of multilingual and multi-task NLP problems, as well as monolingual and single tasks."

This book is a practical guide to leveraging (and applying) some of the leading-edge concepts, algorithms, and libraries from the fields of Deep Learning (DL) and Natural Language Processing (NLP) to solve real-world problems, ranging from summarization to question-answering. In particular, this book will serve as a gentle guided tour of some of the important advances that have occurred (and continue to evolve) as the transformer architecture gradually evolved into an attention-based encoder-decoder architecture.

What I particularly liked:

  • The deep subject-matter experience and credentials of the authors ("Savaş Yıldırım graduated from the Istanbul Technical University Department of Computer Engineering and holds a Ph.D. degree in Natural Language Processing (NLP). Currently, he is an associate professor at the Istanbul Bilgi University, Turkey, and is a visiting researcher at the Ryerson University, Canada. He is a proactive lecturer and researcher with more than 20 years of experience teaching courses on machine learning, deep learning, and NLP.", "Meysam Asgari-Chenaghlu is an AI manager at Carbon Consulting and is also a Ph.D. candidate at the University of Tabriz.")
  • The companion "Code in Action" YouTube channel playlist for the book, and the GitHub repository with code examples.
  • The excellent quality/conciseness/crispness of the writing.
  • The extensive citation of relevant research papers, and references at the end of chapters.
  • The authors' deep practical knowledge, and discussions, of the advantages and disadvantages of different approaches.
  • The exquisitely balanced need for technical depth in the details covered by a given chapter, with the need to maintain a steady pace of educating and keeping the reader engaged. Some books go too deep, and some stay too shallow. This book is exceptionally well balanced at just the right depth.
  • The exceptional variety of examples covered.
  • The quality of the illustrations used to convey complex concepts; Figures 1.19, 3.2, 3.3, 7.8, and 9.3 are just a few examples of the many good diagrams.
  • Chapter 1's focus on getting the reader immediately involved in executing a hello-world example with Transformers, the overview of RNNs, FFNNs, LSTMs, and CNNs, and an excellent overview of the developments in NLP over the last 10 years that led to the Transformer architecture.
  • Chapter 2's guidance on installing the required software, and the suggestion of Google Colab as an alternative to Anaconda.
  • Chapter 2's coverage of community-provided models, benchmarks, TensorFlow, PyTorch, and Transformers, and running a simple Transformer from scratch.
  • Chapter 3's coverage of BERT, as well as ALBERT, RoBERTa, and ELECTRA.
  • Chapter 4's coverage of AR, GPT, BART, and NLG.
  • Chapter 5's coverage of fine-tuning language models for text classification (e.g., for sentiment analysis or multi-class classification).
  • Chapter 6's coverage of NER and POS was of particular interest, given the effort that I had to expend last year doing my own deep dive to prepare some recommendations for a client; I wish I had had this book then.
  • Chapter 7's coverage of USE and SBERT, zero-shot learning with BART, and FLAIR.
  • Chapter 8's discussion of efficient sparse transformers (Linformer and BigBird), as well as the techniques of distillation, pruning, and quantization to make efficient models out of trained models. Chapter 8 may well be worth the price of the book itself.
  • Chapter 9's coverage of multilingual and cross-lingual language model training (and pretraining). I found the discussion of "cross-lingual similarity tasks" (see p. 278) to be particularly interesting.
  • Chapter 10's coverage of Locust for load testing, FastAPI, and TensorFlow Extended (TFX), as well as the serving of solutions in environments where a CPU/GPU is available.
  • Chapter 11's coverage of visualization with exBERT and BertViz, as well as the discussion on tracking model training with TensorBoard and W&B.
  • The "Other Books You May Enjoy" section at the end of the book ("Getting Started with Google BERT" and "Mastering spaCy").

Suggestions for the next edition: the fonts used for the text in some figures (e.g., 3.8, 3.10, 3.12, 3.13, 3.14, 4.5, 4.6, 6.2, 6.7, 8.4, 8.6, 9.4, 9.5) appear to be a bit fuzzy in the PDF version of the book. Compare those with the clarity of Figure 6.6.
Amazon Verified review
Colbert Philippe Jun 29, 2022
5 stars
I love this book! It gives me the introduction and understanding I needed on the very important topic of Transformers. The book gives a bit of history and the current standard implementation of Transformers, and it also delves into the advanced topic of attention algorithms. This book is a "must have" for the seasoned AI practitioner working with language model Transformers.
Amazon Verified review
Daniel Eduardo Portugal Revilla Sep 15, 2021
4 stars
Currently, I have more experience as a data engineer, but Machine Learning (ML) and Deep Learning (DL) are no strangers to me. I have an attraction to NLP, so I had the opportunity to read Mastering Transformers. This is a fantastic all-in-one book about Transformers, which are the principal NLP state of the art and very futuristic. The book's main strength is its good step-by-step examples, among which I can mention BERT, the most popular Transformer model, by Google. Another nice feature is that you can learn the origins of NLP, the traditional techniques, and the evolution of ML and DL. You can also find information about the principal BERT variants like ALBERT, ELECTRA, and RoBERTa. I missed maybe some examples or information about BioBERT and ClinicalBERT for medical domains and medical apps. Finally, there is one of my favorite topics in ML: how to deploy and serve your model to production for real or experimental apps.
Amazon Verified review
Imokhai Sep 23, 2021
4 stars
The book is quite explanatory and has some good concepts. I recommend it for those in the data space.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned from reading 40 sections of any title within the payment cycle - a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage - subscription.packtpub.com - by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.