Natural Language Processing with TensorFlow

You're reading from Natural Language Processing with TensorFlow: The definitive NLP book to implement the most sought-after machine learning models and tasks.

Product type: Paperback
Published: Jul 2022
Publisher: Packt
ISBN-13: 9781838641351
Length: 514 pages
Edition: 2nd Edition
Author: Thushan Ganegedara
Table of Contents (15 chapters)

Preface
1. Introduction to Natural Language Processing
2. Understanding TensorFlow 2
3. Word2vec – Learning Word Embeddings
4. Advanced Word Vector Algorithms
5. Sentence Classification with Convolutional Neural Networks
6. Recurrent Neural Networks
7. Understanding Long Short-Term Memory Networks
8. Applications of LSTM – Generating Text
9. Sequence-to-Sequence Learning – Neural Machine Translation
10. Transformers
11. Image Captioning with Transformers
Other Books You May Enjoy
Index
Appendix A: Mathematical Foundations and Advanced TensorFlow

Transformer architecture

A Transformer is a type of Seq2Seq model (discussed in the previous chapter) that can work with both image and text data. The Transformer model takes in a sequence of inputs and maps it to a sequence of outputs.

The Transformer model was originally proposed in the paper Attention Is All You Need by Vaswani et al. (https://arxiv.org/pdf/1706.03762.pdf). Just like a Seq2Seq model, the Transformer consists of an encoder and a decoder (Figure 10.1):

Figure 10.1: The encoder-decoder architecture
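To make the encoder-decoder composition in Figure 10.1 concrete, here is a minimal sketch in TensorFlow/Keras. This is not the book's implementation: the layer and vocabulary sizes are illustrative assumptions, a full Transformer would also include positional encodings, feed-forward sublayers, residual connections, and layer normalization, and the causal-mask flag assumes a TF release (2.10 or later) where MultiHeadAttention supports use_causal_mask.

import tensorflow as tf

# Illustrative sizes (assumptions for this sketch, not values from the book)
d_model, num_heads, src_vocab, tgt_vocab = 128, 4, 8000, 8000

# Encoder: embed source tokens and apply self-attention
src = tf.keras.Input(shape=(None,), dtype=tf.int32)  # source token IDs
src_emb = tf.keras.layers.Embedding(src_vocab, d_model)(src)
enc_out = tf.keras.layers.MultiHeadAttention(num_heads, d_model // num_heads)(
    query=src_emb, value=src_emb, key=src_emb)  # encoder self-attention

# Decoder: embed target tokens, mask future positions with a causal mask,
# then attend to the encoder's interim outputs (encoder-decoder attention)
tgt = tf.keras.Input(shape=(None,), dtype=tf.int32)  # target token IDs
tgt_emb = tf.keras.layers.Embedding(tgt_vocab, d_model)(tgt)
dec_self = tf.keras.layers.MultiHeadAttention(num_heads, d_model // num_heads)(
    query=tgt_emb, value=tgt_emb, key=tgt_emb, use_causal_mask=True)
dec_cross = tf.keras.layers.MultiHeadAttention(num_heads, d_model // num_heads)(
    query=dec_self, value=enc_out, key=enc_out)

# A distribution over the target vocabulary at every time step
logits = tf.keras.layers.Dense(tgt_vocab)(dec_cross)
transformer = tf.keras.Model(inputs=[src, tgt], outputs=logits)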

Let’s understand how the Transformer model works using the machine translation task we studied earlier. The encoder takes in a sequence of source-language tokens and produces a sequence of interim outputs. The decoder then takes in a sequence of target-language tokens and predicts the next token at each time step (the teacher forcing technique). Both the encoder and the decoder use attention mechanisms to improve performance. For...
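As a toy illustration of teacher forcing on the model sketched above (the token IDs and the <s>/</s> markers here are invented for the example), the decoder reads the gold target sequence shifted right by one position and is trained to predict the token that follows at each step:

# Toy batch: one source sentence and one target sentence
src_batch = tf.constant([[5, 23, 9, 2]])      # source token IDs
tgt_full = tf.constant([[1, 11, 42, 7, 2]])   # 1 = <s>, 2 = </s>

dec_inputs = tgt_full[:, :-1]   # what the decoder reads:  <s>, 11, 42, 7
dec_targets = tgt_full[:, 1:]   # what it must predict:    11, 42, 7, </s>

pred_logits = transformer([src_batch, dec_inputs])   # shape (1, 4, tgt_vocab)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(
    dec_targets, pred_logits)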
