Hands-On Natural Language Processing with PyTorch 1.x

Build smart, AI-driven linguistic applications using deep learning and NLP techniques

Product type: Paperback
Published: Jul 2020
Publisher: Packt
ISBN-13: 9781789802740
Length: 276 pages
Edition: 1st Edition
Author: Thomas Dop
Table of Contents (14)

Preface
1. Section 1: Essentials of PyTorch 1.x for NLP
2. Chapter 1: Fundamentals of Machine Learning and Deep Learning
3. Chapter 2: Getting Started with PyTorch 1.x for NLP
4. Section 2: Fundamentals of Natural Language Processing
5. Chapter 3: NLP and Text Embeddings
6. Chapter 4: Text Preprocessing, Stemming, and Lemmatization
7. Section 3: Real-World NLP Applications Using PyTorch 1.x
8. Chapter 5: Recurrent Neural Networks and Sentiment Analysis
9. Chapter 6: Convolutional Neural Networks for Text Classification
10. Chapter 7: Text Translation Using Sequence-to-Sequence Neural Networks
11. Chapter 8: Building a Chatbot Using Attention-Based Neural Networks
12. Chapter 9: The Road Ahead
13. Other Books You May Enjoy

Chapter 5: Recurrent Neural Networks and Sentiment Analysis

In this chapter, we will look at Recurrent Neural Networks (RNNs), a variation of the basic feedforward neural networks in PyTorch that we learned how to build in Chapter 1, Fundamentals of Machine Learning and Deep Learning. Generally, RNNs can be used for any task where the data can be represented as a sequence, such as stock price prediction using a time series of historical prices. RNNs are commonly used in NLP because text can be thought of as a sequence of individual words and modeled as such. While a conventional neural network takes a single vector as input, an RNN can take a whole sequence of vectors. If we represent each word in a document as a vector embedding, we can represent the whole document as a sequence of vectors (or an order-3 tensor for a batch of documents). We can then use RNNs (and a more sophisticated form of RNN known as Long Short-Term Memory (LSTM)) to learn from our data.
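The idea of feeding a sequence of word embeddings into an LSTM can be sketched with a few lines of PyTorch. This is a minimal illustration under assumed names and dimensions (vocabulary size, embedding size, and hidden size are arbitrary here), not the exact model built later in the chapter:

```python
import torch
import torch.nn as nn

class SimpleLSTM(nn.Module):
    """Minimal sketch: embed word indices, run an LSTM over the sequence,
    and map the final hidden state to a single score (e.g. sentiment)."""

    def __init__(self, vocab_size=1000, embed_dim=50, hidden_dim=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, seq_len) tensor of word indices
        embedded = self.embedding(x)             # (batch, seq_len, embed_dim)
        output, (hidden, cell) = self.lstm(embedded)
        return self.fc(hidden[-1])               # final hidden state -> score

model = SimpleLSTM()
batch = torch.randint(0, 1000, (4, 12))  # 4 documents, 12 words each
scores = model(batch)
print(scores.shape)  # torch.Size([4, 1])
```

Note how the batch of documents is exactly the order-3 tensor mentioned above: after embedding, its shape is (batch, sequence length, embedding dimension).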

In this chapter...
