Natural Language Processing with Java
Techniques for building machine learning and neural network models for NLP

Product type: Paperback
Published: Jul 2018
ISBN-13: 9781788993494
Length: 318 pages
Edition: 2nd Edition
Authors (2): Ashish Bhatia, Richard M. Reese
Table of Contents (14 chapters)

Preface
1. Introduction to NLP
2. Finding Parts of Text
3. Finding Sentences
4. Finding People and Things
5. Detecting Part of Speech
6. Representing Text with Features
7. Information Retrieval
8. Classifying Texts and Documents
9. Topic Modeling
10. Using Parsers to Extract Relationships
11. Combined Pipeline
12. Creating a Chatbot
13. Other Books You May Enjoy

Dimensionality reduction


Word embeddings are now a basic building block of natural language processing. Whether produced by GloVe, word2vec, or any other embedding technique, the full embedding table is a two-dimensional matrix with one row per word in the vocabulary, while each individual word is represented by a one-dimensional vector. Dimensionality here refers to the length of these vectors, which is not the same as the size of the vocabulary. The following diagram, taken from https://nlp.stanford.edu/projects/glove/, shows vocabulary size versus vector dimensions:
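To make the distinction concrete, here is a minimal sketch of an embedding table (the class and field names are illustrative, not from any particular library): the table as a whole is a two-dimensional matrix of shape vocabulary size by vector dimensions, but looking up a single word yields a one-dimensional vector whose length is the embedding dimension, not the vocabulary size.

```java
import java.util.HashMap;
import java.util.Map;

public class EmbeddingTable {
    private final Map<String, Integer> vocab = new HashMap<>();
    private final float[][] matrix;   // shape: vocabulary size x embedding dimension

    public EmbeddingTable(String[] words, int dim) {
        this.matrix = new float[words.length][dim];
        for (int i = 0; i < words.length; i++) {
            vocab.put(words[i], i);
        }
    }

    // Looking up a word returns a 1-D vector: one row of the 2-D matrix
    public float[] vector(String word) {
        return matrix[vocab.get(word)];
    }

    public static void main(String[] args) {
        EmbeddingTable table = new EmbeddingTable(new String[]{"java", "nlp"}, 300);
        // The vector length is the embedding dimension (300 here),
        // regardless of how many words are in the vocabulary
        System.out.println(table.vector("java").length);
    }
}
```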

The other issue with high-dimensional embeddings is the memory required to use them in the real world: even simple 300-dimensional vectors over a vocabulary of more than a million tokens can take 6 GB or more of memory to process. Using that much memory is impractical in many real-world NLP applications, so the best approach is to reduce the number of dimensions and thereby shrink the vectors. t-Distributed Stochastic Neighbor Embedding (t-SNE) and principal component analysis (PCA) are two common techniques used to achieve dimensionality reduction. In the next...
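As a rough sanity check on the memory figure: one million 300-dimensional float vectors occupy about 1.2 GB of raw storage alone, and runtime overhead (object headers, copies, index structures) pushes practical usage well beyond that. The following is a minimal, self-contained PCA sketch, not a production implementation: it uses power iteration to find the first principal component of a set of vectors and projects each vector onto it, reducing the dimensionality to one. Real systems would keep more components and use an optimized linear algebra library.

```java
import java.util.Arrays;

public class PcaSketch {
    // Reduce each row of data to its projection onto the first principal component.
    static double[] projectToFirstPC(double[][] data) {
        int n = data.length, d = data[0].length;
        // 1. Center each column (subtract the mean)
        double[] mean = new double[d];
        for (double[] row : data)
            for (int j = 0; j < d; j++) mean[j] += row[j] / n;
        double[][] centered = new double[n][d];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < d; j++) centered[i][j] = data[i][j] - mean[j];
        // 2. Power iteration: repeatedly apply X^T X to converge on the top eigenvector
        double[] v = new double[d];
        Arrays.fill(v, 1.0);
        for (int iter = 0; iter < 200; iter++) {
            double[] next = new double[d];
            for (double[] row : centered) {
                double dot = 0;
                for (int j = 0; j < d; j++) dot += row[j] * v[j];
                for (int j = 0; j < d; j++) next[j] += dot * row[j];
            }
            double norm = 0;
            for (double x : next) norm += x * x;
            norm = Math.sqrt(norm);
            for (int j = 0; j < d; j++) v[j] = next[j] / norm;
        }
        // 3. Project each centered row onto the principal direction
        double[] reduced = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < d; j++) reduced[i] += centered[i][j] * v[j];
        return reduced;
    }

    public static void main(String[] args) {
        // Toy "embeddings": four 3-D points that vary along a single direction,
        // so one dimension captures all of the variance
        double[][] vectors = {{1, 1, 0}, {2, 2, 0}, {3, 3, 0}, {4, 4, 0}};
        System.out.println(Arrays.toString(projectToFirstPC(vectors)));
    }
}
```

Because the toy points lie on a line, their 1-D projections preserve the spacing between them exactly; with real embeddings, keeping the top 50 or 100 components typically retains most of the variance at a fraction of the memory cost.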
