Keras Deep Learning Cookbook


Product type: Book
Published: Oct 2018
Publisher: Packt
ISBN-13: 9781788621755
Pages: 252
Edition: 1st
Authors (3): Rajdeep Dua, Sujit Pal, Manpreet Singh Ghotra

Table of Contents (17 chapters)

Title Page
Copyright and Credits
Packt Upsell
Contributors
Preface
1. Keras Installation
2. Working with Keras Datasets and Models
3. Data Preprocessing, Optimization, and Visualization
4. Classification Using Different Keras Layers
5. Implementing Convolutional Neural Networks
6. Generative Adversarial Networks
7. Recurrent Neural Networks
8. Natural Language Processing Using Keras Models
9. Text Summarization Using Keras Models
10. Reinforcement Learning
Other Books You May Enjoy
Index

Word embedding


Word embedding is an NLP technique for representing words and documents with dense vectors, in contrast to bag-of-words techniques, which use large sparse vectors. Embeddings are a class of NLP methods that aim to project the semantic meaning of words into a geometric space. This is accomplished by assigning a numeric vector to each word in a vocabulary so that the distance between any two vectors captures part of the semantic relationship between the two associated words. The geometric space formed by these vectors is called the embedding space.
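The geometric intuition above can be sketched with a toy example: if nearby vectors mean related words, then the cosine of the angle between two word vectors measures their similarity. The vectors below are made up for illustration, not trained.

```python
import numpy as np

# Toy embedding space: the vectors are illustrative, not learned.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.1, 0.2, 0.95]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors; values near 1
    # mean the words point in nearby directions in the embedding space.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
# sim_royal is much larger than sim_fruit, reflecting the closer
# semantic relationship between "king" and "queen".
```

In a trained embedding space the same calculation works on real, learned vectors.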

The two most popular techniques for learning word embeddings are Global Vectors for Word Representation (GloVe) and Word2vec.
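Word2vec's skip-gram variant, for instance, learns embeddings by training a model to predict the context words surrounding each target word. As a hedged sketch of just the data-preparation step, the following generates (target, context) training pairs from a sentence; the window size and sentence are illustrative.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) pairs within a symmetric window."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the target itself
                pairs.append((target, tokens[j]))
    return pairs

sentence = "the cat sat on the mat".split()
pairs = skipgram_pairs(sentence, window=1)
# e.g. ("cat", "the") and ("cat", "sat") are among the pairs
```

A model trained to predict the context from the target (or vice versa) ends up with an internal weight matrix whose rows serve as the word embeddings.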

In the following sections, we will be processing sample documents through the neural network with and without the embedding layer.

Getting ready

In the first case, we will not use any pre-trained word embeddings from Keras. Keras provides an embedding...
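Conceptually, an embedding layer such as the one Keras provides is a trainable lookup table mapping integer word indices to dense vectors. The minimal NumPy sketch below shows that lookup; the vocabulary size, output dimension, and random weights are illustrative stand-ins for values learned during training.

```python
import numpy as np

# Illustrative sizes; in Keras these correspond to the layer's
# input_dim (vocabulary size) and output_dim (vector length).
rng = np.random.default_rng(0)
vocab_size, output_dim = 50, 8

# In a real embedding layer this matrix is learned during training.
weights = rng.normal(size=(vocab_size, output_dim))

def embed(word_indices):
    # Integer-encoded sequence -> matrix of shape (len(sequence), output_dim)
    return weights[np.asarray(word_indices)]

sequence = [4, 1, 17, 3]   # e.g. the output of a tokenizer
vectors = embed(sequence)  # shape (4, 8): one dense vector per word
```

Each input index simply selects a row of the weight matrix, which is why embedding lookups are cheap even for very large vocabularies.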
