
Keras embedding layer

The Keras embedding layer allows us to learn a vector space representation of an input word, as we did with word2vec, while we train our model. Using the functional API, the Keras embedding layer is always the second layer in the network, coming right after the input layer.

The embedding layer requires the following three arguments (illustrated in the sketch after this list):

  • input_dim: The size of the vocabulary of the corpus.
  • output_dim: The size of the vector space we want to learn. This corresponds to the number of neurons in the word2vec hidden layer.
  • input_length: The number of words in the text we're going to use in each observation. In the examples that follow, we will use a fixed length based on the longest text we need to handle, and we will pad shorter documents with 0s.
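
Putting these pieces together, here is a minimal sketch of an embedding layer in a functional API model. The vocabulary size, embedding dimension, sequence length, and the downstream layers are placeholder choices for illustration, not values taken from the book:

```python
from keras.layers import Input, Embedding, Flatten, Dense
from keras.models import Model

vocab_size = 10000      # input_dim: vocabulary size (placeholder)
embedding_dim = 100     # output_dim: size of the learned vector space (placeholder)
sequence_length = 140   # input_length: words per observation (placeholder)

# Input layer: each observation is a fixed-length sequence of word indices.
sequence_input = Input(shape=(sequence_length,), name="input")

# The embedding layer comes immediately after the input layer.
embedded = Embedding(input_dim=vocab_size,
                     output_dim=embedding_dim,
                     input_length=sequence_length,
                     name="embedding")(sequence_input)

# Placeholder downstream layers, just to complete the model.
flat = Flatten()(embedded)
prediction = Dense(1, activation="sigmoid")(flat)

model = Model(inputs=sequence_input, outputs=prediction)
model.compile(optimizer="adam", loss="binary_crossentropy")
```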

An embedding layer will output a 2D matrix for each input document that contains one vector for each word in the sequence specified by the input_length argument; that is, a matrix of shape (input_length, output_dim).
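
To make these shapes concrete, here is a hypothetical check that assumes the model sketched above. The document contents are made up, and pad_sequences is the standard Keras utility for the 0-padding mentioned earlier:

```python
from keras.preprocessing.sequence import pad_sequences

# Two made-up documents, already converted to word indices.
docs = [[12, 7, 256],
        [4, 88, 91, 2, 45]]

# Pad every document to the fixed input_length with trailing 0s.
padded = pad_sequences(docs, maxlen=sequence_length, padding="post")
print(padded.shape)  # (2, 140)

# Across a batch the embedding output is 3D: one 2D matrix per document.
print(model.get_layer("embedding").output_shape)  # (None, 140, 100)
```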
