In this chapter, we began by understanding RNNs and how they capture sequential dependencies in data. We then examined the key limitation of RNNs: their inability to capture long-term dependencies because of vanishing and exploding gradients. We also looked at the various forms an RNN can take, depending on the type of problem it is being used to solve, and followed that up with a brief discussion of RNN variants such as bidirectional and deep RNNs. Going a step further, we saw how the vanishing and exploding gradient problem can be addressed by adding memory to the network, which led to an expansive discussion of the LSTM, a variant of the RNN built around a memory cell state. We applied LSTMs to the problem of text generation, using them to generate descriptions of hotels in the city of Mumbai. Finally, we had a brief discussion of other memory-based variants of the RNN, including GRUs.
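As a recap of the text-generation workflow covered in the chapter, the following is a minimal character-level sketch in Keras. The placeholder corpus, sequence length, and layer sizes are illustrative assumptions, not the chapter's actual hotel-description setup.

```python
# Minimal character-level LSTM text generation (illustrative sketch only;
# the corpus, sequence length, and layer sizes are assumptions, not the
# chapter's hotel-description configuration).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

corpus = "a small illustrative corpus about hotels in mumbai"  # placeholder text
chars = sorted(set(corpus))
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for c, i in char_to_idx.items()}
vocab = len(chars)

# Build (input sequence, next character) training pairs.
seq_len = 10
X, y = [], []
for i in range(len(corpus) - seq_len):
    X.append([char_to_idx[c] for c in corpus[i:i + seq_len]])
    y.append(char_to_idx[corpus[i + seq_len]])

# One-hot encode: inputs (samples, seq_len, vocab), targets (samples, vocab).
X = np.eye(vocab)[np.array(X)]
y = np.eye(vocab)[np.array(y)]

model = Sequential([
    Input(shape=(seq_len, vocab)),
    LSTM(128),                           # memory cell retains longer-range context
    Dense(vocab, activation="softmax"),  # distribution over the next character
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Generate new text by repeatedly predicting the next character.
generated = corpus[:seq_len]
for _ in range(50):
    x = np.eye(vocab)[[char_to_idx[c] for c in generated[-seq_len:]]][None, ...]
    generated += idx_to_char[int(np.argmax(model.predict(x, verbose=0)))]
print(generated)
```

In practice, a word-level model with an embedding layer and sampling from the softmax (rather than taking the argmax) tends to produce more varied text, but the overall pipeline of windowed sequences, an LSTM, and a softmax over the vocabulary is the same.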