In this chapter, we began by understanding RNNs and how they allow us to capture sequential dependencies in data. We then examined the key limitation of RNNs: their inability to capture long-term dependencies because of the vanishing and exploding gradient problems. We also looked at the various forms an RNN can take, depending on the type of problem it is used to solve, and followed that with a brief discussion of two variants, bidirectional and deep RNNs. Going a step further, we saw how the vanishing and exploding gradient problem can be addressed by adding memory to the network, which led to an extensive discussion of the LSTM, a variant of the RNN built around a memory (cell) state. We then applied LSTMs to the problem of text generation, using them to generate descriptions of hotels in the city of Mumbai. Finally, we briefly discussed other memory-based variants of the RNN, including GRUs.