The Deep Learning Workshop

Learn the skills you need to develop your own next-generation deep learning models with TensorFlow and Keras

Product type: Paperback
Published: Jul 2020
Publisher: Packt
ISBN-13: 9781839219856
Length: 474 pages
Edition: 1st Edition
Authors (5): Nipun Sadvilkar, Thomas Joseph, Anthony So, Mohan Kumar Silaparasetty, Mirza Rahim Baig
Table of Contents (9)

Preface
1. Building Blocks of Deep Learning
2. Neural Networks
3. Image Classification with Convolutional Neural Networks (CNNs)
4. Deep Learning for Text – Embeddings
5. Deep Learning for Sequences
6. LSTMs, GRUs, and Advanced RNNs
7. Generative Adversarial Networks
Appendix

Parameters in an LSTM

LSTMs are built on plain RNNs: if you simplified an LSTM by removing all of its gates and retained only the tanh function for the hidden state update, you would be left with a plain RNN. In an LSTM, the information – the new input data at time t and the previous hidden state at time t-1 (x_t and h_(t-1)) – passes through four times as many activations as in a plain RNN: once in the forget gate, twice in the update gate (a sigmoid and a tanh candidate), and once in the output gate. Since each activation has its own weight matrices, the number of weights/parameters in an LSTM is four times the number of parameters in a plain RNN.
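That factor of four can be made concrete by listing the per-step transforms; this is an illustrative sketch, and the transform names are descriptive labels rather than code from the book:

```python
# Per-step transforms that receive the input x_t and the previous
# hidden state h_(t-1). A plain RNN has one; an LSTM has four.
rnn_transforms = ["hidden-state update (tanh)"]
lstm_transforms = [
    "forget gate (sigmoid)",
    "update gate, gating part (sigmoid)",
    "update gate, candidate part (tanh)",
    "output gate (sigmoid)",
]

# Each transform carries its own weight matrices for x_t and h_(t-1),
# so the parameter count scales with the number of transforms.
factor = len(lstm_transforms) // len(rnn_transforms)
print(factor)  # 4
```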

In Chapter 5, Deep Learning for Sequences, in the section titled Parameters in an RNN, we calculated the number of parameters in a plain RNN and saw that we already have quite a few parameters to work with (n² + nk + nm, where n is the number of neurons in the hidden layer, m is the number of inputs, and k is the dimension of the output)...
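Putting those counts into code: the helper below uses the chapter's formula for a plain RNN (biases ignored) and multiplies the recurrent weights by four for the LSTM. The function names and example sizes are mine, not the book's:

```python
def rnn_params(n, m, k):
    """Plain RNN: n*n (hidden-to-hidden) + n*m (input-to-hidden)
    + n*k (hidden-to-output); biases ignored, as in the chapter's formula."""
    return n * n + n * m + n * k

def lstm_params(n, m, k):
    """LSTM: four copies of the recurrent weights (one set per activation),
    plus the same output projection n*k."""
    return 4 * (n * n + n * m) + n * k

n, m, k = 128, 50, 10  # hypothetical layer sizes
print(rnn_params(n, m, k))   # 24064
print(lstm_params(n, m, k))  # 92416
```

For reference, Keras's `model.summary()` reports 4*(n*m + n*n + n) parameters for an `LSTM` layer: the extra 4n comes from the bias terms that the formula above omits, and the output projection n*k is counted in the `Dense` layer that follows.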

The rest of the chapter is locked.