Deep Learning with TensorFlow 2 and Keras

Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API

Product type: Paperback
Published: December 2019
Publisher: Packt
ISBN-13: 9781838823412
Length: 646 pages
Edition: 2nd Edition
Authors (3): Dr. Amita Kapoor, Sujit Pal, Antonio Gulli
Table of Contents (19)

Preface
1. Neural Network Foundations with TensorFlow 2.0
2. TensorFlow 1.x and 2.x
3. Regression
4. Convolutional Neural Networks
5. Advanced Convolutional Neural Networks
6. Generative Adversarial Networks
7. Word Embeddings
8. Recurrent Neural Networks
9. Autoencoders
10. Unsupervised Learning
11. Reinforcement Learning
12. TensorFlow and Cloud
13. TensorFlow for Mobile and IoT and TensorFlow.js
14. An introduction to AutoML
15. The Math Behind Deep Learning
16. Tensor Processing Unit
17. Other Books You May Enjoy
18. Index

History

The basics of continuous backpropagation were proposed by Henry J. Kelley [1] in 1960 using dynamic programming. Stuart Dreyfus proposed using the chain rule in 1962 [2]. Paul Werbos was the first to propose applying backpropagation to neural networks, in his 1974 PhD thesis [3]. However, it was only in 1986 that backpropagation gained widespread success, with the work of David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams published in Nature [4]. In 1987, Yann LeCun described the modern version of backpropagation currently used for training neural networks [5].

The basic intuition of SGD was introduced by Robbins and Monro in 1951, in a context different from neural networks [6]. It was only in 2012 – 52 years after backpropagation was first introduced – that AlexNet [7] achieved a top-5 error of 15.3% in the ImageNet 2012 Challenge using GPUs. According to The Economist [8], "Suddenly people started to pay attention, not just within the AI community...
