Hands-On Deep Learning with R
A practical guide to designing, building, and improving neural network models using R

Product type: Paperback
Published: Apr 2020
Publisher: Packt
ISBN-13: 9781788996839
Length: 330 pages
Edition: 1st
Authors (2): Rodger Devine, Michael Pawlus
Table of Contents (16 chapters)

Preface
1. Section 1: Deep Learning Basics
2. Machine Learning Basics
3. Setting Up R for Deep Learning
4. Artificial Neural Networks
5. Section 2: Deep Learning Applications
6. CNNs for Image Recognition
7. Multilayer Perceptron for Signal Detection
8. Neural Collaborative Filtering Using Embeddings
9. Deep Learning for Natural Language Processing
10. Long Short-Term Memory Networks for Stock Forecasting
11. Generative Adversarial Networks for Faces
12. Section 3: Reinforcement Learning
13. Reinforcement Learning for Gaming
14. Deep Q-Learning for Maze Solving
15. Other Books You May Enjoy

Choosing the most appropriate activation function

Keras provides a number of different activation functions. Some of these have been discussed in previous chapters, while others have not yet been covered. We can begin by listing the ones we have already covered, with a quick note on each (a short code sketch follows the list):

  • Linear: Also known as the identity function; it simply passes the value of x through unchanged: f(x) = x.
  • Sigmoid: Divides 1 by 1 plus the exponential of negative x: f(x) = 1 / (1 + exp(-x)).
  • Hyperbolic tangent (tanh): The exponential of x minus the exponential of negative x, divided by the exponential of x plus the exponential of negative x: f(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)). This has the same S-shape as the sigmoid function; however, its range along the y-axis goes from -1 to 1 instead of from 0 to 1.
  • Rectified Linear Units (ReLU): Uses the value of x if x is greater than 0; otherwise, it assigns a value of 0 if x is less than or equal to 0: f(x) = max(0, x).
  • Leaky ReLU: Uses the same formula...
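
To make these formulas concrete, here is a minimal sketch in plain base R. The function names (linear, sigmoid, tanh_manual, relu) are our own, chosen for illustration; note that base R already provides tanh() directly:

```r
# Identity/linear activation: passes x through unchanged
linear <- function(x) x

# Sigmoid: squashes x into the range (0, 1)
sigmoid <- function(x) 1 / (1 + exp(-x))

# Hyperbolic tangent, written out from the formula above;
# same S-shape as sigmoid, but with range (-1, 1)
tanh_manual <- function(x) (exp(x) - exp(-x)) / (exp(x) + exp(-x))

# ReLU: x for positive inputs, 0 otherwise
relu <- function(x) pmax(x, 0)

# Compare the four activations on a few sample inputs
x <- c(-2, -0.5, 0, 0.5, 2)
round(rbind(linear  = linear(x),
            sigmoid = sigmoid(x),
            tanh    = tanh_manual(x),
            relu    = relu(x)), 3)
```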
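When building models with keras in R, these activations are normally selected by name through a layer's activation argument; leaky ReLU is instead applied as its own layer. A minimal sketch, with unit counts and a 10-feature input shape chosen purely for illustration:

```r
library(keras)

model <- keras_model_sequential() %>%
  # Built-in activations are selected by name
  layer_dense(units = 64, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 32, activation = "tanh") %>%
  # A dense layer with no activation is linear (identity) by default;
  # leaky ReLU is then applied as a separate activation layer
  layer_dense(units = 32) %>%
  layer_activation_leaky_relu(alpha = 0.1) %>%
  layer_dense(units = 1, activation = "sigmoid")
```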