Neural Networks with R

You're reading from Neural Networks with R: Build smart systems by implementing popular deep learning models in R

Product type: Paperback
Published in: Sep 2017
Publisher: Packt
ISBN-13: 9781788397872
Length: 270 pages
Edition: 1st Edition
Authors (2):
Balaji Venkateswaran
Giuseppe Ciaburro
Table of Contents (8)

Preface
1. Neural Network and Artificial Intelligence Concepts
2. Learning Process in Neural Networks
3. Deep Learning Using Multilayer Neural Networks
4. Perceptron Neural Network Modeling – Basic Models
5. Training and Visualizing a Neural Network in R
6. Recurrent and Convolutional Neural Networks
7. Use Cases of Neural Networks – Advanced Topics

Which activation functions to use?

Since neural networks must capture nonlinearity and greater complexity, the activation function we choose has to be robust enough to satisfy the following properties:

  • It should be differentiable; we will see why we need differentiation in backpropagation. It should not cause gradients to vanish.
  • It should be simple and fast to compute.
  • It should be zero-centered.

The sigmoid is the most widely used activation function, but it suffers from the following drawbacks (illustrated in the sketch after this list):

  • Since it is based on the logistic model, the computations are time-consuming and complex
  • It causes gradients to vanish, so that at saturation no signal passes through the neurons
  • It converges slowly
  • It is not zero-centered
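
A quick way to see the last three drawbacks is to evaluate the sigmoid and its derivative in base R (a minimal sketch; the names sigmoid and sigmoid_prime are ours):

    # Logistic sigmoid and its derivative
    sigmoid <- function(x) 1 / (1 + exp(-x))
    sigmoid_prime <- function(x) sigmoid(x) * (1 - sigmoid(x))

    x <- c(-10, -5, 0, 5, 10)
    sigmoid(x)        # outputs lie in (0, 1), so they are never zero-centered
    sigmoid_prime(x)  # peaks at 0.25 and is nearly 0 for |x| > 5:
                      # the vanishing gradient effect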

These drawbacks are solved by ReLU, which is simple and fast to compute. It does not suffer from the vanishing gradient problem and has shown vast improvements over the sigmoid and tanh functions. ReLU is the preferred activation function for neural networks and deep learning (DL) problems.
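
For comparison, here is a ReLU sketch in the same style (again, relu and relu_prime are our names; the derivative at 0 is set to 0 by convention):

    # ReLU and its (sub)derivative
    relu <- function(x) pmax(x, 0)
    relu_prime <- function(x) as.numeric(x > 0)

    x <- c(-10, -5, 0, 5, 10)
    relu(x)        # just a threshold, so it is cheap to compute
    relu_prime(x)  # the gradient is exactly 1 for every positive input,
                   # so it does not shrink as the input grows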

ReLU is used in the hidden layers, while the output layer can use a softmax function for classification problems and a linear function for regression problems.
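
As a minimal illustration of that layout, here is a toy forward pass in base R with random weights (purely illustrative, not a trained model; all names are ours):

    # One input through a ReLU hidden layer and a softmax output layer
    relu    <- function(x) pmax(x, 0)
    softmax <- function(z) { e <- exp(z - max(z)); e / sum(e) }

    set.seed(1)
    x  <- c(0.5, -1.2, 0.3)            # one observation with 3 features
    W1 <- matrix(rnorm(3 * 4), 3, 4)   # weights: 3 inputs -> 4 hidden units
    W2 <- matrix(rnorm(4 * 2), 4, 2)   # weights: 4 hidden units -> 2 classes

    h <- relu(x %*% W1)                # hidden activations
    softmax(h %*% W2)                  # class probabilities summing to 1

For a regression problem, the last line would simply be the linear output h %*% W2, with a single output column.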
