A brief introduction to Keras
Keras is a high-level neural network API that can run on top of TensorFlow, a library for dataflow programming. TensorFlow runs the operations needed for a neural network in a highly optimized way, while Keras is much easier to use than TensorFlow directly. Because Keras acts as an interface to TensorFlow, it makes it easier to build even complex neural networks. Throughout the rest of the book, we will be working with the Keras library in order to build our neural networks.
Importing Keras
When importing Keras, we usually just import the modules we will use. In this case, we need two types of layers:
- The Dense layer is the plain layer that we have gotten to know in this chapter
- The Activation layer allows us to add an activation function
We can import them simply by running the following code:
from keras.layers import Dense, Activation
Keras offers two ways to build models, through the sequential and the functional APIs. Because the sequential API is easier to use and allows more rapid model building, we will be using it for most of the book. However, in later chapters, we will take a look at the functional API as well.
We can access the sequential API through this code:
from keras.models import Sequential
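For comparison, here is a sketch of how a small two-layer model, like the one we are about to build, would look in the functional API; we will cover this API properly later, so treat it only as a preview:
import keras
from keras.models import Model
from keras.layers import Input, Dense, Activation

inputs = Input(shape=(2,))          # two input features per sample
x = Dense(3)(inputs)                # hidden layer, linear step
x = Activation('tanh')(x)           # hidden layer, activation
x = Dense(1)(x)                     # output layer, linear step
outputs = Activation('sigmoid')(x)  # output layer, activation

model = Model(inputs=inputs, outputs=outputs)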
A two-layer model in Keras
Building a neural network in the sequential API works as follows.
Stacking layers
Firstly, we create an empty sequential model with no layers:
model = Sequential()
Then we can add layers to this model, just like stacking a layer cake, with model.add().
For the first layer, we have to specify the input dimensions of the layer. In our case, the data has two features, the coordinates of the point. We can add a hidden layer of size 3 with the following code:
model.add(Dense(3, input_dim=2))
Note how we nest the functions inside model.add(). We specify the Dense layer, and the positional argument is the size of the layer. This Dense layer now only does the linear step. Note also that only the first layer needs the input dimensions; Keras infers the input size of every later layer from the layer before it.
To add a tanh activation function, we call the following:
model.add(Activation('tanh'))
Then, we add the linear step and the activation function of the output layer in the same way:
model.add(Dense(1))
model.add(Activation('sigmoid'))
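As a side note, Keras also accepts the activation as an argument to Dense itself, so the four add calls above could be collapsed into two. Both versions define the same computation, but we will stick with explicit Activation layers in this chapter, which is also what the summary below reflects:
model = Sequential()
model.add(Dense(3, input_dim=2, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))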
Then to get an overview of all the layers we now have in our model, we can use the following command:
model.summary()
This yields the following overview of the model:
out:
Layer (type)                 Output Shape              Param #
=================================================================
dense_3 (Dense)              (None, 3)                 9
_________________________________________________________________
activation_3 (Activation)    (None, 3)                 0
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 4
_________________________________________________________________
activation_4 (Activation)    (None, 1)                 0
=================================================================
Total params: 13
Trainable params: 13
Non-trainable params: 0
You can see the layers listed nicely, including their output shape and the number of parameters each layer has. The parameter counts also check out: the first Dense layer has 2 × 3 weights plus 3 biases, giving 9 parameters, the output Dense layer has 3 × 1 weights plus 1 bias, giving 4, and the Activation layers have no parameters at all. None, located within the output shape, means that the layer has no fixed size in that dimension and will accept whatever we feed it. In our case, it means the layer will accept any number of samples.
In pretty much every network, you will see that the first dimension of the input is variable like this in order to accommodate varying numbers of samples.
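To see this flexibility in action, we can feed the model batches of different sizes; a quick sketch, assuming numpy is imported as usual:
import numpy as np

# the same model accepts batches of any size along the first dimension
print(model.predict(np.zeros((5, 2))).shape)    # (5, 1)
print(model.predict(np.zeros((128, 2))).shape)  # (128, 1)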
Compiling the model
Before we can start training the model, we have to specify how exactly we want to train it; most importantly, we need to specify which optimizer and which loss function we want to use.
The simple optimizer we have used so far is called stochastic gradient descent, or SGD. To look at more optimizers, see Chapter 2, Applying Machine Learning to Structured Data.
The loss function we use for this binary classification problem is called binary cross-entropy. We can also specify which metrics we want to track during training. In our case, accuracy, or just acc to keep it short, would be interesting to track:
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['acc'])
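If we want more control over the optimizer, we can pass an optimizer instance instead of a string. For example, the following sketch sets the learning rate of SGD explicitly; 0.01 is also Keras' default, so this is equivalent to passing 'sgd':
from keras.optimizers import SGD

# an SGD instance lets us configure hyperparameters such as the learning rate
model.compile(optimizer=SGD(lr=0.01),
              loss='binary_crossentropy',
              metrics=['acc'])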
Training the model
Now we are ready to run the training process, which we can do with the following line:
history = model.fit(X, y, epochs=900)
This will train the model for 900 passes over the entire dataset, which are referred to as epochs. The output should look similar to this:
Epoch 1/900
200/200 [==============================] - 0s 543us/step - loss: 0.6840 - acc: 0.5900
Epoch 2/900
200/200 [==============================] - 0s 60us/step - loss: 0.6757 - acc: 0.5950
...
Epoch 899/900
200/200 [==============================] - 0s 90us/step - loss: 0.2900 - acc: 0.8800
Epoch 900/900
200/200 [==============================] - 0s 87us/step - loss: 0.2901 - acc: 0.8800
The full output of the training process has been truncated in the middle to save space in the book, but you can see that the loss goes down continuously while accuracy goes up. In other words, success!
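The history object returned by model.fit records the loss and all tracked metrics for every epoch, so we can also plot the training run; a minimal sketch using matplotlib:
import matplotlib.pyplot as plt

# history.history is a dictionary with one list of values per tracked quantity
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['acc'], label='acc')
plt.xlabel('epoch')
plt.legend()
plt.show()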
Over the course of this book, we will be adding more bells and whistles to these methods. But at this moment, we have a pretty solid understanding of the theory of deep learning. We are just missing one building block: how does Keras actually work under the hood? What is TensorFlow? And why does deep learning work faster on a GPU?
We will be answering these questions in the next, and final, section of this chapter.
Keras and TensorFlow
Keras is a high-level library and can be used as a simplified interface to TensorFlow. That means Keras does not do any computations by itself; it is just a simple way to interact with TensorFlow, which is running in the background.
TensorFlow is a software library developed by Google and is very popular for deep learning. In this book, we usually try to work with TensorFlow only through Keras, since that is easier than working with TensorFlow directly. However, sometimes we might want to write a bit of TensorFlow code in order to build more advanced models.
The goal of TensorFlow is to run the computations needed for deep learning as quickly as possible. It does so, as the name gives away, by working with tensors in a data flow graph. Starting in version 1.7, Keras is now also a core part of TensorFlow.
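To make the dataflow graph idea concrete, here is a minimal sketch in the TensorFlow 1.x style that matches the version discussed here: we first describe the computation as operations on tensors, and a session then actually executes the graph:
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])    # a 1 x 2 tensor
w = tf.constant([[3.0], [4.0]])  # a 2 x 1 tensor
y = tf.matmul(a, w)              # adds a matrix multiplication node to the graph

with tf.Session() as sess:       # nothing is computed until the session runs
    print(sess.run(y))           # [[11.]]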
Because Keras is now part of TensorFlow, we could import the Keras layers by running the following:
from tensorflow.keras.layers import Dense, Activation
This book will treat Keras as a standalone library, for two reasons: you might want to use a different backend for Keras one day, and it keeps the code cleaner if we have shorter import statements.
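If you ever want to confirm which backend your standalone Keras installation is talking to, you can ask Keras directly; in our setup this should print tensorflow:
from keras import backend as K

# reports the name of the backend Keras is currently using
print(K.backend())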