Implementing high-performance models with CNTK
Not too long ago, Microsoft also introduced its own open source deep learning framework: the Microsoft Cognitive Toolkit, better known as CNTK. CNTK is written in C++ for performance reasons and provides a Python API. It supports GPUs and multi-GPU usage.
How to do it...
- First, we install CNTK with pip as follows:
pip install https://cntk.ai/PythonWheel/GPU/cntk-2.2-cp35-cp35m-linux_x86_64.whl
Adjust the wheel file if necessary (see https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-Linux-Python?tabs=cntkpy22).
- After installing CNTK, we can import it into our Python environment:
import cntk
- Let's create some simple dummy data that we can use for training:
import numpy as np
x_input = np.array([[1,2,3,4,5]], np.float32)
y_input = np.array([[10]], np.float32)
- Next, we need to define the input variables (CNTK's counterpart of placeholders) that will hold the data:
X = cntk.input_variable(5, np.float32)
y = cntk.input_variable(1, np.float32)
- With CNTK, it's straightforward to stack multiple layers. We stack a fully connected layer with 32 hidden units followed by an output layer with a single output:
from cntk.layers import Dense, Sequential
model = Sequential([Dense(32),
                    Dense(1)])(X)
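If you want to double-check what Sequential wired together, the composed function exposes its learnable parameters. The following optional check (our addition, not part of the original recipe) prints their shapes, which are inferred from the 5-dimensional input variable X:
# Optional sanity check: list the parameters created by the two Dense layers;
# shapes are inferred from the 5-dimensional input variable X
for parameter in model.parameters:
    print(parameter.name, parameter.shape)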
- Next, we define the loss function:
loss = cntk.squared_error(model, y)
- Now, we are ready to finalize our model with an optimizer:
learning_rate = 0.001
trainer = cntk.Trainer(model, (loss), cntk.adagrad(model.parameters, learning_rate))
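Note that, depending on your CNTK version, the learner may expect an explicit learning rate schedule rather than a plain float. If you run into a type error here, one possible workaround (an assumption on our part, not taken from the recipe) is to wrap the rate as follows:
# Assumption: wrap the scalar learning rate in a per-minibatch schedule
# before passing it to the adagrad learner
lr_schedule = cntk.learning_rate_schedule(learning_rate, cntk.UnitType.minibatch)
trainer = cntk.Trainer(model, (loss), cntk.adagrad(model.parameters, lr_schedule))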
- Finally, we can train our model as follows:
for epoch in range(10):
    trainer.train_minibatch({X: x_input, y: y_input})
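The loop above trains silently. To verify that the loss actually decreases and to inspect the prediction afterwards, you could extend it slightly; the variant below is a sketch that relies only on the trainer and model objects defined earlier:
for epoch in range(10):
    trainer.train_minibatch({X: x_input, y: y_input})
    # Loss of the minibatch that was just processed
    print('Epoch {}: loss = {:.4f}'.format(epoch + 1, trainer.previous_minibatch_loss_average))

# Evaluate the trained function on the training input
print(model.eval({X: x_input}))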
As we have demonstrated in this introduction, it is straightforward to build models in CNTK when using the high-level layer wrappers. However, just as with TensorFlow and PyTorch, you can also implement your model at a more granular level, which gives you a lot of freedom; the sketch below illustrates that style.
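To give a flavor of that lower-level approach, here is a minimal sketch (our illustration, not code from the recipe) that builds a plain linear regression by declaring the parameters and operations by hand instead of using the layers API:
import numpy as np
import cntk

# Input variables, analogous to the ones used above
x = cntk.input_variable(5, np.float32)
t = cntk.input_variable(1, np.float32)

# Declare the learnable parameters explicitly instead of relying on Dense
W = cntk.parameter(shape=(5, 1), init=cntk.glorot_uniform())
b = cntk.parameter(shape=(1,))

# Build the computation graph by hand: a simple linear model
z = cntk.times(x, W) + b
loss = cntk.squared_error(z, t)

# Train with plain SGD on the same dummy data as before
lr_schedule = cntk.learning_rate_schedule(0.01, cntk.UnitType.minibatch)
trainer = cntk.Trainer(z, (loss), cntk.sgd(z.parameters, lr_schedule))

for _ in range(10):
    trainer.train_minibatch({x: np.array([[1, 2, 3, 4, 5]], np.float32),
                             t: np.array([[10]], np.float32)})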