Deep Learning with MXNet Cookbook: Discover an extensive collection of recipes for creating and implementing AI models on MXNet

Up and Running with MXNet

MXNet is one of the most widely used deep learning frameworks and is an Apache open source project. Before 2016, Amazon Web Services (AWS) did not have a preferred deep learning framework for its research efforts, and each team researched and developed according to its own choices. Although some deep learning frameworks have thriving communities, AWS was sometimes unable to get code bugs fixed at the required speed, among other issues. To address this, at the end of 2016, AWS announced MXNet as its deep learning framework of choice and invested in internal teams to develop it further. Institutions that support MXNet include Intel, Baidu, Microsoft, Carnegie Mellon University, and MIT, among others. It was co-developed by Carlos Guestrin at Carnegie Mellon University and the University of Washington (along with GraphLab).

Some of its advantages are as follows:

  • Imperative/symbolic programming and hybridization (which will be covered in Chapters 1 and 9)
  • Support for multiple GPUs and distributed training (which will be covered in Chapters 7 and 8)
  • Highly optimized for inference production systems (which will be covered in Chapters 7 and 9)
  • A large number of pre-trained models on its Model Zoos in the fields of computer vision and natural language processing, among others (covered in Chapters 6, 7, and 8)
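
The first advantage listed, imperative/symbolic programming with hybridization, can be previewed with a tiny Gluon example. The following is a minimal sketch (the layer sizes are arbitrary, and hybrid blocks are covered properly later in the book):

import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(64, activation='relu'), nn.Dense(10))
net.initialize()
net.hybridize()  # switch from imperative execution to a compiled symbolic graph

x = mx.nd.random.uniform(shape=(1, 100))
print(net(x).shape)  # (1, 10)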

To start working with MXNet, we need to install the library. Several different versions of MXNet are available to install, and in this chapter, we will cover how to choose the right one. The most important factor is the hardware we have available: to optimize performance, it is always best to make full use of it. We will compare operations in a well-known numerical computing library, NumPy, with the equivalent operations in MXNet, and then compare the performance of the different MXNet versions against NumPy.

MXNet includes its own deep learning API, Gluon. In addition, Gluon provides dedicated libraries for computer vision and natural language processing that include pre-trained models and utilities. These libraries are known as GluonCV and GluonNLP.
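
As a taste of what these libraries offer, the following minimal sketch loads a pre-trained image classification model from the GluonCV model zoo and a set of pre-trained word embeddings from GluonNLP (the model and embedding names are illustrative choices; installation is covered in the first recipe of this chapter):

from gluoncv import model_zoo
import gluonnlp as nlp

# Pre-trained ResNet-18 from the GluonCV model zoo
net = model_zoo.get_model('resnet18_v1', pretrained=True)

# Pre-trained GloVe word embeddings from GluonNLP
glove = nlp.embedding.create('glove', source='glove.6B.50d')

print(net.name, glove.idx_to_vec.shape)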

In this chapter, we will cover the following topics:

  • Installing MXNet, Gluon, GluonCV, and GluonNLP
  • NumPy and MXNet ND arrays – comparing their performance

Technical requirements

Apart from the technical requirements specified in the Preface, no other requirements apply to this chapter.

The code for this chapter can be found at the following GitHub URL: https://github.com/PacktPublishing/Deep-Learning-with-MXNet-Cookbook/tree/main/ch01.

Furthermore, you can access each recipe directly from Google Colab – for example, use the following link for the first recipe of this chapter: https://colab.research.google.com/github/PacktPublishing/Deep-Learning-with-MXNet-Cookbook/blob/main/ch01/1_1_Installing_MXNet.ipynb.

Installing MXNet, Gluon, GluonCV, and GluonNLP

To get the maximum performance out of the available software (programming languages) and hardware (CPUs and GPUs), there are different versions of the MXNet library available to install. In this recipe, we will learn how to choose and install the right one.

Getting ready

Before getting started with the MXNet installation, let us review the available versions of the software packages we will use, including MXNet. We do this because our hardware configuration must match the chosen versions of our software packages in order to maximize performance:

  • Python: MXNet is available for different programming languages – Python, Java, R, and C++, among others. We will use MXNet for Python, and Python 3.7+ is recommended.
  • Jupyter: Jupyter is an open source web application that provides an easy-to-use interface to show Markdown text, working code, and data visualizations. It is very useful for understanding deep learning, as we can describe concepts, write the code to run through those concepts, and visualize the results (typically comparing them with the input data). Jupyter Core 4.5+ is recommended.
  • CPUs and GPUs: MXNet can work with any hardware configuration – that is, any single CPU can run MXNet. However, there are several hardware components that MXNet can leverage to improve performance:
    • Intel CPUs: Intel developed a library known as Math Kernel Library (MKL) for optimized math operations. MXNet has support for this library, and using the optimized version can improve certain operations. Any modern version of Intel MKL is sufficient.
    • NVIDIA GPUs: NVIDIA developed a library known as Compute Unified Device Architecture (CUDA) for optimized parallel operations (such as matrix operations, which are very common in deep learning). MXNet has support for this library, and using the optimized version can dramatically improve large deep learning workloads, such as model training. CUDA 11.0+ is recommended.
  • MXNet version: At the time of writing, MXNet 1.9.1 is the most up-to-date stable version released. All the code throughout the book has been verified with this version. MXNet, and deep learning in general, is a live, ongoing project, so new versions will be released periodically. These new versions will bring improved functionality and new features, but they might also contain breaking changes to previous APIs. If you are revisiting this book after a newer version with breaking changes has been released, this recipe also describes how to install a specific MXNet version.

Tip

I have used Google Colab as the platform to run the code described in this book. At the time of writing, it provides Python 3.10.12, up-to-date Jupyter libraries, Intel CPUs (Xeon @ 2.3 GHz), and NVIDIA GPUs (which can vary: K80s, T4s, P4s, and P100s) with CUDA 11.8 pre-installed. Therefore, minimal steps are required to install MXNet and get it running.

How to do it...

Throughout the book, we will make extensive use not only of code but also of clarifying comments and headings within that code to provide structure, as well as several types of visual information, such as images and generated graphs. For these reasons, we will use Jupyter as the supporting development environment. Moreover, to facilitate setup, installation, and experimentation, we will use Google Colab.

Google Colab is a hosted Jupyter Notebook service that requires no setup to use, while providing free access to computing resources, including GPUs. In order to set up Google Colab properly, this section is divided into two main points:

  • Setting up the notebook
  • Verifying and installing libraries

Important note

If you prefer, you can use any local environment that supports Python 3.7+, such as Anaconda, or any other Python distribution. This is highly encouraged if your hardware specifications are better than Google Colab’s offering, as better hardware will reduce computation time.

Setting up the notebook

In this section, we will learn how to work with Google Colab and set up a new notebook, which we will use to verify our MXNet installation:

  1. Open your favorite web browser. In my case, I have used Google Chrome as the web browser throughout the book. Visit https://colab.research.google.com/ and click on NEW NOTEBOOK.
Figure 1.1 – The Google Colab start screen

  2. Change the title of the notebook – for example, as you can see in the following screenshot, I have changed the title to DL with MXNet Cookbook 1.1 Installing MXNet.
Figure 1.2 – A Google Colab notebook

  3. Change your Google Colab runtime type to use a GPU:
    1. Select Change runtime type from the Runtime menu.
Figure 1.3 – Change runtime type

    2. In Notebook settings, select GPU as the Hardware accelerator option.
Figure 1.4 – Hardware accelerator | GPU

Verifying and installing libraries

In this section, go to the first cell (make sure it is a code cell) and type the following commands:

  1. Verify the Python version by typing the following:
    import platform
    platform.python_version()

    This will yield an output as follows:

    3.7.10

    Check the version, and make sure that it is 3.7+.

Important note

In Google Colab, you can directly run commands as if you were in the Linux Terminal by prefixing them with the ! character. Feel free to try other commands, such as !ls.

  2. We now need to verify the Jupyter version (Jupyter Core 4.5.0 or above will suffice):
    !jupyter --version

    This is one potential output from the previous command:

    jupyter core : 4.5.0
    jupyter-notebook : 5.2.2
    qtconsole : 4.5.2
    ipython : 5.5.0
    ipykernel : 4.10.1
    jupyter client : 5.3.1
    jupyter lab : not installed
    nbconvert : 5.5.0
    ipywidgets : 7.5.0
    nbformat : 4.4.0
    traitlets : 4.3.2

Tip

Jupyter, an open source notebook application, is assumed to be installed, as is the case for Google Colab. For further instructions on how to install it, visit https://jupyter.org/install.

  3. Verify whether an Intel CPU is present in the hardware:
    !lscpu | grep 'Model name'

    This will yield a similar output to the following:

    Model name: Intel(R) Xeon(R) CPU @ 2.20GHz

    The more up to date the processor, the better; however, for the purposes of this book, performance depends more on the GPU than on the CPU.

  4. Verify that an NVIDIA GPU is present in the hardware (devices will be listed in the output) and that NVIDIA CUDA is installed:
    !nvidia-smi

    This will yield a similar output to the following:

     +-----------------------------------------------------------------------------+
     | NVIDIA-SMI 460.67       Driver Version: 460.32.03    CUDA Version: 11.2     |
     |-------------------------------+----------------------+----------------------+
     | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
     | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
     |                               |                      |               MIG M. |
     |===============================+======================+======================|
     |   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
     | N/A   37C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
     |                               |                      |                  N/A |
     +-------------------------------+----------------------+----------------------+
     +-----------------------------------------------------------------------------+
     | Processes:                                                                  |
     |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
     |        ID   ID                                                   Usage      |
     |=============================================================================|
     |  No running processes found                                                 |
     +-----------------------------------------------------------------------------+

Important note

CUDA 11.0 has known issues with the NVIDIA K80. If you have an NVIDIA K80 and are having issues with the examples described, uninstall CUDA 11.0 and install CUDA 10.2. Afterward, install MXNet for CUDA 10.2 following the steps described here.

  5. Verify that the CUDA version is 11.0 or above:
    !nvcc --version

    This will yield a similar output to the following:

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2020 NVIDIA Corporation
    Built on Wed_Jul_22_19:09:09_PDT_2020
    Cuda compilation tools, release 11.0, V11.0.221
    Build cuda_11.0_bu.TC445_37.28845127_0
  6. Install MXNet, depending on your hardware configuration. The following are the different MXNet versions that you can install:
    • Recommended/Google Colab: The latest MXNet version (1.9.1) with GPU support:
      !python3 -m pip install mxnet-cu117
    • No Intel CPU nor NVIDIA GPU: Install MXNet with the following command:
      !python3 -m pip install mxnet
    • Intel CPU without NVIDIA GPU: Install MXNet with Intel MKL, with the following command:
      !python3 -m pip install mxnet-mkl
    • No Intel CPU with NVIDIA GPU: Install MXNet with NVIDIA CUDA 10.2, with the following command:
      !python3 -m pip install mxnet-cu102
    • Intel CPU and NVIDIA GPU: Install MXNet with Intel MKL and NVIDIA CUDA 11.0, with the following command:
      !python3 -m pip install mxnet-cu110

Tip

pip3, a Python 3 package manager, is assumed to be installed, as is the case for Google Colab. If a different installation method for MXNet is preferred, visit https://mxnet.apache.org/versions/master/get_started for instructions.

From version 1.6.0 onward, MXNet is released with the Intel MKL library extension by default; therefore, there is no need to add the mkl suffix when installing the most recent versions, as seen previously in the recommended installation.

  7. Verify that the MXNet installation has been successful with the following two steps:
    1. The following commands must not return any error and must successfully display MXNet version 1.9.1:
    import mxnet
    mxnet.__version__
    2. Verify that the list of enabled features includes the CUDA, CUDNN, and MKLDNN features:
    features = mxnet.runtime.Features()
    print(features)
    print(features.is_enabled('CUDA'))
    print(features.is_enabled('CUDNN'))
    print(features.is_enabled('MKLDNN'))

    The output will list all the available features, and each of the three checks will print True.

  8. Install GluonCV and GluonNLP:
    !python3 -m pip install gluoncv gluonnlp

This command will install the latest versions of GluonCV and GluonNLP, which at the time of writing were 0.10 for both.
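
Once everything is installed, the checks above can be combined into a single cell. The following is a minimal sketch using the same calls as in the previous steps:

import mxnet as mx
import gluoncv
import gluonnlp

print("MXNet   :", mx.__version__)
print("GluonCV :", gluoncv.__version__)
print("GluonNLP:", gluonnlp.__version__)

# Confirm that the hardware-accelerated features are enabled
features = mx.runtime.Features()
for feature in ('CUDA', 'CUDNN', 'MKLDNN'):
    print(feature, "enabled:", features.is_enabled(feature))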

How it works...

The training, inference, and evaluation of deep learning networks are highly complex operations, involving hardware and several layers of software, including drivers, low-level performance libraries such as MKL and CUDA, and high-level programming languages and libraries such as Python and MXNet.

Important note

MXNet is an actively developed project, part of the Apache Incubator program. Therefore, new versions are expected to be released, and they might contain breaking changes. The preceding command will install the latest stable version available. Throughout this book, the version of MXNet used is 1.9.1. If your code fails and it uses a different MXNet version, try installing MXNet version 1.9.1 by running the following:

!python3 -m pip install mxnet-cu117==1.9.1

By checking all the hardware and software components, we can install the most optimized version of MXNet. We have used Google Colab, but the steps transfer easily to other local configurations, such as an Anaconda distribution.

Moreover, we can identify the right combination of CUDA drivers and MXNet versions that will maximize performance and verify a successful installation.

There’s more…

It is highly recommended to always use the latest versions of all the software components discussed. Deep learning is an evolving field, and there are always improvements, such as new functionality, changes to the APIs, and updates to internal functions that increase performance, among other changes.

However, it is very important that all components (CPU, GPU, CUDA, and the MXNet version) are compatible. To match these components, it is highly recommended to visit https://mxnet.apache.org/versions/master/get_started and check for the latest CUDA and MXNet versions you can install to maximize your hardware performance.

As an example, for a Python 3-based Linux distribution using pip3, that page lists the available MXNet versions (with or without CPU acceleration, and with or without GPU acceleration).

If you are interested in knowing more about Intel’s MKL, the following link is a very good starting point: https://software.intel.com/content/www/us/en/develop/articles/getting-started-with-intel-optimization-for-mxnet.html.

NumPy and MXNet ND arrays

If you have worked with data previously in Python, chances are you have found yourself working with NumPy and its N-dimensional arrays (ND arrays). These are also known as tensors; the 0D variants are called scalars, the 1D variants are called vectors, and the 2D variants are called matrices.

MXNet provides its own ND array type, and there are two different ways to work with it. On the one hand, there is the nd module, MXNet’s native and optimized way to work with MXNet ND arrays. On the other hand, there is the np module, which has the same interface and syntax as the NumPy ND array type and has also been optimized, although it is limited by those interface constraints. With MXNet ND arrays, we can leverage MXNet’s underlying engine, with compute optimizations such as Intel MKL and/or NVIDIA CUDA, if our hardware configuration is compatible. This means we will be able to use almost the same syntax as when working with NumPy, but accelerated by the MXNet engine and our GPUs, which NumPy does not support.
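
As a quick illustration of the two modules side by side, the following minimal sketch creates the same small array with each interface (the shapes are arbitrary):

import mxnet as mx

a_np = mx.np.ones((2, 3))   # np module: NumPy-like interface
a_nd = mx.nd.ones((2, 3))   # nd module: MXNet's native NDArray interface

print(type(a_np))  # mxnet.numpy.ndarray
print(type(a_nd))  # mxnet.ndarray.ndarray.NDArray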

Moreover, as we will see in the next chapters, a very common operation that we will execute on MXNet is automatic differentiation on these ND arrays. By using MXNet ND array libraries, this operation will also leverage our hardware for optimum performance. NumPy does not provide automatic differentiation out of the box.
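
The following minimal sketch shows what automatic differentiation on MXNet ND arrays looks like (the values are arbitrary; the autograd module is covered in detail in later chapters):

import mxnet as mx
from mxnet import autograd

x = mx.nd.array([1.0, 2.0, 3.0])
x.attach_grad()             # allocate space for the gradient of x
with autograd.record():     # record operations for differentiation
    y = (x ** 2).sum()      # y = sum(x_i^2)
y.backward()                # compute dy/dx = 2x
print(x.grad)               # [2. 4. 6.]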

Getting ready

If you have already installed MXNet, as described in the previous recipe, the only remaining step before using MXNet ND arrays to execute accelerated code is importing the corresponding libraries:

import numpy as np
import mxnet as mx

However, it is worth noting here an important underlying difference between NumPy ND array operations and MXNet ND array operations. NumPy follows an eager evaluation strategy – that is, all operations are evaluated at the moment of execution. Conversely, MXNet uses a lazy evaluation strategy, which is more efficient for large compute loads, where the actual calculation is deferred until the values are actually needed.

Therefore, when comparing performance, we will need to force MXNet to finalize all calculations before measuring the time taken for them. As we will see in the examples, this is achieved by calling the wait_to_read() function. Furthermore, when accessing the data with functions such as print() or .asnumpy(), execution is completed before these functions are called, which gives the false impression that the functions themselves are time-consuming:

  1. Let’s check a specific example and start by running it on the CPU:
    import time
    x_mx_cpu = mx.np.random.rand(1000, 1000, ctx = mx.cpu())
    start_time = time.time()
    mx.np.dot(x_mx_cpu, x_mx_cpu).wait_to_read()
    print("Time of the operation: ", time.time() - start_time)

    This will yield a similar output to the following:

    Time of the operation: 0.04673886299133301
  2. However, let’s see what happens if we measure the time without the call to wait_to_read():
    x_mx_cpu = mx.np.random.rand(1000, 1000, ctx = mx.cpu())
    start_time = time.time()
    x_2 = mx.np.dot(x_mx_cpu, x_mx_cpu)
    print("(FAKE, MXNet has lazy evaluation)")
    print("Time of the operation : ", time.time() - start_time)
    start_time = time.time()
    print(x_2)
    print("(FAKE, MXNet has lazy evaluation)")
    print("Time to display: ", time.time() - start_time)

    The following will be the output:

    (FAKE, MXNet has lazy evaluation)
    Time of the operation : 0.00118255615234375
    [[256.59583 249.70404 249.48639 ... 251.97151 255.06744 255.60669]
     [255.22629 251.69475 245.7591 ... 252.78784 253.18878 247.78052]
     [257.54187 254.29262 251.76346 ... 261.0468 268.49127 258.2312 ]
     ...
     [256.9957 253.9823 249.59073 ... 256.7088 261.14255 253.37457]
     [255.94278 248.73282 248.16641 ... 254.39209 252.4108 249.02774]
     [253.3464 254.55524 250.00716 ... 253.15712 258.53894 255.18658]]
    (FAKE, MXNet has lazy evaluation)
    Time to display: 0.042133331298828125

As we can see, the first experiment indicated that the computation took ~50 ms to complete; however, the second experiment indicated that it took ~1 ms (roughly 40 times less!), while the display step took more than 40 ms. This is an incorrect result, because we measured performance incorrectly in the second experiment. Refer to the first experiment and the call to wait_to_read() for a proper performance measurement.
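
An alternative way to force synchronization, which can be convenient when several arrays are involved, is mx.nd.waitall(), which blocks until every queued operation has finished. A minimal sketch follows:

import time
import mxnet as mx

x = mx.nd.random.uniform(shape=(1000, 1000))
start_time = time.time()
y = mx.nd.dot(x, x)
mx.nd.waitall()  # block until all pending asynchronous operations complete
print("Time of the operation: ", time.time() - start_time)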

How to do it...

In this section, we will compare performance in terms of computation time for two compute-intensive operations:

  • Matrix creation
  • Matrix multiplication

We will compare five different compute profiles for each operation:

  • Using the NumPy library (no CPU or GPU acceleration)
  • Using the MXNet np module with CPU acceleration but no GPU
  • Using the MXNet np module with CPU acceleration and GPU acceleration
  • Using the MXNet nd module with CPU acceleration but no GPU
  • Using the MXNet nd module with CPU acceleration and GPU acceleration

To finalize, we will plot the results and draw some conclusions.

Timing data structures

We will store the computation times in five dictionaries, one for each compute profile (timings_np, timings_mx_np_cpu, timings_mx_np_gpu, timings_mx_nd_cpu, and timings_mx_nd_gpu). The initialization of the data structures is as follows:

timings_np = {}
timings_mx_np_cpu = {}
timings_mx_np_gpu = {}
timings_mx_nd_cpu = {}
timings_mx_nd_gpu = {}

We will run each operation (matrix creation and matrix multiplication) with matrices of different orders, namely the following:

matrix_orders = [1, 5, 10, 50, 100, 500, 1000, 5000, 10000]

Matrix creation

We define three functions to generate matrices; the first function uses the NumPy library to generate a matrix and receives the matrix order as an input parameter. The second function uses the MXNet np module, and the third uses the MXNet nd module. For the second and third functions, in addition to the matrix order, we provide as an input parameter the context in which the matrix needs to be created. This context specifies whether the result (the created matrix, in this case) must be computed on the CPU or the GPU (and on which GPU, if multiple devices are available):

def create_matrix_np(n):
    """
    Given n, creates a squared n x n matrix,
    with each matrix value taken from a random
    uniform distribution between [0, 1].
    Returns the created matrix a.
    Uses NumPy.
    """
    a = np.random.rand(n, n)
    return a
def create_matrix_mx_np(n, ctx=mx.cpu()):
    """
    Given n, creates a squared n x n matrix,
    with each matrix value taken from a random
    uniform distribution between [0, 1].
    Returns the created matrix a.
    Uses MXNet NumPy syntax and context ctx
    """
    a = mx.np.random.rand(n, n, ctx=ctx)
    a.wait_to_read()
    return a
def create_matrix_mx_nd(n, ctx=mx.cpu()):
    """
    Given n, creates a squared n x n matrix,
    with each matrix value taken from a random
    uniform distribution between [0, 1].
    Returns the created matrix a.
    Uses MXNet ND native syntax and context ctx
    """
    a = mx.nd.random.uniform(shape=(n, n), ctx=ctx)
    a.wait_to_read()
    return a

To store necessary data for our performance comparison later, we use the structures created previously, with the following code:

timings_np["create"] = []
for n in matrix_orders:
    result = %timeit -o create_matrix_np(n)
    timings_np["create"].append(result.best)
timings_mx_np_cpu["create"] = []
for n in matrix_orders:
    result = %timeit -o create_matrix_mx_np(n)
    timings_mx_np_cpu["create"].append(result.best)
timings_mx_np_gpu["create"] = []
ctx = mx.gpu()
for n in matrix_orders:
    result = %timeit -o create_matrix_mx_np(n, ctx)
    timings_mx_np_gpu["create"].append(result.best)
timings_mx_nd_cpu["create"] = []
for n in matrix_orders:
    result = %timeit -o create_matrix_mx_nd(n)
    timings_mx_nd_cpu["create"].append(result.best)
timings_mx_nd_gpu["create"] = []
ctx = mx.gpu()
for n in matrix_orders:
    result = %timeit -o create_matrix_mx_nd(n, ctx)
    timings_mx_nd_gpu["create"].append(result.best)
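
Note that %timeit -o is an IPython/Jupyter magic that returns a result object whose best attribute holds the fastest measured run. If you are running outside a notebook, a rough equivalent with the standard timeit module (a sketch, not part of the original recipe) would be:

import timeit

def best_time(func, repeat=5, number=10):
    """Return the best average time per call, similar to %timeit -o ... .best."""
    times = timeit.repeat(func, repeat=repeat, number=number)
    return min(times) / number

print(best_time(lambda: create_matrix_np(1000)))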

Matrix multiplication

We define three functions to compute the matrix multiplication; the first function uses the NumPy library and receives the matrices to multiply as input parameters. The second function uses the MXNet np module, and the third uses the MXNet nd module. The second and third functions take the same parameters: the context in which the multiplication will happen is given by the context in which the matrices were created, so no extra parameter needs to be added. Both matrices need to have been created in the same context, or an error will be triggered:

def multiply_matrix_np(a, b):
    """
    Multiplies 2 squared matrixes a and b
    and returns the result c.
    Uses NumPy.
    """
    #c = np.matmul(a, b)
    c = np.dot(a, b)
    return c
def multiply_matrix_mx_np(a, b):
    """
    Multiplies 2 squared matrixes a and b
    and returns the result c.
    Uses MXNet NumPy syntax.
    """
    c = mx.np.dot(a, b)
    c.wait_to_read()
    return c
def multiply_matrix_mx_nd(a, b):
    """
    Multiplies 2 squared matrixes a and b
    and returns the result c.
    Uses MXNet ND native syntax.
    """
    c = mx.nd.dot(a, b)
    c.wait_to_read()
    return c

To store the necessary data for our performance comparison later, we will use the structures created previously, with the following code:

timings_np["multiply"] = []
for n in matrix_orders:
    a = create_matrix_np(n)
    b = create_matrix_np(n)
    result = %timeit -o multiply_matrix_np(a, b)
    timings_np["multiply"].append(result.best)
timings_mx_np_cpu["multiply"] = []
for n in matrix_orders:
    a = create_matrix_mx_np(n)
    b = create_matrix_mx_np(n)
    result = %timeit -o multiply_matrix_mx_np(a, b)
    timings_mx_np_cpu["multiply"].append(result.best)
timings_mx_np_gpu["multiply"] = []
ctx = mx.gpu()
for n in matrix_orders:
    a = create_matrix_mx_np(n, ctx)
    b = create_matrix_mx_np(n, ctx)
    result = %timeit -o multiply_matrix_mx_np(a, b)
    timings_mx_gpu["multiply"].append(result.best)
timings_mx_nd_cpu["multiply"] = []
for n in matrix_orders:
    a = create_matrix_mx_nd(n)
    b = create_matrix_mx_nd(n)
    result = %timeit -o multiply_matrix_mx_nd(a, b)
    timings_mx_nd_cpu["multiply"].append(result.best)
timings_mx_nd_gpu["multiply"] = []
ctx = mx.gpu()
for n in matrix_orders:
    a = create_matrix_mx_nd(n, ctx)
    b = create_matrix_mx_nd(n, ctx)
    result = %timeit -o multiply_matrix_mx_nd(a, b)
    timings_mx_nd_gpu["multiply"].append(result.best)

Drawing conclusions

The first step before making any assessments is to plot the data we have captured in the previous steps. For this step, we will use the pyplot module from a library called Matplotlib, which will allow us to create charts easily. The following code plots the runtime (in seconds) for the matrix generation and all the matrix orders computed:

import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(matrix_orders, timings_np["create"], color='red', marker='s')
plt.plot(matrix_orders, timings_mx_np_cpu["create"], color='blue', marker='o')
plt.plot(matrix_orders, timings_mx_np_gpu["create"], color='green', marker='^')
plt.plot(matrix_orders, timings_mx_nd_cpu["create"], color='yellow', marker='p')
plt.plot(matrix_orders, timings_mx_nd_gpu["create"], color='orange', marker='*')
plt.title("Matrix Creation Runtime", fontsize=14)
plt.xlabel("Matrix Order", fontsize=14)
plt.ylabel("Runtime (s)", fontsize=14)
plt.grid(True)
ax = fig.gca()
ax.set_xscale("log")
ax.set_yscale("log")
plt.legend(["NumPy", "MXNet NumPy (CPU)", "MXNet NumPy (GPU)", "MXNet ND (CPU)", "MXNet ND (GPU)"])
plt.show()

Very similarly to the previous code block, the following code plots the runtime (in seconds) for the matrix multiplication across all the matrix orders computed:

import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(matrix_orders, timings_np["multiply"], color='red', marker='s')
plt.plot(matrix_orders, timings_mx_np_cpu["multiply"], color='blue', marker='o')
plt.plot(matrix_orders, timings_mx_np_gpu["multiply"], color='green', marker='^')
plt.plot(matrix_orders, timings_mx_nd_cpu["multiply"], color='yellow', marker='p')
plt.plot(matrix_orders, timings_mx_nd_gpu["multiply"], color='orange', marker='*')
plt.title("Matrix Multiplication Runtime", fontsize=14)
plt.xlabel("Matrix Order", fontsize=14)
plt.ylabel("Runtime (s)", fontsize=14)
plt.grid(True)
ax = fig.gca()
ax.set_xscale("log")
ax.set_yscale("log")
plt.legend(["NumPy", "MXNet NumPy (CPU)", "MXNet NumPy (GPU)", "MXNet ND (CPU)", "MXNet ND (GPU)"])
plt.show()

These are the plots displayed (the results will vary according to the hardware configuration):

Figure 1.5 – Runtimes – a) Matrix creation, and b) Matrix multiplication

Important note

Note that the charts use a logarithmic scale for both axes, horizontal and vertical (the differences are larger than they seem). Furthermore, the actual values depend on the hardware architecture that the computations are run on; therefore, your specific results will vary.

There are several conclusions that can be drawn, both from each individual operation and collectively:

  • For smaller matrix orders, using NumPy is much faster in both operations. This is because MXNet works in a different memory space, and the amount of time to move the data to this memory space is longer than the actual compute time.
  • In matrix creation, for larger matrix orders, the difference between NumPy (remember, it’s CPU only) and MXNet with the np module and CPU acceleration is negligible, but with the nd module and CPU acceleration it is ~2x faster. For matrix multiplication, and depending on your hardware, MXNet with CPU acceleration can be ~2x faster (regardless of the module). This is because MXNet uses Intel MKL to optimize CPU computations.
  • In the ranges that are interesting for deep learning – that is, large computational loads involving matrix orders > 1,000 (which can represent data such as images composed of several megapixels or large language dictionaries), GPUs deliver typical gains of several orders of magnitude (~200x for creation, and ~40x for multiplication, exponentially growing with every increase of matrix order). This is by far the most compelling reason to work with GPUs when running deep learning experiments.
  • When using the GPU, the MXNet np module is faster than the MXNet nd module in creation (~7x), but the difference is negligible in multiplication. Typically, deep learning algorithms are, in terms of computational load, more similar to multiplications, and therefore, a priori, there is no significant advantage in using either the np module or the nd module. However, MXNet recommends using the native MXNet nd module (and the author subscribes to this recommendation) because some operations in the np module are not supported by autograd (MXNet’s auto-differentiation module). We will see in the upcoming chapters, when we train neural networks, how the autograd module is used and why it is critical.
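
A practical consequence of these results is that, for large workloads, you will want to place your arrays on the GPU whenever one is available. A minimal sketch of such a selection follows (assuming mx.context.num_gpus() to query the number of visible GPUs):

import mxnet as mx

# Use the first GPU if one is available, otherwise fall back to the CPU
ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
print("Selected context:", ctx)

a = mx.nd.random.uniform(shape=(1000, 1000), ctx=ctx)
b = mx.nd.dot(a, a)
b.wait_to_read()  # force the computation to finish on the selected device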

How it works...

MXNet provides two optimized modules for working with ND arrays, including one that is a drop-in substitute for NumPy. The advantages of operating with MXNet ND arrays are twofold:

  • MXNet ND array operations support automatic differentiation. As we will see in the following chapters, automatic differentiation is a key feature that allows developers to concentrate on the forward pass of the models, letting the backward pass be automatically derived.
  • Furthermore, operations with MXNet ND arrays are optimized for the underlying hardware, yielding impressive results with GPU acceleration. We computed results for matrix creation and matrix multiplication to validate this conclusion experimentally.

There’s more…

In this recipe, we have barely scratched the surface of MXNet operations with ND arrays. If you want to read more about MXNet and ND arrays, this is the link to the official MXNet API reference: https://mxnet.apache.org/versions/1.0.0/api/python/ndarray/ndarray.html.

Furthermore, a very interesting tutorial can be found in the official MXNet documentation: https://gluon.mxnet.io/chapter01_crashcourse/ndarray.html.

Moreover, we have taken a glimpse at how to measure performance on MXNet. We will revisit this topic in the following chapters; however, a good deep-dive into the topic is given in the official MXNet documentation: https://mxnet.apache.org/versions/1.8.0/api/python/docs/tutorials/performance/backend/profiler.html.
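
As a small preview of that profiler, a minimal sketch looks like the following (the exact arguments shown are assumptions based on the 1.x profiler API, so check the linked tutorial for your version):

import mxnet as mx

mx.profiler.set_config(profile_all=True, filename='chapter1_profile.json')
mx.profiler.set_state('run')     # start collecting profiling information

a = mx.nd.random.uniform(shape=(1000, 1000))
b = mx.nd.dot(a, a)
b.wait_to_read()

mx.profiler.set_state('stop')    # stop collecting
mx.profiler.dump()               # write chapter1_profile.json (viewable in chrome://tracing)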


Key benefits

  • Create scalable deep learning applications using MXNet products with step-by-step tutorials
  • Implement tasks such as transfer learning, transformers, and more with the required speed and scalability
  • Analyze model performance and fine-tune for accuracy, scalability, and speed
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Explore the capabilities of the open-source deep learning framework MXNet to train and deploy neural network models and implement state-of-the-art (SOTA) architectures in computer vision, natural language processing, and more. The Deep Learning with MXNet Cookbook is your gateway to constructing fast and scalable deep learning solutions using Apache MXNet. Starting with the different versions of MXNet, this book helps you choose the optimal version for your use case and install the library. You’ll work with MXNet/Gluon libraries to solve classification and regression problems and gain insights into their inner workings. Venturing further, you’ll use MXNet to analyze toy datasets in the areas of numerical regression, data classification, image classification, and text classification. From building and training deep learning neural network architectures from scratch to delving into advanced concepts such as transfer learning, this book covers it all. You'll master the construction and deployment of neural network architectures, including CNNs, RNNs, LSTMs, and transformers, and integrate these models into your applications. By the end of this deep learning book, you’ll wield the MXNet and Gluon libraries to expertly create and train deep learning networks using GPUs and deploy them in different environments.

Who is this book for?

This book is for data scientists, machine learning engineers, and developers who want to work with Apache MXNet for building fast and scalable deep learning solutions. Python programming knowledge and access to a working coding environment with Python 3.6+ is necessary to get started. Although not a prerequisite, a solid theoretical understanding of mathematics for deep learning will be beneficial.

What you will learn

  • Grasp the advantages of MXNet and Gluon libraries
  • Build and train network models from scratch using MXNet
  • Apply transfer learning for more complex, fine-tuned network architectures
  • Address modern Computer Vision and NLP problems using neural network techniques
  • Train state-of-the-art models with GPUs and leverage modern optimization techniques
  • Improve inference run-times and deploy models in production

Product Details

Publication date: Dec 29, 2023
Length: 370 pages
Edition: 1st
Language: English
ISBN-13: 9781800562905

Table of Contents

Chapter 1: Up and Running with MXNet
Chapter 2: Working with MXNet and Visualizing Datasets – Gluon and DataLoader
Chapter 3: Solving Regression Problems
Chapter 4: Solving Classification Problems
Chapter 5: Analyzing Images with Computer Vision
Chapter 6: Understanding Text with Natural Language Processing
Chapter 7: Optimizing Models with Transfer Learning and Fine-Tuning
Chapter 8: Improving Training Performance with MXNet
Chapter 9: Improving Inference Performance with MXNet
Index
Other Books You May Enjoy

