Implementing distributed training on multiple GPUs
In this section, we’ll show you how to implement and run distributed training on multiple GPUs using NCCL, the de facto communication backend for NVIDIA GPUs. We’ll start with a brief overview of NCCL and then learn how to code and launch distributed training in a multi-GPU environment.
The NCCL communication backend
NCCL stands for NVIDIA Collective Communications Library. As its name suggests, NCCL provides optimized collective operations for NVIDIA GPUs, so we can use it to execute collective routines such as broadcast, reduce, and all-reduce. Roughly speaking, NCCL plays the same role for NVIDIA GPUs that oneCCL plays for Intel CPUs.
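Before turning to the real API, it may help to see what a collective routine computes. The sketch below is a plain-Python simulation of an all-reduce with a sum: each rank contributes a tensor (here, a list of numbers), and every rank ends up holding the elementwise sum of all contributions. The function name and representation are illustrative, not part of NCCL or PyTorch.

```python
def all_reduce_sum(rank_tensors):
    """Simulate an all-reduce (sum) across ranks.

    rank_tensors: one list of numbers per rank, all the same length.
    Returns the per-rank result: every rank receives the elementwise sum.
    """
    # Reduce step: sum the i-th element across all ranks
    total = [sum(values) for values in zip(*rank_tensors)]
    # Broadcast step: every rank gets a copy of the reduced result
    return [list(total) for _ in rank_tensors]

# Two ranks holding [1, 2] and [3, 4] both end up with [4, 6]
print(all_reduce_sum([[1.0, 2.0], [3.0, 4.0]]))
```

In real distributed training, this is exactly the pattern used to average gradients: each GPU computes gradients on its local batch, and an all-reduce gives every GPU the combined result.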
PyTorch supports NCCL natively, which means that the default installation of PyTorch for NVIDIA GPUs already comes with a built-in NCCL version. NCCL works on single or multiple machines and supports the usage of high-performance...
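Because NCCL ships with the NVIDIA builds of PyTorch, selecting it is just a matter of passing `backend="nccl"` to `torch.distributed.init_process_group`. The following sketch shows one way to set this up; the helper-function names are our own, and it falls back to the Gloo backend on CPU-only machines so the code can be tried without a GPU. When launched with `torchrun`, the environment variables `RANK`, `WORLD_SIZE`, `MASTER_ADDR`, and `MASTER_PORT` are set automatically.

```python
import os
import torch
import torch.distributed as dist

def init_distributed(backend=None):
    """Initialize the default process group (helper name is illustrative).

    Under torchrun, RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT are already set;
    the defaults below allow a single-process run for local experimentation.
    """
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    if backend is None:
        # Prefer NCCL when GPUs are available; Gloo works on CPU
        backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend=backend, rank=rank, world_size=world_size)
    return rank, world_size

def demo_all_reduce():
    rank, world_size = init_distributed()
    device = torch.device(f"cuda:{rank}") if torch.cuda.is_available() \
        else torch.device("cpu")
    # Each rank contributes rank + 1; after the all-reduce,
    # every rank holds the sum over all ranks
    t = torch.tensor([float(rank + 1)], device=device)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    result = t.item()
    dist.destroy_process_group()
    return result

if __name__ == "__main__":
    print(demo_all_reduce())
```

A multi-GPU run would be launched with something like `torchrun --nproc_per_node=4 script.py`, which starts one process per GPU and wires up the environment variables for us.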