Why distribute the training on multiple CPUs?
At first sight, distributing the training process across multiple CPUs in a single machine may sound confusing. After all, we could simply increase the number of threads used in the training process to occupy all the available CPUs (computing cores).
However, as the famous Brazilian poet Carlos Drummond de Andrade wrote, “In the middle of the road there was a stone. There was a stone in the middle of the road.” Let’s see what happens to the training process when we simply increase the number of threads on a machine with multiple cores.
Why not increase the number of threads?
In Chapter 4, Using Specialized Libraries, we learned that PyTorch relies on OpenMP to accelerate the training process through multithreading. OpenMP assigns threads to physical cores with the intention of improving the performance of the training process.
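As a quick reminder of how this thread count is usually controlled, the sketch below uses only the Python standard library. The `OMP_NUM_THREADS` environment variable is read by the OpenMP runtime when it initializes, so it must be set before importing `torch`; PyTorch also exposes `torch.set_num_threads` for the same purpose. Using `os.cpu_count()` as the target value is just an illustrative choice, not a recommendation.

```python
import os

# Number of computing cores visible to this machine.
# (os.cpu_count() may report logical cores when Hyper-Threading is on.)
num_cores = os.cpu_count()

# OpenMP reads OMP_NUM_THREADS when the runtime starts, so this
# must be set before importing torch for it to take effect.
os.environ["OMP_NUM_THREADS"] = str(num_cores)

print(f"Requesting {num_cores} OpenMP threads")

# After importing torch, the same setting is available programmatically:
# import torch
# torch.set_num_threads(num_cores)
```

Whether raising this number actually speeds up training is exactly the question the rest of this section examines.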
So, if we have a certain number of available computing cores...