Summary
In this chapter, you learned that adopting a mixed-precision approach can accelerate the training process of your models.
Although it is possible to implement the mixed-precision strategy by hand, it is preferable to rely on the AMP solution provided by PyTorch: it is an elegant and seamless approach designed to avoid errors involving numeric representation, which are very hard to identify and solve when they do occur.
Implementing AMP in PyTorch requires adding only a few extra lines to the original code. Essentially, we must wrap the training loop with the AMP engine, enable four flags related to backend libraries, and instantiate a gradient scaler.
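As a reminder, the following is a minimal sketch of what those few extra lines look like. The model, optimizer, and synthetic batch are placeholders rather than the chapter's original example, and the two TF32 flags shown are only illustrative of the backend settings mentioned above; the exact set of four flags enabled in the chapter may differ.

```python
import torch
import torch.nn as nn

# Backend flags that permit reduced-precision math on supported GPUs.
# Shown as an example only; the chapter enables its own set of four flags.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = "cuda"
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Gradient scaler keeps small FP16 gradients from underflowing to zero
scaler = torch.cuda.amp.GradScaler()

# Synthetic batch standing in for a real dataloader
inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

for step in range(10):
    optimizer.zero_grad()

    # Wrap the forward pass with the AMP engine (autocast context)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        outputs = model(inputs)
        loss = criterion(outputs, targets)

    # Scale the loss before backpropagation, then step and update the scaler
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

Note that only the forward pass runs under autocast; the scaled backward pass and optimizer step stay outside the context, which is the pattern recommended in PyTorch's AMP documentation.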
Depending on the GPU architecture, the library versions, and the model itself, this change can significantly improve the performance of the training process.
This chapter closes the second part of this book. Next, in the third and last part, we will learn how to spread the training process...