Accelerate Model Training with PyTorch 2.X

You're reading from Accelerate Model Training with PyTorch 2.X: Build more accurate models by boosting the model training process

Product type: Paperback
Published: Apr 2024
Publisher: Packt
ISBN-13: 9781805120100
Length: 230 pages
Edition: 1st Edition
Author: Maicon Melo Alves
Table of Contents (17)

Preface
Part 1: Paving the Way
Chapter 1: Deconstructing the Training Process
Chapter 2: Training Models Faster
Part 2: Going Faster
Chapter 3: Compiling the Model
Chapter 4: Using Specialized Libraries
Chapter 5: Building an Efficient Data Pipeline
Chapter 6: Simplifying the Model
Chapter 7: Adopting Mixed Precision
Part 3: Going Distributed
Chapter 8: Distributed Training at a Glance
Chapter 9: Training with Multiple CPUs
Chapter 10: Training with Multiple GPUs
Chapter 11: Training with Multiple Machines
Index
Other Books You May Enjoy

Why distribute the training on multiple CPUs?

At first sight, distributing the training process among multiple CPUs in a single machine may sound confusing. After all, we could simply increase the number of threads used in the training process to occupy all the available CPUs (computing cores).
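As a quick sketch of that naive approach (using PyTorch's public threading API; the thread count of 8 is just an illustrative value), this is how the intra-op thread pool can be inspected and resized:

```python
import torch

# PyTorch parallelizes CPU operators (e.g., matrix multiplication)
# through an intra-op thread pool, backed by OpenMP on CPU builds.
# By default, the pool usually has one thread per physical core.
print(torch.get_num_threads())

# Raise the thread count to try to occupy all available cores.
torch.set_num_threads(8)
print(torch.get_num_threads())  # prints 8
```

Calling `torch.set_num_threads` before the training loop is all it takes to change how many cores a single process tries to use, which is exactly the alternative the next section examines.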

However, as the famous Brazilian poet Carlos Drummond de Andrade wrote, “In the middle of the road there was a stone. There was a stone in the middle of the road.” Let’s see what happens to the training process when we simply increase the number of threads on a machine with multiple cores.

Why not increase the number of threads?

In Chapter 4, Using Specialized Libraries, we learned that PyTorch relies on OpenMP to accelerate the training process through multithreading. OpenMP assigns threads to physical cores with the goal of improving the performance of the training process.
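In practice, this assignment is typically controlled from outside the program: OpenMP reads standard environment variables at startup, so both the thread count and the thread-to-core binding can be set before launching the training script. A minimal sketch (the script name `train.py` is a placeholder, and 8 is an illustrative thread count):

```shell
# OpenMP settings read by PyTorch's CPU backend at startup:
# OMP_NUM_THREADS caps the thread-pool size, while OMP_PROC_BIND and
# OMP_PLACES pin each thread to its own core so threads do not migrate.
export OMP_NUM_THREADS=8
export OMP_PROC_BIND=close
export OMP_PLACES=cores

# Launch training with these settings (train.py is a placeholder name):
# python train.py
```

`OMP_NUM_THREADS`, `OMP_PROC_BIND`, and `OMP_PLACES` are part of the OpenMP specification, so they work with any OpenMP-backed build of PyTorch.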

So, if we have a certain number of available computing cores...
