Applied Machine Learning and High-Performance Computing on AWS
Accelerate the development of machine learning applications following architectural best practices

Product type: Paperback
Published: Dec 2022
Publisher: Packt
ISBN-13: 9781803237015
Length: 382 pages
Edition: 1st
Authors (4): Trenton Potgieter, Shreyas Subramanian, Farooq Sabir, Mani Khanuja
Table of Contents (20 chapters)

Preface
Part 1: Introducing High-Performance Computing
Chapter 1: High-Performance Computing Fundamentals
Chapter 2: Data Management and Transfer
Chapter 3: Compute and Networking
Chapter 4: Data Storage
Part 2: Applied Modeling
Chapter 5: Data Analysis
Chapter 6: Distributed Training of Machine Learning Models
Chapter 7: Deploying Machine Learning Models at Scale
Chapter 8: Optimizing and Managing Machine Learning Models for Edge Deployment
Chapter 9: Performance Optimization for Real-Time Inference
Chapter 10: Data Visualization
Part 3: Driving Innovation Across Industries
Chapter 11: Computational Fluid Dynamics
Chapter 12: Genomics
Chapter 13: Autonomous Vehicles
Chapter 14: Numerical Optimization
Index
Other Books You May Enjoy

Distributed Training of Machine Learning Models

When it comes to Machine Learning (ML) model training, the primary goal for a data scientist or ML practitioner is to train the optimal model on the relevant data to address the business use case. Beyond that, the aim is to perform this task as quickly and effectively as possible. So, how do we speed up model training? Moreover, sometimes the data or the model is too big to fit into a single GPU's memory. How do we prevent out-of-memory (OOM) errors?
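
To make the OOM problem concrete, here is a rough back-of-the-envelope estimate (our illustration, not the book's figures). Training with the Adam optimizer in 32-bit precision keeps roughly four parameter-sized buffers resident on the GPU: the weights, the gradients, and two optimizer moment buffers, before any activation memory is counted.

# Hypothetical estimate; the function name and numbers are ours, not the book's.
def training_memory_gb(num_params: float, bytes_per_param: int = 4) -> float:
    # Weights + gradients + two Adam moment buffers = 4 parameter-sized copies.
    # Activation memory, which often dominates, is deliberately ignored here.
    return num_params * bytes_per_param * 4 / 1024**3

# A 10-billion-parameter model trained in FP32:
print(f"{training_memory_gb(10e9):.0f} GB")  # ~149 GB, far beyond a single GPU

By this estimate, even a GPU with 80 GB of memory cannot hold such a model, which is exactly the situation that motivates the techniques in this chapter.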

The simplest answer to this question is to throw more compute resources, in other words, more CPUs and GPUs, at the problem. Using larger compute hardware in this way is commonly referred to as a scale-up strategy. However, only a finite number of CPUs and GPUs can be squeezed into a single server. So, sometimes a scale-out strategy is required, whereby we add more servers into the mix, essentially distributing the training workload across them, as the sketch below illustrates.
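
As a minimal sketch of the scale-out idea, the following uses PyTorch's DistributedDataParallel (our standalone illustration, not the chapter's AWS-based walkthrough; the train function, batch size, and optimizer choice here are assumptions). Each worker process drives one GPU, trains on its own shard of the data, and averages gradients with all other workers after every backward pass.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def train(model: torch.nn.Module, dataset: torch.utils.data.Dataset) -> None:
    # torchrun (or an equivalent launcher) sets RANK, LOCAL_RANK, and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Wrapping the model makes every backward pass all-reduce (average) the
    # gradients across workers, so each process keeps identical weights.
    ddp_model = DDP(model.cuda(local_rank), device_ids=[local_rank])

    # DistributedSampler gives each worker a distinct shard of the dataset.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.Adam(ddp_model.parameters())
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for inputs, labels in loader:
            inputs = inputs.cuda(local_rank)
            labels = labels.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(ddp_model(inputs), labels).backward()  # gradients averaged here
            optimizer.step()

    dist.destroy_process_group()

Launched with, for example, torchrun --nproc_per_node=8 train.py, eight such processes each take one GPU on a server; scaling out then means running the same script on additional servers, which grows the world size without changing the training code.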
