Learn Amazon SageMaker: A guide to building, training, and deploying machine learning models for developers and data scientists
By Julien Simon. Packt, August 2020. 1st edition, 490 pages. ISBN-13: 9781800208919.
Deploying a model with Amazon Elastic Inference

When deploying a model, you have to decide whether it should run on a CPU instance or on a GPU instance. In some cases, there isn't much of a debate. For example, some algorithms simply don't benefit from GPU acceleration, so they should be deployed to CPU instances. At the other end of the spectrum, complex deep learning models for Computer Vision or Natural Language Processing run best on GPUs.
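As a concrete reference, here's a minimal sketch of how that choice surfaces in the SageMaker Python SDK: the instance_type argument to deploy() selects the hardware. The model artifact, role, and framework version below are illustrative assumptions, not values from this book.

```python
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

# Hypothetical model artifact; substitute your own S3 path and IAM role.
model = TensorFlowModel(
    model_data="s3://my-bucket/model.tar.gz",
    role=sagemaker.get_execution_role(),
    framework_version="2.3",
)

# instance_type decides where inference runs:
#   "ml.c5.large"    - CPU, for algorithms that gain nothing from a GPU
#   "ml.g4dn.xlarge" - GPU, for large deep learning models
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.c5.large",
)
```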

In many cases, the situation is not that clear-cut. First, you should know the maximum prediction latency that is acceptable for your application. If you're predicting click-through rate for a real-time ad tech application, every millisecond counts. If you're predicting customer churn in a back-office application, not so much.

In addition, even models that could benefit from GPU acceleration may not be large and complex enough to fully utilize the thousands of cores available on a modern GPU. In such scenarios, you're stuck...
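This middle ground is exactly what Amazon Elastic Inference targets: instead of paying for a full GPU instance, you attach a fractional GPU accelerator to a CPU-backed endpoint. Reusing the hypothetical model from the sketch above, enabling it is one extra argument to deploy(); the accelerator size shown here is an arbitrary choice.

```python
# Attach an Elastic Inference accelerator to a CPU-backed endpoint.
# The CPU instance hosts the endpoint; the accelerator handles the
# GPU-friendly parts of the model. accelerator_type picks its size
# (for example ml.eia2.medium, ml.eia2.large, or ml.eia2.xlarge).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.c5.large",
    accelerator_type="ml.eia2.medium",
)
```

Keep in mind that Elastic Inference only works with EI-enabled framework containers, so check that the framework version you deploy with supports it.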
