AWS Certified Machine Learning - Specialty (MLS-C01) Certification Guide
Scaling applications with SageMaker deployment and AWS Auto Scaling

Autoscaling is a crucial aspect of deploying ML models in production environments, ensuring that applications can handle varying workloads efficiently. Amazon SageMaker, combined with AWS Auto Scaling, provides a robust solution for automatically adjusting resources based on demand. In this section, you will explore different scenarios where autoscaling is essential and how to achieve it using SageMaker model deployment options and AWS Auto Scaling.
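
Before autoscaling can be configured, the model has to be exposed through a SageMaker real-time endpoint. The following is a minimal sketch using the SageMaker Python SDK; the container image URI, model artifact location, IAM role, and endpoint name are illustrative placeholders, not values taken from this book.

import sagemaker
from sagemaker.model import Model

# Session and execution role (the role ARN below is a placeholder).
session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# Wrap a trained model artifact together with its inference container.
model = Model(
    image_uri="<inference-container-image-uri>",     # built-in or custom image
    model_data="s3://my-bucket/model/model.tar.gz",  # hypothetical artifact location
    role=role,
    sagemaker_session=session,
)

# Deploy a real-time endpoint; this creates the endpoint and its initial variant.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="product-recommender",  # hypothetical name, reused when autoscaling
)

Once the endpoint is in service, its production variant becomes the resource that AWS Auto Scaling adjusts up and down.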

Scenario 1 – Fluctuating inference workloads

In a retail application, the number of users making product recommendation requests can vary throughout the day, with peak loads during specific hours.

Autoscaling solution

Implement autoscaling for SageMaker real-time endpoints to dynamically adjust the number of instances based on the inference request rate.

Steps

  1. Configure the SageMaker endpoint to use autoscaling.
  2. Set up minimum and maximum...
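
Although the remaining steps are not shown in this excerpt, the sketch below illustrates how steps 1 and 2 are typically wired up through the Application Auto Scaling API with boto3. The endpoint name, variant name, capacity limits, and the target of 70 invocations per instance are assumptions for illustration, not prescriptions from the book.

import boto3

# SageMaker endpoints are scaled through the Application Auto Scaling service.
autoscaling = boto3.client("application-autoscaling")

# The resource ID identifies the endpoint variant to scale (names are hypothetical).
resource_id = "endpoint/product-recommender/variant/AllTraffic"

# Step 1: register the endpoint variant as a scalable target with min/max capacity.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Step 2: attach a target-tracking policy driven by the inference request rate
# (invocations per instance), so instances are added or removed with demand.
autoscaling.put_scaling_policy(
    PolicyName="recommender-invocations-scaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # assumed target of 70 invocations per instance per minute
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleOutCooldown": 60,   # seconds to wait between scale-out activities
        "ScaleInCooldown": 300,   # seconds to wait between scale-in activities
    },
)

With such a policy in place, SageMaker adds instances (up to the configured maximum) when the per-instance invocation rate exceeds the target during peak hours and scales back in as traffic drops.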