Cost optimization

Many organizations have the misconception that moving to the cloud will be an instant cost saver. A stark reality sets in once more and more teams discover that provisioning new resources is as easy as clicking a button. When the bills start arriving from an environment that doesn't have strict guardrails or the ability to charge workloads back to the corresponding teams, a cost reduction movement usually follows from the top down.

As you look to optimize your costs, understand that managed cloud services come at a higher cost per minute; however, the human resources cost is much lower. There is no need to care for and feed the underlying servers, nor to worry about updating the underlying operating systems. Many of these services also scale to user demand, and this is taken care of for you automatically.

The ability to monitor cost and usage is also a key element of a cost optimization strategy. A sound resource tagging strategy allows those responsible for financially managing the AWS account to charge costs back to the correct department.

There are five design principles for cost optimization in the cloud:

  • Implementing cloud financial management
  • Adopting a consumption model
  • Measuring overall efficiency
  • Stop spending money on undifferentiated heavy lifting
  • Analyzing and attributing expenditure

Implementing cloud financial management

Cloud financial management is growing rapidly across organizations both big and small. It takes a dedicated team (or a group of team members who have been partially allocated this responsibility) to build out the ability to see where the cloud spend is going. This part of the organization looks at the cost and usage reports, sets budget alarms, tracks the spend, and, ideally, enforces a costing tag that can show the chargebacks for each department, cost center, or project.
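
As one illustration of setting a budget alarm, the following is a minimal sketch using boto3 and the AWS Budgets API; the account ID, budget name, dollar limit, and notification address are placeholder assumptions, not values from this chapter.

import boto3

# Sketch: create a monthly cost budget with an email alert at 80% of the limit.
# The account ID, budget limit, and email address are placeholders.
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-team-budget",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)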

What is a chargeback?

An IT chargeback is a process that allows an organization to associate costs with the specific departments, offices, or projects that incurred them, in order to accurately track IT spend. We are specifically referring to cloud spend in this example, but IT chargebacks are used in other areas of IT as well, such as for accounting purposes.

Adopting a consumption model

Using the cloud doesn't require a sophisticated forecasting model to keep costs under control, especially when you have multiple environments. Development and testing environments should have the ability to spin down or be suspended when not in use, saving on charges they would otherwise incur while sitting idle. This is the beauty of on-demand pricing in the cloud. If developers gripe over the loss of data, educate them on snapshotting database instances before shutting them down; then, they can pick up their development from where they left off.
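
As a rough sketch of that snapshot-then-stop pattern with boto3, the following takes a manual RDS snapshot and then stops the instance; the instance identifier is a placeholder assumption, not one used in this book.

import boto3
from datetime import datetime

# Sketch: snapshot a development database instance, then stop it for the night.
# The instance identifier below is a placeholder.
rds = boto3.client("rds")

instance_id = "dev-orders-db"
snapshot_id = f"{instance_id}-{datetime.utcnow():%Y-%m-%d-%H-%M}"

# Take a manual snapshot so developers can pick up where they left off.
rds.create_db_snapshot(DBInstanceIdentifier=instance_id,
                       DBSnapshotIdentifier=snapshot_id)

# Wait until the snapshot completes before stopping the instance.
rds.get_waiter("db_snapshot_completed").wait(DBSnapshotIdentifier=snapshot_id)

# A stopped instance still incurs storage charges, but not instance-hour charges.
rds.stop_db_instance(DBInstanceIdentifier=instance_id)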

An even better strategy is to automate the process of shutting down the development and test environments when the workday is finished, along with a specialized tag that exempts an instance from being shut down after hours or on the weekend; a sketch of this follows. You can also automate the process of restarting instances 30 to 60 minutes before the workday begins so that there is ample time for the operating systems to become functional, and the team can feel as though the instances were never turned off in the first place. Just be sure to watch out for any EC2 instances backed by instance store volumes, as they may lose data.
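
A minimal sketch of the shutdown side of that automation, assuming a hypothetical exemption tag named keep-running, might look like this with boto3 (it is intended to run on a schedule, for example from a Lambda function triggered by an EventBridge rule at the end of the workday):

import boto3

# Sketch: stop all running instances that do NOT carry the hypothetical
# keep-running=true exemption tag.
ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instances")
to_stop = []

for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if tags.get("keep-running", "").lower() != "true":
                to_stop.append(instance["InstanceId"])

if to_stop:
    ec2.stop_instances(InstanceIds=to_stop)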

Measuring overall efficiency

One of the most evident ways that organizations lose efficiency with their cloud budgets is neglecting to decommission unused assets. Although it is easy to spin up new services and create backups in the cloud, not having a plan to retire outdated data, volume backups, machine images, log files, and other items adds to the bottom line of the monthly bill. This should be done with a scalpel and not a machete: data, once deleted, is gone and irretrievable; however, there is no need to keep everything forever. Even data held for compliance usually has a retention period, after which it can be moved to cold storage at a much more reasonable rate.

A perfect example is EBS snapshots. A customer who is trying to be proactive about data protection may be snapshotting volumes multiple times per day as well as copying those snapshots to a disaster recovery region. If there is no process to retire the old snapshots after 30, 60, or 90 days, this cost center item can become an issue rather quickly.
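
As a sketch of one way to retire old snapshots with boto3 (the 90-day cutoff is an assumed value, and Amazon Data Lifecycle Manager can automate the same policy without custom code):

import boto3
from botocore.exceptions import ClientError
from datetime import datetime, timedelta, timezone

# Sketch: delete EBS snapshots owned by this account that are older than 90 days.
# The retention window is an assumed value; adjust it to your own policy.
ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        if snapshot["StartTime"] < cutoff:
            try:
                ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
            except ClientError:
                # Snapshots still registered to an AMI cannot be deleted; skip them.
                pass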

Stop spending money on undifferentiated heavy lifting

When we talk about heavy lifting, we're talking about racking, stacking, and cooling servers in a data center. Running a data center is a 24/7/365 endeavor, and you can't easily turn off machines and storage when you're not using them. Moving workloads to the cloud takes the onus of running those data centers off your team members and lets more time and effort go into customer needs and features, rather than into caring for and feeding servers and hardware.

Analyzing and attributing expenditure

The cloud, and the AWS cloud in particular, has tools available to help you analyze and attribute where the charges for your account(s) are coming from. The first tool in your toolbox is tags and a tagging strategy. Once you have decided on a solid set of base tags, including items such as cost center, department, and application, you have a foundation for the rest of the tools that are available to you.
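
To make this concrete, here is a small sketch of applying a base set of tags to an EC2 instance with boto3; the instance ID, tag keys, and values are illustrative assumptions rather than a prescribed standard. Remember that tags only show up in billing reports after they have been activated as cost allocation tags in the Billing console.

import boto3

# Sketch: apply a base set of cost-allocation tags to an existing EC2 instance.
# The instance ID and tag values are placeholders for illustration only.
ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "CostCenter", "Value": "4151"},
        {"Key": "Department", "Value": "marketing"},
        {"Key": "Application", "Value": "order-portal"},
    ],
)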

Breaking out from a single account structure into multiple accounts and organizational units using AWS Organizations can automatically categorize spend, even without the use of tags at the account level.

AWS Cost Explorer allows your financial management team to dig into the services and regions where spend is occurring, as well as create dashboards to visualize that spend quickly. Amazon Web Services also has pre-set service quotas in place; some are hard quotas that cannot be changed, but many are soft quotas that can be raised for a particular service (in a region) with a simple request.
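
As a rough sketch of pulling that data programmatically, the Cost Explorer API can report monthly spend grouped by service; the date range below is an assumed example.

import boto3

# Sketch: retrieve one month's unblended cost, grouped by AWS service.
# The date range is an example; Cost Explorer expects ISO-formatted dates.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-10-01", "End": "2021-11-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):.2f}")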
