Cloud-based implementations of CD

All of the things we discussed in the previous sections could be implemented in your own data center: the code could be hosted in a repository, a local CI server might watch for changes, and CD tooling could deploy the applications to the production environments. In the rest of the book, however, we will mainly cover CD in the cloud. Therefore, we assume that the software is built, deployed, and operated in cloud environments. We will go into more detail on cloud characteristics and delivery models in Chapter 2. In this section, we will deal with a few key technologies, as well as the benefits and drawbacks of doing CD in the cloud.

When dealing with CD and deployment, we will often refer to cloud-native applications and technologies, which are defined as follows.

Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil (Cloud Native Computing Foundation, https://www.cncf.io/about/who-we-are/).

As this definition suggests, many technologies and products may be involved when delivering cloud-native systems. Three terms come up again and again: virtualization, containers, and container orchestration/Kubernetes. Let us look at each of them in some detail:

  • Virtualization is one of the core techniques of cloud computing. It allows more efficient use of resources and supports typical cloud characteristics such as on-demand self-service, resource pooling, and rapid elasticity. Virtualization abstracts software from hardware; the term typically refers to the use of virtual machines, where the compute resources of one physical host are partitioned into multiple virtual machines.
  • Containers provide a way to package the runtime environment, as well as the application code, into a single artifact that can run on a container runtime. While containers have existed in the industry for a long time (for example, OpenVZ and BSD jails), Docker made it possible to build containers in a systematic way and to add containerization to the development process with little friction. Virtual machines introduce some overhead because each mostly runs its own kernel; containers run with almost no overhead, which leads to much better resource usage and a higher density of workloads on one machine.
  • Container orchestration deals with running many containers across many physical environments and ensures that container workloads are not only scheduled on the right node but also get the resources they need. At the time of writing, Kubernetes is the de facto standard for container orchestration. One of the main benefits of using a container orchestration platform is straightforward fault tolerance and scaling of applications: the platform's scheduler takes care of the desired number of containers and, if the resources are available, manages them according to the configured specifications. Furthermore, such an infrastructure requires operators to express deployments as configuration, either in the format provided by the platform or in a framework for managing deployments on that platform. As a result, the infrastructure is documented in code and can be recreated and duplicated with little effort. Orchestration platforms generally provide load-balancing mechanisms and the possibility to define health and readiness checks, which makes it easy for users to build auto-scaling applications (see the sketch after this list).
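
To make the declarative, desired-state model concrete, the following is a minimal sketch using the official Kubernetes Python client. The names (demo-app, the image reference, the /healthz endpoint) are hypothetical placeholders, and the snippet assumes a cluster is reachable through a local kubeconfig; it illustrates the idea rather than a production setup.

```python
# Minimal sketch: declare a Deployment and let Kubernetes converge toward it.
# Assumes the official `kubernetes` Python client is installed and a kubeconfig
# points at a reachable cluster. All names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # load credentials from ~/.kube/config

container = client.V1Container(
    name="demo-app",
    image="registry.example.com/demo-app:1.0.0",
    ports=[client.V1ContainerPort(container_port=8080)],
    # Readiness check: the platform only routes traffic to pods that report healthy.
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=5,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state; the scheduler keeps three replicas running
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Submit the desired state; the orchestrator schedules and maintains the pods.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same desired state is more commonly expressed as a YAML manifest and applied with kubectl; the point here is only that the deployment is declared as configuration rather than scripted step by step.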

When deploying to cloud-based systems, we also have to think about the infrastructure. To define the infrastructure, the term Infrastructure as Code (IaC) has emerged, which is the practice of describing the target state of the infrastructure in a declarative language. One of the major challenges we will also cover in this book is the convergence of infrastructure and application deployment.
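
As a brief illustration of describing a target state in code, here is a sketch using Pulumi's Python SDK, one of several possible IaC toolsets; the resource names are hypothetical, and Terraform, AWS CloudFormation, and similar tools follow the same declarative principle with different syntax.

```python
# Minimal Infrastructure-as-Code sketch with Pulumi's Python SDK.
# Assumes the `pulumi` and `pulumi_aws` packages are installed and AWS
# credentials are configured; all resource names are hypothetical.
import pulumi
import pulumi_aws as aws

# Declare the target state: an S3 bucket for build artifacts...
artifact_bucket = aws.s3.Bucket(
    "artifact-bucket",
    tags={"purpose": "ci-artifacts"},
)

# ...and a container registry for the images the pipeline builds.
registry = aws.ecr.Repository("app-images")

# Export identifiers so the delivery pipeline can consume them.
pulumi.export("bucket_name", artifact_bucket.id)
pulumi.export("registry_url", registry.repository_url)
```

Running pulumi up compares this declared state with what currently exists and applies only the difference, which is what makes the infrastructure reproducible and reviewable alongside the application code.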

Every major cloud service provider offers a framework for deploying applications automatically to its environment, and Infrastructure-as-Code modules are available for various toolsets as well as proprietary solutions. In addition, each cloud provider offers a managed code repository, CI tooling to build software from code, and a container registry to store container images. Last but not least, there are many options for running containers and applications with these providers, such as a managed Kubernetes service:

| Service | AWS | GCP | Azure |
| --- | --- | --- | --- |
| Code Repository (Git) | CodeCommit | Cloud Source Repositories | Azure Repos |
| CI Pipeline | CodePipeline | Cloud Build | Azure Pipelines |
| Container Registry | Elastic Container Registry | Container Registry | Azure Container Registry |
| Managed Kubernetes | Elastic Kubernetes Service (EKS) | Google Kubernetes Engine (GKE) | Azure Kubernetes Service (AKS) |

Table 1.3 — Examples of cloud services for CD

In this section, we discussed a few basics of cloud-based CD implementations and the services that can be consumed from cloud providers. As we progress through this book, we will use the resources provided by these cloud providers, as well as tools that can be installed on cloud provider resources.
