Cloud-based implementations of CD
All of the things we discussed in the previous sections could be implemented in your own data center: the code could be hosted in a repository, a local CI server could watch for changes, and CD tooling could deploy the applications to the production environments. In the rest of the book, however, we will mainly cover CD in the cloud and therefore assume that the software is built, deployed, and operated in cloud environments. We will go into more detail on cloud characteristics and delivery models in Chapter 2. In this section, we will look at a few key technologies, as well as the benefits and drawbacks of doing CD in the cloud.
When dealing with CD and deployment, we will often refer to cloud-native applications and technologies, which are defined as follows.
Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil (the Cloud Native Computing Foundation – https://www.cncf.io/about/who-we-are/).
As this definition suggests, many technologies and products may be involved in delivering cloud-native systems. Three terms come up particularly often: virtualization, containers, and container orchestration (most prominently Kubernetes). Let us look at them in some detail:
- Virtualization is one of the core techniques of cloud computing: it enables more efficient use of resources and supports typical cloud characteristics such as on-demand self-service, resource pooling, and rapid elasticity. Virtualization abstracts the software from the underlying hardware; the term typically refers to the use of virtual machines, where the compute resources of one physical host are partitioned into multiple virtual machines.
- Containers provide a way to package the runtime environment, together with the application code, into a single artifact that can run on a container runtime. While containers have existed in the industry for a long time (for example, OpenVZ and BSD jails), Docker extended the technology so that containers can be built in a systematic way and containerization can easily be added to the development process. Whereas virtual machines introduce some overhead into the system, as each mostly runs its own kernel, containers run with almost no overhead, which leads to much better resource utilization and a higher workload density on a single machine (a minimal docker-compose sketch follows this list).
- Container orchestration deals with running many containers across many physical environments and ensures that container workloads are not only scheduled on the right node but also get the resources they need. At the time of writing, Kubernetes is the de facto standard for container orchestration. One of the main benefits of using a container orchestration platform is the simple fault tolerance and scaling of applications: the platform's scheduler keeps track of the desired number of containers and, if the resources are available, manages them according to the configured specifications. Furthermore, such a platform requires operators to describe their workloads as configuration, either in the format provided by the platform or in a framework for managing deployments on that platform. As a result, the infrastructure in use is documented in code and can be recreated and duplicated with little effort. Orchestration platforms generally provide load-balancing mechanisms and let users define health and readiness checks, which makes it easy to build auto-scaling applications (see the Kubernetes Deployment sketch after this list).
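To make the container idea concrete, the following minimal docker-compose sketch packages the code in the current directory into an image and runs it on a local container runtime. The service name, image tag, and port are illustrative assumptions, and the file presumes a Dockerfile exists in the project root:

```yaml
# docker-compose.yml: build the application into a container image and run it.
# All names and ports are hypothetical placeholders.
services:
  app:
    build: .              # package the code in this directory into an image
    image: my-app:1.0.0   # tag for the resulting artifact
    ports:
      - "8080:8080"       # map the (assumed) application port to the host
```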
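Likewise, a minimal Kubernetes Deployment manifest illustrates the declarative style described in the last bullet: the desired replica count, resource requests, and a readiness check are written down as configuration, and the scheduler converges the cluster toward this state. The application name, image, and health endpoint are assumptions made for the sake of the sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3                   # desired number of running containers
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # image built in the previous sketch
          resources:
            requests:           # resources the scheduler reserves on a node
              cpu: 250m
              memory: 128Mi
          readinessProbe:       # only route traffic once the app reports ready
            httpGet:
              path: /healthz    # assumed health endpoint
              port: 8080
```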
When deploying to cloud-based systems, we also have to think about the infrastructure itself. To define it, the practice of Infrastructure as Code has emerged: describing the target state of the infrastructure in a declarative language. One of the major challenges we will also cover in this book is the convergence of infrastructure deployment and application deployment.
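As a small sketch of this declarative style, the following AWS CloudFormation template (one provider-specific Infrastructure-as-Code format; Terraform would be a provider-agnostic alternative) describes the target state of a container registry. The repository name is a hypothetical placeholder:

```yaml
# template.yaml: declare a container registry as code (illustrative only).
AWSTemplateFormatVersion: "2010-09-09"
Description: Container registry for CD artifacts, managed as code
Resources:
  AppImageRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-app    # hypothetical repository name
```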
Every major cloud service provider offers a framework for deploying applications automatically to its environment; Infrastructure-as-Code modules are available both for various third-party toolsets and as proprietary solutions. In addition, each cloud provider offers a managed code repository, CI tooling to build software from code, and container registries to store the resulting container images. Last but not least, there are many options for running containers and applications with these providers, such as a managed Kubernetes service:
| Service | AWS | GCP | Azure |
| --- | --- | --- | --- |
| Code Repository (Git) | CodeCommit | Cloud Source Repositories | Azure Repos |
| CI Pipeline | CodePipeline | Cloud Build | Azure Pipelines |
| Container Registry | Elastic Container Registry (ECR) | Container Registry | Azure Container Registry |
| Managed Kubernetes | Elastic Kubernetes Service (EKS) | Google Kubernetes Engine (GKE) | Azure Kubernetes Service (AKS) |

Table 1.3 — Examples of cloud services for CD
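To connect these services, the following sketch shows a minimal Google Cloud Build configuration (one of the CI services in Table 1.3) that builds a container image from the checked-out code and pushes it to the provider's container registry. The image name is a placeholder; $PROJECT_ID and $SHORT_SHA are substitution variables that Cloud Build populates for triggered builds:

```yaml
# cloudbuild.yaml: build an image from the repository and push it on success.
steps:
  - name: gcr.io/cloud-builders/docker   # builder image supplied by Cloud Build
    args: ["build", "-t", "gcr.io/$PROJECT_ID/my-app:$SHORT_SHA", "."]
images:
  - "gcr.io/$PROJECT_ID/my-app:$SHORT_SHA"   # pushed to the container registry
```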
In this section, we discussed a few basics of cloud-based CD implementations and the services that can be consumed from cloud providers. As we progress through this book, we will use both the managed resources offered by these providers and tools that can be installed on cloud provider infrastructure.