The automated deployment pipeline

We already know what the CD process is and why we use it. In this section, we describe how to implement it.

Let's start by emphasizing that each phase in the traditional delivery process is important; otherwise, it would never have been created in the first place. No one wants to deliver software without testing it first! The role of the UAT phase is to detect bugs and to ensure that what the developers created is what the customer wanted. The same applies to the operations team: the software must be configured, deployed to production, and monitored. That goes without saying. So, how do we automate the process while preserving all the phases? That is the role of the automated deployment pipeline, which consists of three stages, as presented in the following diagram:

The automated deployment pipeline is a sequence of scripts that is executed after every code change is committed to the repository. If the process is successful, it ends with the deployment of the application to the production environment.

Each step corresponds to a phase in the traditional delivery process, as follows:

  • Continuous Integration: This checks to make sure that the code written by different developers integrates correctly.
  • Automated Acceptance Testing: This checks whether the client's requirements are met by the features the developers implement. It also replaces the manual QA phase.
  • Configuration Management: This replaces the manual operations phase; it configures the environment and deploys the software (a minimal pipeline sketch follows this list).
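
To make this structure concrete, here is a minimal, hypothetical sketch of such a pipeline written as a declarative Jenkinsfile; the shell commands are placeholders for whatever build, test, and deployment scripts your project uses:

    pipeline {
        agent any
        stages {
            stage('Continuous Integration') {
                steps {
                    // Compile, run unit tests, and check the code quality
                    sh './gradlew build'
                }
            }
            stage('Automated Acceptance Testing') {
                steps {
                    // Run the acceptance test suite against a release candidate
                    sh './acceptance-tests.sh'
                }
            }
            stage('Configuration Management') {
                steps {
                    // Configure the environment and deploy to production
                    sh './deploy.sh production'
                }
            }
        }
    }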

Let's take a deeper look at each phase to understand its responsibility and what steps it includes.

Continuous Integration (CI)

The CI phase provides the first feedback to developers. It checks out the code from the repository, compiles it, runs unit tests, and verifies the code quality. If any step fails, the pipeline execution is stopped, and the first thing the developers should do is fix the CI build. The essential aspect of this phase is time; it must execute quickly. For example, if this phase took an hour to complete, developers would commit code faster than the pipeline could run, which would result in a constantly failing pipeline.
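
Expanding the first stage of the earlier sketch, the CI phase might break down into the following steps (the Gradle task names are hypothetical placeholders; substitute your project's build commands):

    stage('Continuous Integration') {
        steps {
            // Check out the code from the repository
            checkout scm
            // Compile the sources; fail fast on compilation errors
            sh './gradlew compileJava'
            // Run the unit test suite
            sh './gradlew test'
            // Verify the code quality with static analysis
            sh './gradlew checkstyleMain'
        }
    }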

The CI pipeline is usually the starting point. Setting it up is simple because everything is done within the development team, and no agreement with the QA and operations teams is necessary.

Automated acceptance testing

The automated acceptance testing phase is a suite of tests written together with the client (and QAs) that is supposed to replace the manual UAT stage. It acts as a quality gate to decide whether a product is ready for release. If any of the acceptance tests fail, pipeline execution is stopped and no further steps are run. It prevents movement to the configuration management phase and, hence, the release.

The whole idea of automating the acceptance phase is to build quality into the product instead of verifying it later. In other words, when a developer completes the implementation, the software is already delivered together with acceptance tests that verify that the software is what the client wanted. That is a significant shift in how we think about testing software. There is no longer a single person (or team) who approves the release; everything depends on passing the acceptance test suite. That is why creating this phase is usually the most difficult part of the CD process. It requires close cooperation with the client, and tests must be created at the beginning (not at the end) of the process.
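
In a Docker-based pipeline, the acceptance stage typically packages the application into an image, starts a container, and runs the test suite against it. The following is a hypothetical sketch; the image name, port, and test command are placeholder assumptions:

    stage('Automated Acceptance Testing') {
        steps {
            // Build a release candidate image
            sh 'docker build -t my-app:candidate .'
            // Start the candidate so that tests can reach it on a known port
            sh 'docker run -d --name my-app-test -p 8080:8080 my-app:candidate'
            // Run the acceptance test suite against the running container
            sh './gradlew acceptanceTest'
        }
        post {
            always {
                // Remove the test container regardless of the result
                sh 'docker rm -f my-app-test'
            }
        }
    }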

Note

Introducing automated acceptance tests is especially challenging in the case of legacy systems. We discuss this topic in greater detail in Chapter 9, Advanced Continuous Delivery.

There is usually a lot of confusion about the types of tests and their place in the CD process. It's also often unclear how to automate each type, what the coverage should be, and what the role of the QA team should be in the development process. Let's clarify this using the Agile testing matrix and the testing pyramid.

The Agile testing matrix

Brian Marick, in a series of blog posts, classified software tests in the form of the Agile testing matrix. It places tests along two dimensions: whether they are business-facing or technology-facing, and whether they support programmers or critique the product. Let's have a look at that classification:

Let's comment briefly on each type of test:

  • Acceptance Testing (automated): These tests represent functional requirements seen from the business perspective. They are written in the form of stories or examples by clients and developers to agree on how the software should work.
  • Unit Testing (automated): These tests help developers to provide high-quality software and minimize the number of bugs.
  • Exploratory Testing (manual): This is manual black-box testing, in which testers try to break or improve the system.
  • Non-functional Testing (automated): These tests verify system properties related to performance, scalability, security, and so on.

This classification answers one of the most important questions about the CD process: what is the role of a QA in the process?

Manual QAs perform exploratory testing: they play with the system, try to break it, ask questions, and think about improvements. Automation QAs help with non-functional and acceptance testing; for example, they write code to support load testing. In general, QAs no longer have a special place in the delivery process; instead, they have a role within the development team.

Note

In the automated CD process, there is no longer a place for manual QAs who perform repetitive tasks.

You may look at the classification and wonder why there are no integration tests there. Where do they fit according to Brian Marick, and where should we put them in the CD pipeline?

To explain this well, we first need to mention that the meaning of an integration test differs depending on the context. For (micro)service architectures, they usually mean exactly the same as acceptance tests, since the services are small and need nothing more than unit and acceptance tests. If you build a modular application, then integration tests usually mean component tests that bind multiple modules (but not the whole application) and test them together. In that case, integration tests sit somewhere between acceptance and unit tests. They are written in a similar way to acceptance tests, but are usually more technical and require mocking not only external services, but also internal modules. Integration tests, like unit tests, represent the code point of view, while acceptance tests represent the user point of view. With regard to the CD pipeline, integration tests are simply implemented as a separate phase in the process.
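
If your project has such component-level integration tests, they can run as their own stage between unit and acceptance testing. A minimal sketch, assuming a hypothetical integrationTest Gradle task:

    stage('Integration Testing') {
        steps {
            // Run component tests that bind multiple modules together,
            // with external services (and some internal modules) mocked
            sh './gradlew integrationTest'
        }
    }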

The testing pyramid

The previous section explained what each test type represents in the process, but mentioned nothing about how many tests we should develop. So, what should the code coverage be in the case of unit testing? What about acceptance testing?

To answer these questions, Mike Cohn created the so-called testing pyramid in his book, Succeeding with Agile. Let's look at the diagram to develop a better understanding of this:

When we move up the pyramid, the tests become slower and more expensive to create. They often require user interfaces to be touched and a separate test automation team to be hired. That is why acceptance tests should not target 100% coverage. On the contrary, they should be feature-oriented and verify only selected test scenarios. Otherwise, we would spend a fortune on test development and maintenance, and our CD pipeline build would take ages to execute.

The case is different at the bottom of the pyramid. Unit tests are cheap and fast, so we should strive for 100% code coverage. They are written by developers, and providing them should be a standard procedure for any mature team.
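
As a point of reference, a unit test is usually just a few lines of fast, isolated code. Here is a minimal sketch written in Spock (a Groovy testing framework); the Calculator class is a hypothetical example:

    import spock.lang.Specification

    // A minimal unit test for a hypothetical Calculator class
    class CalculatorSpec extends Specification {

        def "should add two numbers"() {
            given:
            def calculator = new Calculator()

            expect:
            calculator.add(2, 3) == 5
        }
    }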

I hope that the agile testing matrix and the testing pyramid clarified the role and the importance of acceptance testing.

Let's now move to the last phase of the CD process, configuration management.

Configuration management

The configuration management phase is responsible for tracking and controlling changes in the software and its environment. It involves preparing and installing the necessary tools, scaling the number of service instances and managing their distribution, maintaining the infrastructure inventory, and all tasks related to application deployment.

Configuration management is a solution to the problems posed by manually deploying and configuring applications in production. This common practice results in an issue whereby we no longer know where each service is running and with what properties. Configuration management tools (such as Ansible, Chef, or Puppet) enable us to store configuration files in the version control system and track every change that is made on the production servers.
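
In the pipeline, the configuration management phase can be a stage that runs one of these tools. The following is a minimal sketch invoking Ansible; the inventory and playbook paths are placeholder assumptions:

    stage('Configuration Management') {
        steps {
            // Apply the version-controlled configuration to the production hosts
            sh 'ansible-playbook -i inventory/production playbook.yml'
        }
    }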

An additional effort required to replace the manual tasks of the operations team is application monitoring. This is usually done by streaming the logs and metrics of the running systems to a common dashboard, which is monitored by developers (or the DevOps team, as explained in the next section).