Accelerating DevSecOps on AWS

Chapter 1: CI/CD Using AWS CodeStar

This chapter will first introduce you to the basic concepts of Continuous Integration/Continuous Deployment (or Continuous Delivery) (CI/CD) and a branching strategy. Then, we will implement basic CI/CD for a sample Node.js application using Amazon Web Services (AWS) CodeStar, which will deploy the application to Elastic Beanstalk. We will begin by creating a CodeStar project, then enhance it by adding develop and feature branches in a CodeCommit repository. We will also add a manual approval process as well as a production stage in CodePipeline, and spin up the production environment (by modifying a CloudFormation template) so that the production stage of the pipeline can deploy the application. After that, we will create two Lambda functions that validate the Pull Request (PR) raised from the feature branch to the develop branch by checking the status of the CodeBuild project. Working through this entire activity will give you an overall idea of AWS Developer Tools (CodeCommit, CodeBuild, and CodePipeline) and how to implement a cloud-native CI/CD pipeline.

In this chapter, we are going to cover the following main topics:

  • Introduction to CI/CD, along with a branching strategy
  • Creating a project in AWS CodeStar
  • Creating feature and development branches, as well as an environment
  • Validating PRs/Merge Requests (MRs) into the develop branch from the feature branch via CodeBuild and AWS Lambda
  • Adding a production stage and environment

Technical requirements

To get started, you will need an AWS account and the source code of this repository, which can be found at https://github.com/PacktPublishing/Accelerating-DevSecOps-on-AWS/tree/main/chapter-01.

Introduction to CI/CD, along with a branching strategy

In this section of the chapter, we will dig into what exactly CI/CD is and why it is so important in the software life cycle. Then, we will learn about a branching strategy and how we use it in the source code repository to make software delivery more efficient, collaborative, and faster.

CI

Before getting to know about CI, let's have a brief look at what happens in a software development workflow. Suppose you are working independently, and you have been asked to develop a web application that is a chat system. The first thing you will do is create a Git repository, write your code on your local machine, build the code, and run some tests. If it works fine in your local environment, you will then push it to a remote Git repository. After that, you will build this code for the different environments where the actual application will run and put the artifact in the artifact registry. After that, you will deploy that artifact onto the application server where your application will be running.

Now, suppose the frontend of your application is not too good, and you want some help from a frontend developer. The frontend developer will clone the code repository, then contribute to it either by modifying the existing code or adding new code. After that, they will commit the code and push it to the repository. Then again, the same build and deploy steps will take place, and your application will be running with the new User Interface (UI). Now, what if you and the frontend developer both want to enhance the application, you both edit the same file with your own changes, and you both try to push back to the repository? If there are no conflicts, Git will allow you to update the repository; if there are conflicts, it will highlight them to you. Once your code repository is updated, you must again build the code and run some unit tests. If the tests find a bug, the build process will fail, and you or the frontend developer will need to fix the bug and run the build and unit tests again. Once this passes, you will need to put the build artifact into the artifact registry and later deploy it onto the application server. This whole manual process of building, testing, and making the artifact ready for deployment becomes quite troublesome and slow as your application gets bigger and the number of collaborators increases, which in turn slows the deployment of your application. These problems of slow feedback and manual processes will easily put the project off schedule. To solve this, we have a CI process.

CI is a process where all the collaborators/developers contribute their code, several times a day, to a central repository, which is in turn integrated with an automated system that pulls the code from the repository, builds it, runs unit tests, fails the build and gives feedback if there are bugs, and prepares the artifact so that it is deployment-ready. The process is illustrated in the following diagram:

Figure 1.1 – CI process

CI makes sure that software components or services work together. The integration process should take place and complete frequently. This increases the frequency of developer code commits and reduces the chances of incompatible code and redundant effort. To implement a CI process, we need, at the very least, the following tools:

  • Version Control System (VCS)
  • Build tool
  • Artifact repository manager

While implementing a CI process, our code base must be under version control. Every change applied to the code base must be stored in a VCS. Once the code is version controlled, it can be accessed by the CI tool. The most widely used VCS is Git, and in this book, we will also be using Git-based tools. The next requirement for CI is a build tool, which compiles your code and produces the executable file in an automated way. The build tool depends on the technology stack; for instance, for Java, the build tool will be Maven or Ant, while for Node.js, it will be npm. Once an executable file is generated by the build tool, it is stored in the artifact repository manager. There are many tools available on the market, for example, Sonatype Nexus Repository Manager (NXRM) or JFrog Artifactory. In this book, our pipelines will store build artifacts in an Amazon Simple Storage Service (S3) bucket (AWS also offers CodeArtifact as a managed artifact repository). The whole CI workflow will be covered in detail after the Branching strategy (Gitflow) section.
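To make this concrete, here is a minimal sketch of the loop a CI tool automates on every commit, assuming a Node.js project; the repository URL and bucket name (my-artifact-bucket) are placeholders, not values from this book's project:

    $ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-app
    $ cd my-app
    $ npm ci     # install the exact dependency versions
    $ npm test   # run unit tests; a failure stops the pipeline here
    $ npm pack   # package the build into a versioned tarball
    $ aws s3 cp my-app-1.0.0.tgz s3://my-artifact-bucket/my-app/   # store the deployment-ready artifact

Later in this chapter, CodeBuild performs these build and test steps for us, driven by a buildspec.yml file.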

CD

CD is a process where the generated executable files or packages (in the CI process) are installed or deployed on application servers in an automated manner. So, CI is basically the first step toward achieving CD.

There is a difference between continuous deployment and continuous delivery, but most of the time, you will see or implement continuous delivery, especially in the financial sector or any business-critical domain.

Continuous delivery is a process whereby, after all the CI steps of building and testing, deployment to the application server happens with human intervention. Human intervention means either clicking a button in a build tool to deploy or approving the deployment through, say, a Slack bot. The continuous deployment process differs slightly, in that the deployment of a successful build to the application server takes place automatically, without human intervention.

The processes are illustrated in the following diagram:

Figure 1.2 – CD processes

On reading up to this point, you must be thinking that the CI process is more complex than the CD process, but the CD process is trickier when deploying an application to the production server, especially when it is serving thousands to millions of end customers. Any bad experience with the application running on your production server may drive customers away, which results in a loss for the business. For instance, suppose version 1 (v1) of your application is running live right now and your manager has asked you to deploy version 1.1 (v1.1), but during the deployment some problem occurs and v1.1 does not run properly, so you then have to roll back to the previous version, v1. All of these things need to be planned and automated as part of a deployment strategy.

Some CD strategies used in DevOps methodologies are mentioned in the following list:

  • Blue-green deployment
  • Canary deployment
  • Recreate deployment
  • A/B testing deployment

Let's have a look at these strategies in brief.

Blue-green deployment

A blue-green deployment is a deployment strategy or pattern where we can reduce downtime by having two production environments. It provides near-zero-downtime rollback capabilities. The basic idea of a blue-green deployment is to shift the traffic from the current environment to another environment. The environments will be identical and run the same application but will have different versions.

You can see an illustration of this strategy in the following diagram:

Figure 1.3 – Blue-green demonstration

In the preceding diagram, we can see that initially, the live application was running in the blue environment, which has App v1; later, traffic was switched to the green environment, which is running the latest version of the application, App v2.
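Elastic Beanstalk, which we use later in this chapter, supports this pattern natively through CNAME swapping. A hedged sketch, assuming two environments named northstarapp-blue (running App v1) and northstarapp-green (running App v2); the environment names are illustrative:

    $ # shift live traffic by swapping the environments' CNAMEs
    $ aws elasticbeanstalk swap-environment-cnames \
        --source-environment-name northstarapp-blue \
        --destination-environment-name northstarapp-green

Running the same command again swaps the CNAMEs back, which is what makes rollback near-instant.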

Canary deployment

In a canary deployment strategy, applications or services are deployed incrementally to a subset of users. Once this subset of users starts using an application or service, important application metrics are collected and analyzed to decide whether the new version is good to be rolled out to all users at full scale or needs to be rolled back for troubleshooting. All infrastructure in the production environment is updated in small phases (for example, 10%, 20%, 50%, 75%, 100%).

You can see an illustration of this strategy in the following diagram:

Figure 1.4 – Canary deployment phases
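One common way to implement the incremental traffic shift is with weighted target groups on an Application Load Balancer. A sketch, assuming the listener and target group ARNs are placeholders; here, 90% of traffic stays on v1 while 10% is canaried onto v2:

    $ aws elbv2 modify-listener --listener-arn <listener-arn> \
        --default-actions '[{"Type":"forward","ForwardConfig":{"TargetGroups":[
          {"TargetGroupArn":"<v1-target-group-arn>","Weight":90},
          {"TargetGroupArn":"<v2-target-group-arn>","Weight":10}]}}]'

Each phase of the rollout repeats this call with a higher weight for v2 until it reaches 100%.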

Let's move on to the next strategy.

Recreate deployment

With this deployment strategy, we stop the older version of an application before deploying the newer version. For this deployment, downtime of service is expected, and a full restart cycle is executed.

You can see an illustration of this strategy in the following diagram:

Figure 1.5 – Recreate deployment steps

Let's have a look at the next deployment strategy.

A/B testing deployment

A/B testing is a deployment strategy whereby we run different versions of the same application/service simultaneously, for experimental purposes, in the same environment for a certain period. This strategy consists of routing the traffic of a subset of users to a new feature or function, gathering their feedback and metrics, and then comparing these with the older version. After comparing the feedback, the decision-maker updates the entire environment with the chosen version of the application/service.

You can see an illustration of this strategy in the following diagram:

Figure 1.6 – A/B testing demonstration

So far, we have seen the deployment strategies, but we do not deploy the application to the production server immediately after the build artifact is ready from the CI process. We deploy and test the application in various environments and, after success, we deploy to the production environment. We will now see how application versions relate to branches and environments.

Branching strategy (Gitflow)

In the preceding two sections, we got to know about CI and CD, but it is not possible to have a good CI/CD strategy if you do not have a good branching strategy. But what does a branching strategy mean, and what exactly are branches?

Whenever a developer writes code in a local machine, after completing the code, they upload/push it to a VCS (Git). The reason for using a VCS is to store the code so that it can be used by other developers and can be tracked and versioned. When a developer pushes the code to Git for the first time, it goes to the master/main branch. A branch in Git is an independent line of development and it serves as an abstraction for the edit/commit/stage process. We will explore the Gitflow branching strategy with a simple example.

Suppose we have a project to implement a calculator. The calculator will have functions of addition, subtraction, multiplication, division, and average. We have three developers (Lily, Brice, and Geraldine) and a manager to complete this project. The manager has asked them to deliver the addition function first. Lily quickly developed the code, built and tested it, pushed it to the Git main/master branch, and tagged it with version 0.1, as illustrated in the following screenshot. The code in the main/master Git branch always reflects the production-ready state, meaning the code for addition will be running in the production environment.

Figure 1.7 – Master branch with the latest version of the code
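In Git terms, Lily's steps would look something like the following sketch (the file name addition.js is hypothetical):

    $ git init calculator && cd calculator   # create the repository
    $ git add addition.js                    # Lily's addition function
    $ git commit -m "Add addition function"
    $ git tag 0.1                            # mark the production-ready state
    $ git push origin master --tags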

Now, the manager has asked Brice to start the development of subtraction and multiplication functions as the major functionality for a calculator project for the next release and asked Geraldine to develop division and average functions as a functionality for a future release. Thus, the best way to move ahead is to create a develop branch that will be an exact replica of the working code placed in the master branch. A representation of this branch can be seen in the following screenshot:

Figure 1.8 – Checking out the develop branch from master

So, once a develop branch gets created out of the master branch, it will have the latest code of the addition function. Now, since Brice and Geraldine must work on their tasks, they will create feature branches out of the develop branch. Feature branches are used by developers to develop new features for upcoming releases. A feature branch branches off from the develop branch and must be merged back into the develop branch once the development of the new functionality is complete, as illustrated in the following screenshot:

Figure 1.9 – Creating feature branches from the develop branch

While Brice (responsible for subtraction and multiplication) and Geraldine (responsible for division and average) have been working on their functionality and committing to their branches, the manager has found a bug in the current live production environment and has asked Lily to fix it. It is never good practice, and not at all recommended, to fix a bug directly in the production environment. So, what Lily will have to do is create a hotfix branch from the master branch, fix the bug in the code, and then merge it into the master branch as well as the develop branch. Hotfix branches exist to take immediate action on an undesired state of the master branch. A hotfix branch must branch off from the master branch and, after the bug is fixed, must be merged into both the master and develop branches, so that the current develop branch no longer contains the bug and can be deployed smoothly in the next release cycle. Once the fixed code is merged into the master branch, it gets a new minor version tag and is deployed to the production environment.

This process is illustrated in the following screenshot:

Figure 1.10 – Checking out hotfix from master and later merging into the master and develop branches
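A hedged sketch of the hotfix flow in Git commands (the branch name hotfix/addition-bug is hypothetical):

    $ git checkout -b hotfix/addition-bug master   # branch off from master
    $ git commit -am "Fix addition bug"            # after editing the code
    $ git checkout master
    $ git merge --no-ff hotfix/addition-bug        # fix goes into master...
    $ git tag 0.1.1                                # ...and gets a new version tag
    $ git checkout develop
    $ git merge --no-ff hotfix/addition-bug        # ...and also into develop
    $ git branch -d hotfix/addition-bug            # hotfix branches do not persist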

Now, once Brice completes his development (subtraction and multiplication), he will then merge his code from the feature branch into the develop branch. But before he merges, he needs to raise a PR/MR. As the name implies, this requests the maintainer of the project to merge the new feature into the develop branch, after reviewing the code. All companies have their own requirements and policies enforced before a merge. Some basic requirements to get a feature merged into the develop branch are that the feature branch should get built successfully without any failures and must have passed a code quality scan.

You can see an example of a PR/MR in the following diagram:

Figure 1.11 – PR raised from the feature branch, post-approval, merged into the develop branch

Once Brice's code (subtraction and multiplication feature) is accepted and merged into the develop branch, then the release process will take place. In the release process, the develop branch code gets merged to the release branch. The release branch basically supports preparation for a new production release. The code in the release branch gets deployed to an environment that is similar to a production environment. That environment is known as staging (pre-production). A staging environment not only tests the application functionality but also tests the load on the server in case traffic increases. If any bugs are found during the test, then these bugs need to be fixed in the release branch itself and merged back into the develop branch.

The process is illustrated in the following screenshot:

Figure 1.12 – Merging the code into release branch from develop branch, fixing bugs (if any), and then merging back into develop branch

Once all the bugs are fixed and testing is successful, the release branch code will get merged into the master branch, then tagged and deployed to the production environment. So, after that, the application will have three functions: addition, subtraction, and multiplication. A similar process will take place for the new features of division and average developed by Geraldine, and finally, the version will be tagged as 1.1 and deployed in the production environment, as illustrated in the following diagram:

Figure 1.13 – Merging code from the release branch into the master branch

These are the main branches in the whole life cycle of an application:

  • Master
  • Develop

But during a development cycle, the supporting branches, which do not persist once the merge finishes, are shown here:

  • Feature
  • Hotfix
  • Release

Since we now understand branching and CI/CD, let's club all the pieces together, as follows:

Figure 1.14 – CI/CD stages

So, in the preceding diagram, we can see that when a developer finishes their work in a feature branch, they raise a PR to the develop branch and the CI pipeline gets triggered. The CI pipeline is nothing but an automated flow or process in a CI tool such as Jenkins or AWS CodePipeline. This CI pipeline will validate whether the feature branch meets all the criteria to be merged into the develop branch. If the CI pipeline runs and builds successfully, then the lead maintainer of the project will merge the feature branch into the develop branch, where another automated CI pipeline will be triggered and deploy the new feature in the development environment. This whole process is known as CI (colored in blue in the preceding diagram). Post-deployment in the development environment, some automated tests run on top of it. If everything goes well and all the metrics look good, then the develop branch gets merged into the staging branch. During this merge process, another automated pipeline gets triggered, which deploys the artifact (uploaded during the develop branch CI process) into the staging environment. The staging environment is generally a close replica of the production environment, where some other tests such as Dynamic Application Security Testing (DAST) and load/stress testing take place. If all the metrics and data from the staging environment look good, then the staging branch code gets merged into the master branch and tagged as a new version.

If the maintainer deploys the tagged artifact in the production environment, then it is considered as continuous delivery. If the deployment happens without any intervention, then it is considered as continuous deployment.

So far, we have learned how application development and deployment take place. The preceding concepts of CI/CD and branching strategies are important to be familiar with before moving ahead and understanding the rest of the chapters. In the next section, we will learn about AWS CodeStar, the AWS-managed CI/CD service, and will use it to create and deploy a project in development, staging, and production environments.

Creating a project in AWS CodeStar

In this section of the chapter, we will understand the core components of AWS CodeStar and will create a project and replace the existing project code with our own application code.

Introduction to AWS CodeStar

AWS CodeStar is a managed service by AWS that enables developers to quickly develop, build, and deploy an application on AWS. It provides all the necessary templates and interfaces to get you started. This service essentially gives you an entire CI/CD toolchain within minutes using a CloudFormation stack. It is quite good for start-up companies that want to focus only on business logic and do not want to spend any time setting up an infrastructure environment. It is so cloud-centric that it is integrated with the Cloud9 editor to edit your application code and CloudShell to perform terminal/shell-related actions. AWS CodeStar is a free service, but you pay for the other resources that get provisioned with it; for example, if you use this service to deploy your application on an Elastic Compute Cloud (EC2) instance, then you will not pay for AWS CodeStar itself but will pay for the EC2 instance. AWS CodeStar is tightly integrated with the other AWS developer tools mentioned next.

AWS CodeCommit

AWS CodeCommit is a VCS that is managed by AWS, where we can privately store and manage code in the cloud and integrate it with AWS. It is a highly scalable and secure VCS that hosts private Git repositories and supports the standard functionality of Git, so it works very well with your existing Git-based tools.
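For illustration, this is how a repository can be created and cloned from the command line; in this chapter, CodeStar creates the repository for us, so this sketch is not a required step:

    $ aws codecommit create-repository --repository-name northstar
    $ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/northstar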

AWS CodeBuild

AWS CodeBuild is a cloud-hosted and fully managed build service that compiles our source code, runs unit tests, and produces artifacts that are ready to deploy.

AWS CodeDeploy

AWS CodeDeploy is a deployment service provided by AWS that automates the deployment of an application to Amazon EC2 instances, Elastic Beanstalk, and on-premises instances. It can deploy virtually any kind of application content, such as code, configuration, scripts, executable files, and multimedia. CodeDeploy can deploy application files stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories.

AWS CodePipeline

AWS CodePipeline comes into the picture when you want to automate the whole software release process. AWS CodePipeline is a CD and release automation service that helps smooth out deployments. You can quickly configure the different stages of a software release process. AWS CodePipeline automates the steps required to release software changes continuously.
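Once the pipeline for our project exists, its stages and their latest status can be inspected from the CLI as well as the console; a small sketch using the pipeline name we create later in this chapter:

    $ aws codepipeline list-pipelines                                # list all pipelines in the account/region
    $ aws codepipeline get-pipeline-state --name northstar-Pipeline  # per-stage status of our pipeline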

Getting ready

Before jumping into creating a project in AWS CodeStar, note the following points:

  • The project template that we will be selecting is a Node.js web application.
  • We will be using Elastic Beanstalk as application compute infrastructure.
  • Create a Virtual Private Cloud (VPC) with a private subnet and an EC2 key pair that we will be using later (a CLI sketch for the key pair follows this list).
  • If you are using Elastic Beanstalk for the first time in your AWS account, make sure the t2.micro instance type has enough Central Processing Unit (CPU) credit (see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html).
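If you do not have a key pair yet, a hedged CLI sketch for creating one (the key name northstar-key is an assumption; pick any name):

    $ aws ec2 create-key-pair --key-name northstar-key \
        --query 'KeyMaterial' --output text > northstar-key.pem   # save the private key locally
    $ chmod 400 northstar-key.pem                                 # restrict permissions so SSH accepts it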

Let's get started by following these next steps:

  1. Log in to the AWS Management Console by going to this site: https://aws.amazon.com/console/.
  2. Go to the search box and search for AWS CodeStar, then click on the result; this will redirect you to the AWS CodeStar intro/home page.
  3. Click on Create project, and you will be redirected to the Choose a project template page, where you will see information on how to create a service role. Click on Create service role, as illustrated in the following screenshot:
Figure 1.15 – Service role prompt

  4. Post that, you will see a green Success message, as illustrated in the following screenshot:
Figure 1.16 – Service role creation success message

  5. Click on the dropdown of the Templates search box, then click AWS Elastic Beanstalk under AWS service, Web application under Application type, and Node.js under Programming language, as illustrated in the following screenshot:
Figure 1.17 – Selecting service, application type, and programming language

  6. You will see two search results, Node.js and Express.js. We will go ahead with Node.js by clicking on the radio button and then on Next, as illustrated in the following screenshot:
Figure 1.18 – Selecting Node.js web application template

  7. We will be redirected to another page called Set up your project, where we will be entering northstar in the Project name field. This will auto-populate the Project ID and Repository name fields. We will be using CodeCommit for the code repository. In EC2 Configuration, we will be going ahead with t2.micro and will select the available VPC and subnet. After that, we will select an existing key pair that we should already have access to and then click Next, as illustrated in the following screenshot:
Figure 1.19 – CodeStar project setup details

  8. Post that, we will review all the information related to the project and proceed to click Create project. This process will take approximately 10-15 minutes to set up the CI/CD toolchain and Elastic Beanstalk and deploy the sample Node.js application in Elastic Beanstalk. During this process, we can go to CloudFormation, search for the awscodestar-northstar stack, and see all the resources that are getting provisioned, as illustrated in the following screenshot:
Figure 1.20 – CodeStar resources creation in CloudFormation page
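If you prefer the CLI, the same provisioning progress can be watched with a sketch such as this (the stack name matches the one above):

    $ aws cloudformation describe-stack-events --stack-name awscodestar-northstar \
        --query 'StackEvents[].[ResourceType,ResourceStatus]' --output table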

  9. We can also have a look at the Elastic Beanstalk resource by going to the Environments view of the Elastic Beanstalk console, as illustrated in the following screenshot:
Figure 1.21 – Elastic Beanstalk page showing northstarapp

  10. After 10-15 minutes, we will keep monitoring the project main page. Once the View application button is enabled, the creation of the project (including the CI/CD toolchain and environment infrastructure) and the deployment of the application are complete. We can access the application by clicking on the View application button.
  11. This will redirect us to a Node.js sample web application, as illustrated in the following screenshot:
Figure 1.22 – Default Node.js web application page

Now, before replacing the sample application with our own application, let's get to know what exactly happened at the backend, as follows:

  • CodeStar triggers an AWS CloudFormation stack to create an entire CI/CD toolchain and workload infrastructure.
  • The toolchain includes the following:
    • A CodeCommit repository with the master branch having a sample Node.js application
    • A CodeBuild project with the preconfigured environment to run the build
    • CodePipeline to trigger the build and deploy the application
  • The workload infrastructure includes Elastic Beanstalk with one EC2 instance.
  • IAM roles with certain permissions that allow CloudFormation to perform actions on other services.

To replace the sample application with our own sample application, perform the following steps:

  1. Our sample code provided by AWS CodeStar resides in AWS CodeCommit, but we are not going to edit the application code in CodeCommit directly; instead, we will use the AWS Cloud9 Integrated Development Environment (IDE) tool. Switch to the CodeStar console from Elastic Beanstalk. We need to create an IDE environment by going to the IDE tab, as illustrated in the following screenshot, and clicking on Create environment:
Figure 1.23 – CodeStar project page at IDE tab

  2. The second page will ask you for the environment configuration for the Cloud9 IDE. This IDE environment will stop automatically if it stays idle for 30 minutes, to save on cost. Once you fill in all the details, click on Create environment, as illustrated in the following screenshot:

Figure 1.24 – Cloud9 environment configuration

  3. After 10-15 minutes, you will see that the Cloud9 IDE environment is available for you to open and start using. Click on Open IDE to get to the Cloud9 IDE.
  4. The following screenshot shows the Cloud9 IDE. The IDE automatically clones the code from CodeCommit and shows it in the Cloud9 explorer. It also comes with its own shell:
Figure 1.25 – Cloud9 IDE console

  5. Go to the shell of the editor and type the following commands. These commands will set the Git profile with your name and then clone our own application code:
    $ git config --global user.name <your username>
    $ git clone https://github.com/PacktPublishing/Accelerating-DevSecOps-on-AWS.git
  6. Once you clone the application code, you will see the repository folder on the left side, as illustrated in the following screenshot (the screenshot shows AWS-CodeStar; in your case, the folder will be named Accelerating-DevSecOps-on-AWS). That folder contains all the application code and some Lambda code that we will use in this chapter:
Figure 1.26 – Cloud9 IDE with the cloned Git folder

  7. Now, type the following commands into the shell to replace the application code:
    $ cd northstar
    $ rm -rf package.json app.js public tests
    $ cd ../Accelerating-DevSecOps-on-AWS/chapter-01
    $ cp -rpf server.js source static package.json ../../northstar
  8. After that, go to the editor and edit the buildspec.yml file. Comment out line 16, where the npm test command is used, because our application does not include a test case for now (a shell one-liner for this is sketched below).
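If you would rather do this from the Cloud9 shell than the editor, a one-liner such as the following comments out that line (this assumes npm test is still on line 16 of your copy of buildspec.yml):

    $ sed -i '16 s/^/#/' buildspec.yml   # prefix line 16 with '#' to disable the npm test step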
  9. After that, push the code into CodeCommit by typing the following commands:
    $ cd ../../northstar
    $ git add .
    $ git commit -m "Replacing application code"
    $ git push origin master
  10. The moment we push the code into CodeCommit, CodePipeline will trigger automatically. This will pull the latest code from CodeCommit and pass it to CodeBuild, where the build will take place using buildspec.yml, and then it will deploy to Elastic Beanstalk using the template.yml CloudFormation template. The process is illustrated in the following screenshot:
Figure 1.27 – northstar code pipeline

  11. Once the pipeline completes successfully, go to the CodeStar northstar project page, as illustrated in the following screenshot, and click on the View application button in the top right. Alternatively, you can also access the new application via the Elastic Beanstalk page:
Figure 1.28 – New Node.js application deployed by CodePipeline

So, in this section, we created a CodeStar project, created an IDE environment, and replaced the sample application with our own application. Basically, the current flow looks like this:

Figure 1.29 – Current CodePipeline steps

We made changes in the master branch, which triggered the CodePipeline. Then, CodePipeline invoked CodeCommit to fetch the source code from the master branch, then invoked CodeBuild to build the application code, and then uploaded the artifact into an S3 bucket. Then, the CloudFormation stage did an infra check and then deployed into the current Elastic Beanstalk environment. At this stage, Elastic Beanstalk is based on a single EC2 instance. In the next three sections, we will be modifying the CI/CD pipeline and following the diagram shown here:

Figure 1.30 – CI/CD steps to be implemented in next three sections

Based on the preceding diagram, we will be performing the following tasks in the next three sections of the chapter:

  1. Creating two extra branches—a develop branch and a feature branch.
  2. Then, we will create a code pipeline for the develop branch that gets triggered once code is pushed into the develop branch. This pipeline will check out the code from the develop branch, build the application, push the artifact into an S3 bucket, spin up a separate development Elastic Beanstalk environment, and then deploy the application in the development environment.
  3. We will then create a CodeBuild project that will trigger on PR.
  4. Based on the PR build status, we will merge the code from the feature to the develop branch.
  5. We'll then modify the existing pipeline that listens to the master branch. We will modify the existing environment name to staging and add a production stage, in which we spin up a production Elastic Beanstalk environment (two EC2 instances via an Auto Scaling Group (ASG) and an Elastic Load Balancer (ELB)), and then deploy the application in the production environment.

In the next section, we will create feature and development branches in AWS CodeCommit.

Creating feature and development branches, as well as an environment

In this section, we will be creating feature and develop branches. Post that, we will create a project in CodePipeline that triggers when there is a commit in the develop branch. In the CodePipeline project, we will also be using a stage that uses CloudFormation to spin up a new Elastic Beanstalk development environment and then deploy the application in the development environment.

Creating feature and develop branches

To create feature and develop branches in CodeCommit, follow these next steps:

  1. Go to the AWS Cloud9 console shell and type the following commands:
    Nikit:~/environment/northstar (master) $ git checkout -b feature/image                                                                                                             
    Switched to a new branch 'feature/image'
    Nikit:~/environment/northstar (feature/image) $ git push origin feature/image
    Total 0 (delta 0), reused 0 (delta 0)
    To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/northstar
     * [new branch]      feature/image -> feature/image
  2. Once you perform the steps, you will be able to see a feature branch in the CodeCommit Branches section, as illustrated in the following screenshot:
Figure 1.31 – CodeCommit console with the feature branch

  3. Similarly, we need to create a develop branch by running the following commands:
    Nikit:~/environment/northstar (feature/image) $ git checkout -b develop
    Switched to a new branch 'develop'
    Nikit:~/environment/northstar (develop) $ git push origin develop
    Total 0 (delta 0), reused 0 (delta 0)
    To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/northstar
     * [new branch]      develop -> develop
  4. Again, you can go to the CodeCommit console to verify the presence of the develop branch. Both the develop branch and the feature branch contain the latest code from the master branch.

We have created two branches: a feature branch and a develop branch. Now, let's create a development environment and pipeline.

Creating a development environment and pipeline

Since CodeStar uses a CloudFormation template to create an environment, we need to modify the existing CloudFormation template to create a development environment. Perform the following steps to replace the existing CloudFormation template:

  1. Go to the AWS Cloud9 shell, navigate to the northstar folder, and make sure you are in the develop branch. Now, copy the file named codestar-EBT-cft.yaml from the Accelerating-DevSecOps-on-AWS folder to the current northstar directory and rename it template.yml by running the following commands:
    Nikit:~/environment/northstar (develop) $ cp ../Accelerating-DevSecOps-on-AWS/chapter-01/codestar-EBT-cft.yaml .
    Nikit:~/environment/northstar (develop) $ mv codestar-EBT-cft.yaml template.yml
  2. Now, push the new file into CodeCommit, as follows:
    Nikit:~/environment/northstar (develop) $ git add template.yml
    Nikit:~/environment/northstar (develop) $ git commit -m "adding development environment"
    [develop 20dd426] adding development environment
     Committer: nikit <[email protected]>
     1 file changed, 235 insertions(+), 95 deletions(-)
    Nikit:~/environment/northstar (develop) $ git push origin develop
    Enumerating objects: 5, done.
    Counting objects: 100% (5/5), done.
    Compressing objects: 100% (3/3), done.
    Writing objects: 100% (3/3), 1.59 KiB | 1.59 MiB/s, done.
    Total 3 (delta 1), reused 0 (delta 0)
    To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/northstar
       aca723e..20dd426  develop -> develop

You must be wondering what changes we have made in that template.yml CloudFormation template. At the start of the CodeStar project, the template.yml CloudFormation template includes the resource configuration of Elastic Beanstalk. To add a new environment, you must add the new environment's resource configuration to this template file.

Important Note

At this point, if you compare the template.yml file of the master branch and the develop branch, you will see that we are creating two different environments, develop and prod, and renaming the existing one to staging.
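You can see the differences yourself from the Cloud9 shell; a quick sketch:

    $ git fetch origin
    $ git diff origin/master origin/develop -- template.yml   # compare the two versions of the environment template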

Now that we have made the code changes, let's create a development pipeline that will also spin up a development environment. Follow these next steps:

  1. We need to create a role that allows CloudFormation to perform any action on other services. By default, CodeStar creates a CodeStarWorkerCloudFormationRolePolicy role through the awscodestar-<projectname> CloudFormation stack (in our case, awscodestar-northstar). We need to modify this stack so that the role grants some extra permissions for the resources that we will be creating. Proceed as follows:
    1. Copy the contents of the permission.yaml file from the cloned Accelerating-DevSecOps-on-AWS repository. Now, go to the CloudFormation console and click on Stacks, then search for awscodestar-northstar, as illustrated in the following screenshot:
Figure 1.32 – CloudFormation stack lists

    2. Select awscodestar-northstar, and then click on Update on the right-hand side.
    3. After that, select the Edit template in designer radio button and click on View in Designer, as illustrated in the following screenshot:
Figure 1.33 – CloudFormation stack update

    4. We will be redirected to the Template designer page. Switch the template language from JavaScript Object Notation (JSON) to YAML Ain't Markup Language (YAML), then replace the entire content with what you copied from the permission.yaml file. Then, click on the Validate Template icon (tick in a square box) to validate the template. If the template is valid, click on the Create stack icon (upward arrow inside a cloud).
    5. You will then be redirected to the previous Update stack page with the S3 Uniform Resource Locator (URL), as illustrated in the following screenshot. Click on Next to update the changes:
Figure 1.34 – Replacing the existing template

    6. Verify the stack details and click on Next, then again click on Next. Review all the stack changes, then check the acknowledgment box in the Capabilities section. After that, click on Update stack, as illustrated in the following screenshot:
Figure 1.35 – Confirming change set

    7. Carrying out the preceding steps will update the CloudFormation role with the new permissions we added.
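As an aside, the same template replacement can be done with a single hedged CLI call from Cloud9 instead of the designer (this assumes permission.yaml is a complete replacement template for the stack; if the stack requires parameters, they would need to be passed as well):

    $ aws cloudformation update-stack --stack-name awscodestar-northstar \
        --template-body file://permission.yaml \
        --capabilities CAPABILITY_NAMED_IAM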
  2. Once we have updated the policy inside the role, we need to create a pipeline by cloning the existing pipeline. Go to the CodePipeline console and click on northstar-Pipeline (see Figure 1.27), and then click on Clone pipeline.
  3. In the Clone pipeline configuration, rename the pipeline northstar-Pipeline-dev, and under Service role, select Existing service role and CodeStarWorker-northstar-ToolChain. Under Artifact store, choose Custom location and select the existing bucket name that refers to the development pipeline. Under Encryption key, select Default AWS Managed Key and click on Clone, as illustrated in the following screenshot:
Figure 1.36 – Pipeline clone configuration

  4. You need to stop the execution of the pipeline because we need to make further changes to point the pipeline to the develop branch. To stop the execution, click on Stop execution, specify the execution ID, select Stop and abandon, provide a comment, and finally, click on Stop (a CLI equivalent is sketched below).
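The CLI equivalent is roughly the following sketch (the execution ID is the one shown in the pipeline's history):

    $ aws codepipeline stop-pipeline-execution --pipeline-name northstar-Pipeline-dev \
        --pipeline-execution-id <execution-id> \
        --abandon --reason "Pointing the cloned pipeline to the develop branch"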
  5. Now, to make the changes to the development pipeline, we need to edit the pipeline by clicking on the Edit button, as illustrated in the following screenshot:
Figure 1.37 – Pipeline edit stages

  6. We will be able to edit multiple stages here, and we need to modify each stage. Click on Edit stage for the Source stage, then click on the icon highlighted in the following screenshot:
Figure 1.38 – Editing stage configuration

  7. On the Edit action page, enter develop under Branch name and rename the output artifact northstar-devSourceArtifact, as illustrated in the following screenshot. Then, click on Done to save the action and click on Done again to save the stage:
Figure 1.39 – Source action group configuration

  8. Similarly, edit the build stage. On the Edit action page, select northstar-devSourceArtifact under Input artifacts and rename the output artifact northstar-devBuildArtifact, as illustrated in the following screenshot. Then, click on Done to save the action and click on Done again to save the stage:
Figure 1.40 – Build action group configuration

  9. Again, click on Edit for the deploy stage. There will be two action groups, GenerateChangeSet and ExecuteChangeSet. Edit GenerateChangeSet first. Select northstar-devBuildArtifact under Input artifacts. Modify the stack name to awscodestar-northstar-infrastructure-dev. Enter pipeline-changeset under Change set name. Select northstar-devBuildArtifact under Template | Artifact name as well as under Template configuration | Artifact name. Under the Advanced section, add a new "Stage":"Dev" key-value pair, then click on Done. The process is illustrated in the following screenshot:
Figure 1.41 – Generating change set action group configuration

  10. Edit the ExecuteChangeSet action group. Just modify Stack name to awscodestar-northstar-infrastructure-dev. Under Change set name, keep pipeline-changeset, and after that, click on Done to save the action group. Then, click on Done to save the stage.
  11. After that, click on Save. Ignore the ValidationError message shown in the following screenshot:
Figure 1.42 – Saving the development pipeline

  12. At this stage, our development pipeline is ready to be executed. We will trigger this pipeline by modifying the code in the develop branch. Go to the Cloud9 IDE and select the default.jade file in northstar/source/templates. Edit the body page title to AWS CODESTAR-DEV. Save and push to the develop branch.

The process is illustrated in the following screenshot:

Figure 1.43 – Modifying the code for the develop branch

  13. The moment code gets pushed to the develop branch, it will trigger northstar-Pipeline-dev. You can also see the commit message in the pipeline. This pipeline fetches the code from the develop branch, then does the build using the steps mentioned in buildspec.yml. After the build, it generates the artifact and pushes it to the S3 bucket. Then, at the deploy stage, it creates a CloudFormation change set using the parameters we passed, then executes the change set. The change set includes the creation of a development environment, which is a single-instance Elastic Beanstalk environment, and the deployment of the application in the development environment.
  14. Once the pipeline finishes, you can go to the Elastic Beanstalk console and look for northstarappDev, as illustrated in the following screenshot, then click on that environment:
Figure 1.44 – Elastic Beanstalk console showing new development environment

  15. You will be redirected to the environment page, where you can access the application by clicking on the link, as illustrated in the following screenshot:
Figure 1.45 – Development environment console page

  16. You will be able to see the updated application, as illustrated here:
Figure 1.46 – Development environment Node.js web application

So, we have created a feature branch, a develop branch, a development pipeline, and a development environment. In the next section, we will validate the PR raised from the feature branch to the develop branch using CodeBuild and a Lambda function.

Validating PRs/MRs into the develop branch from the feature branch via CodeBuild and AWS Lambda

In this section, we will basically implement a solution that gives the status of the build of the PR raised in CodeCommit. This helps the maintainer see that the PR raised is at least passing all the builds and tests. Let's have a look at the solution and understand the flow, as follows:

Figure 1.47 – Flow diagram of the solution

The following steps explain the flow of the diagram:

  1. When a developer finishes their work in the feature branch, they will then raise a PR/MR to the develop branch.
  2. A CloudWatch event that is watching our repository will get triggered, and that will invoke the TriggerCodeBuildStart lambda function by passing some information.
  3. This TriggerCodeBuildStart lambda function will use the CloudWatch information and trigger an AWS CodeBuild project on our latest commit. After that, it will create a custom message that we want on our PR activity.
  4. Once this CodeBuild event finishes, another CloudWatch event will send those build results and comments to another lambda function (TriggerCodeBuildResult).
  5. This TriggerCodeBuildResult lambda function will comment the build result on the PR in the CodeCommit activity.

To set up the solution, perform the following steps:

  1. Go to the CodeBuild section of the project and click on Create Build Project.
  2. Enter the following information:
    1. Project name: northstar-pr
    2. Description: PR-Build
    3. Repository: northstar; Branch: feature/image
    4. Environment Image: Managed Image; OS: Amazon Linux 2; Runtime: Standard; Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0; Environment type: Linux
    5. Service Role: New service role
    6. Buildspec: Insert build commands; Build Commands: npm install && nohup npm start &
    7. Leave the rest at their default settings and click on Create build project.

The CodeBuild console of the northstar-pr project is shown in the following screenshot:

Figure 1.48 – CodeBuild console of northstar-pr project

  3. Go to the AWS Lambda console and, under Create a function, select Author from scratch, give the name TriggerCodebuildStart, and select Node.js 12.x under Runtime. In the Permissions section, you need to select a role that has permission to access AWS CloudWatch Logs, CodeBuild, and CodeCommit. You can create a policy using the lambdapermission.json file and attach it to the role.

An overview of the process is shown in the following screenshot:

Figure 1.49 – Creating a lambda function
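If you prefer to create the function from the CLI, a hedged sketch follows; the handler name index.handler and the role name are assumptions, and the zip must contain the index.js and package.json files described in the next step:

    $ zip function.zip index.js package.json
    $ aws lambda create-function --function-name TriggerCodebuildStart \
        --runtime nodejs12.x --handler index.handler \
        --role arn:aws:iam::<account-id>:role/<lambda-execution-role> \
        --zip-file fileb://function.zip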

  4. Go to the Code Source section and modify the index.js file. We will be using the source code present in the Accelerating-DevSecOps-on-AWS/chapter-01 folder that we downloaded for our sample application. There is a folder called TriggerCodeBuildStart that includes index.js and package.json files. Copy and paste both files into this Lambda function and click on Deploy, as illustrated in the following screenshot:
Figure 1.50 – Lambda function code editor

  5. After that, go to the Configuration section and click on Environment variables to add the environment variables. The lambda function code uses the three environment variables shown in the following screenshot. Click on Save once you have entered them:
Figure 1.51 – Environment variables for TriggerCodebuildStart lambda function

  6. Similarly, create another lambda function, TriggerCodebuildResult, with the code available in the TriggerCodeBuildResult folder. Deploy the Lambda function and go to the Configuration section to enter the environment variable, as illustrated in the following screenshot:
Figure 1.52 – Environment variable for TriggerCodebuildResult lambda function

  7. Once we have created our Lambda functions, we need to create CloudWatch event rules. Go to the CloudWatch console, click on Events on the left-hand side, and then click on Rule. After that, click on Create rule.
  8. Once you click on Create rule, you will be redirected to the Event Source section. Click on Edit in Event Pattern Preview, as illustrated in the following screenshot:
Figure 1.53 – CloudWatch rule creation with event pattern

  9. You will get a box where you need to paste the following event pattern and then click Save:
    {
      "source": [
        "aws.codecommit"
      ],
      "detail-type": [
        "CodeCommit Pull Request State Change"
      ],
      "resources": [
        "arn:aws:codecommit:us-east-1:<Your accountID>:northstar"
      ],
      "detail": {
        "event": [
          "pullRequestCreated",
          "pullRequestSourceBranchUpdated"
        ]
      }
    }
  10. In the Targets section, select Lambda function in the dropdown, and then select the TriggerCodebuildStart function, as illustrated in the following screenshot:

Figure 1.54 – CloudWatch target

  11. Click on Configure details to proceed to Step 2, where you need to give the rule a name and a description. Name the rule TriggerValidatePRCodeBuildStart and then save it.
  12. Similarly, create another CloudWatch rule, naming it TriggerValidatePRCodeBuildResult and giving it the following event pattern, with the target being the TriggerCodebuildResult Lambda function:
    {
      "source": [
        "aws.codebuild"
      ],
      "detail-type": [
        "CodeBuild Build State Change"
      ],
      "detail": {
        "project-name": [
          "northstar-pr"
        ],
        "build-status": [
          "FAILED",
          "SUCCEEDED"
        ]
      }
    }
  13. We now have two CloudWatch rules, TriggerValidatePRCodeBuildStart and TriggerValidatePRCodeBuildResult, as we can see in the following screenshot:
Figure 1.55 – CloudWatch rules
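For reference, the first rule and its target could also be created from the CLI; a sketch, assuming the event pattern from step 9 is saved as pr-event-pattern.json and <account-id> is a placeholder:

    $ aws events put-rule --name TriggerValidatePRCodeBuildStart \
        --event-pattern file://pr-event-pattern.json
    $ aws events put-targets --rule TriggerValidatePRCodeBuildStart \
        --targets Id=1,Arn=arn:aws:lambda:us-east-1:<account-id>:function:TriggerCodebuildStart
    $ aws lambda add-permission --function-name TriggerCodebuildStart \
        --statement-id cw-events --action lambda:InvokeFunction \
        --principal events.amazonaws.com \
        --source-arn arn:aws:events:us-east-1:<account-id>:rule/TriggerValidatePRCodeBuildStart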

  14. We are all set up with the solution. Now, to test it, we need to modify the feature/image branch and create a PR to the develop branch. We will modify the northstar/source/templates/default.jade file, save it, and push it, as illustrated in the following screenshot:
Figure 1.56 – Editing feature branch code for PR to develop branch

  15. Now, let's create a PR from the CodeCommit console. Choose feature/image under Source and develop under Destination. Enter Raising PR for Codestar-PR for Title and Description and click on Create pull request, as illustrated in the following screenshot:
Figure 1.57 – Raising PR via CodeCommit

  16. If you go to the Activity section of Pull requests, you can see a comment in Activity history, as illustrated in the following screenshot:
Figure 1.58 – PR status

  17. Meanwhile, you can see the CodeBuild logs in the CodeBuild project by going to the following screen:
Figure 1.59 – Build status

  18. Once the build is successful, you can see the build status on the Activity page, as illustrated in the following screenshot:
Figure 1.60 – PR build status

  19. Once you see that the build related to the PR commit is successful, you can merge the code into develop from feature/image by clicking on Merge (Fast forward merge), which will eventually trigger the development pipeline and deploy the new changes into the development environment, as illustrated in the following screenshot:
Figure 1.61 – northstar develop code pipeline

  20. After that, you can go to Elastic Beanstalk and open the northstarappDev endpoint, and you will see the change on the home page, as illustrated in the following screenshot:
Figure 1.62 – Modified web application running in the development environment

So far, we have a feature branch with an associated CodeBuild project that runs when a PR is raised, and a develop branch with its own development pipeline and environment. In the next section, we will modify the existing pipeline that came by default at the start of the project. We will rename its environment as staging and create a new production stage and environment.

Adding a production stage and environment

In this section, we will add a production stage to the existing pipeline and will also modify the CloudFormation template to spin up a separate production environment with two EC2 instances via an ASG under a load balancer.

Modifying the pipeline

Currently, our main pipeline looks like the one shown in Figure 1.27. The Elastic Beanstalk environment spun up by this pipeline is named northstarapp, and we need to change it to northstarappStaging. After that, we need to add a manual approval stage and then a production deployment stage. In the production deployment stage, we will add a configuration parameter in CloudFormation to spin up a production environment with the name northstarappProd and deploy the application in this new environment.

To modify the pipeline, follow these steps:

  1. Go to the northstar-Pipeline CodePipeline project (see Figure 1.27) and click on Edit.
  2. Click on Edit stage in the Edit: Deploy screen, as illustrated in the following screenshot:
Figure 1.63 – Editing existing deploy stage of pipeline

  3. Edit the GenerateChangeSet action group, go to Advanced | Parameter overrides, and add one key-value pair in JSON format: "Stage":"Staging", as illustrated in the following screenshot. Also, copy the entire JSON config into a separate note because we will reuse it in the production stage. Click on Done to save the configuration, then click on Done again to save the Deploy stage:
Figure 1.64 – Modifying parameter to be used by CloudFormation stack

  4. Add a new stage by clicking on Add stage and give it the name Approval, as illustrated in the following screenshot:
Figure 1.65 – Adding approval stage to the pipeline

  5. Click on Add action group, then enter ManualApproval under Action name and Manual approval under Action provider, as illustrated in the following screenshot. You can configure a Simple Notification Service (SNS) topic, but we are skipping this here. Click on Done to save the action group:
Figure 1.66 – Adding approval action group

  6. Click on Add stage to add a production deploy stage. Name the stage ProdDeploy. Click on Add action group. Enter GenerateChangeSet under Action name, AWS CloudFormation under Action provider, and northstar-BuildArtifact under Input artifacts. Then, select Create or replace a change set under Action mode and enter awscodestar-northstar-infrastructure-prod under Stack name, pipeline-changeset under Change set name, northstar-BuildArtifact under Template | Artifact name, and template-export.yml under File name. Select Use configuration file, then enter northstar-BuildArtifact under Template configuration | Artifact name, template-configuration.json under File name, CAPABILITY_NAMED_IAM under Capabilities, and CodeStarWorker-northstar-CloudFormation under Role name. Click on the Advanced section and paste the JSON content that we copied in Step 3. Change the Stage value to Prod and click on Done.

The process is illustrated in the following screenshot:

Figure 1.67 – GenerateChangeSet action group configuration

  7. Click again on Add action group. Enter ExecuteChangeSet under Action name and AWS CloudFormation under Action provider. Under Action mode, select Execute a change set. Under Stack name, enter awscodestar-northstar-infrastructure-prod, and under Change set name, enter pipeline-changeset. (A quick check of the resulting pipeline layout is sketched after the figure.)

The process is illustrated in the following screenshot:

Figure 1.68 – ExecuteChangeSet action group configuration
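
Once you save the pipeline in the next step, you can confirm the new stage and action layout from the SDK. A quick check, using the pipeline name created by CodeStar in this chapter:

import boto3

codepipeline = boto3.client("codepipeline")

# Print each stage and its actions to confirm that the Approval and
# ProdDeploy stages were added as expected.
pipeline = codepipeline.get_pipeline(name="northstar-Pipeline")["pipeline"]
for stage in pipeline["stages"]:
    print(stage["name"], "->", [action["name"] for action in stage["actions"]])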

  8. Save the pipeline. Now, raise a PR from develop to master and merge the code to master, which will run the northstar-Pipeline pipeline. This pipeline will rename the existing environment from northstarapp to northstarappStaging and deploy the application. Then, we need to manually check the application. If the application is working fine, we need to approve it to proceed to the ProdDeploy stage. In the ProdDeploy stage, CloudFormation will spin up a northstarappProd Elastic Beanstalk production environment and then deploy the application in it.

You can see the PR being raised in the following screenshot:

Figure 1.69 – Raising PR

  9. Merge the PR from develop to the master branch, as illustrated in the following screenshot:

Figure 1.70 – Merging PR

  10. The pipeline will get triggered after the merge process, as illustrated in the following screenshot:
Figure 1.71 – Pipeline triggered the moment merge finishes

  11. The initially created northstarapp environment will be terminated and a new northstarappStaging environment will be created, as illustrated in the following screenshot (a sketch for listing the environments from the SDK follows the figure):
Figure 1.72 – northstarapp is terminated and a new northstarappStaging environment is set up
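
If you want to verify this without the console, the environments can be listed from the SDK as well. A minimal sketch:

import boto3

elasticbeanstalk = boto3.client("elasticbeanstalk")

# List all Elastic Beanstalk environments with their status and endpoint;
# northstarappStaging should appear here once the deploy stage finishes.
for env in elasticbeanstalk.describe_environments()["Environments"]:
    print(env["EnvironmentName"], env["Status"], env.get("CNAME"))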

  12. You can access the staging application by navigating to the northstarappStaging Elastic Beanstalk environment, as illustrated in the following screenshot:
Figure 1.73 – Application running in the staging environment

  13. In the pipeline, it's waiting for approval. Approve it by entering a comment, as illustrated in the following screenshot (an SDK equivalent of the approval is sketched after the figure):
Figure 1.74 – The first screen shows waiting for manual approval while the second screen shows the approval process
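
The approval can also be recorded from the SDK. The stage and action names below match what we configured earlier; note that the token lookup assumes the approval is currently pending:

import boto3

codepipeline = boto3.client("codepipeline")

# Look up the token of the pending manual approval action, then approve it.
state = codepipeline.get_pipeline_state(name="northstar-Pipeline")
approval_stage = next(
    s for s in state["stageStates"] if s["stageName"] == "Approval"
)
token = approval_stage["actionStates"][0]["latestExecution"]["token"]

codepipeline.put_approval_result(
    pipelineName="northstar-Pipeline",
    stageName="Approval",
    actionName="ManualApproval",
    result={"summary": "Staging looks good", "status": "Approved"},
    token=token,
)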

  14. Once the ProdDeploy stage is successful, you can go to Elastic Beanstalk and search for northstarappProd, as illustrated in the following screenshot:
Figure 1.75 – Elastic Beanstalk console showing production environment

  15. You can access the application by clicking the endpoint, as illustrated in the following screenshot:
Figure 1.76 – Application running in the production environment

  16. You can also go to the Load balancer console to check the new ELB with two EC2 instances attached to it, as illustrated in the following screenshot:
Figure 1.77 – Load balancer console showing the instances attached to it

So, we just saw how to modify a pipeline and add a production stage. You can also make it more comprehensive by attaching an SNS topic to the approval stage so that approvers are notified, and by adding a Domain Name System (DNS) record for the ELB in Route 53. You can make both changes via the CloudFormation template; a minimal SNS sketch follows.
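
As a starting point for the SNS enhancement, the topic and an email subscription can be created with a couple of boto3 calls; the topic name and email address here are only illustrative:

import boto3

sns = boto3.client("sns")

# Create a topic for approval notifications and subscribe an approver.
topic_arn = sns.create_topic(Name="northstar-approval-notifications")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="approver@example.com",              # placeholder address
)

# Reference topic_arn as the SNS topic when editing the ManualApproval
# action in the Approval stage so that approvers get notified.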

Summary

So, in this chapter, we learned how to use AWS CodeStar to implement a CI/CD pipeline that covers the feature, develop, and master branches and their respective environments. We also learned how to modify AWS CodePipeline stages and validate a PR using a Lambda function and CodeBuild. If you work in a cloud-native company and want to create a CI/CD pipeline from cloud-native resources, you can now do that easily, without having to manage CI/CD toolchain servers. In the next chapter, we will learn how to enforce policies on infrastructure code.


Key benefits

  • Master the full AWS developer toolchain for building high-performance, resilient, and powerful CI/CD pipelines
  • Get to grips with Chaos engineering, DevSecOps, and AIOps as applied to CI/CD
  • Employ the latest tools and techniques to build a CI/CD pipeline for application and infrastructure

Description

Continuous integration and continuous delivery (CI/CD) has never been simple, but these days the landscape is more bewildering than ever; its terrain riddled with blind alleys and pitfalls that seem almost designed to trap the less-experienced developer. If you’re determined enough to keep your balance on the cutting edge, this book will help you navigate the landscape with ease. This book will guide you through the most modern ways of building CI/CD pipelines with AWS, taking you step-by-step from the basics right through to the most advanced topics in this domain. The book starts by covering the basics of CI/CD with AWS. Once you’re well-versed with tools such as AWS Codestar, Proton, CodeGuru, App Mesh, SecurityHub, and CloudFormation, you’ll focus on chaos engineering, the latest trend in testing the fault tolerance of your system. Next, you’ll explore the advanced concepts of AIOps and DevSecOps, two highly sought-after skill sets for securing and optimizing your CI/CD systems. All along, you’ll cover the full range of AWS CI/CD features, gaining real-world expertise. By the end of this AWS book, you’ll have the confidence you need to create resilient, secure, and performant CI/CD pipelines using the best techniques and technologies that AWS has to offer.

Who is this book for?

This book is for DevOps engineers, engineering managers, cloud developers, and cloud architects. Basic experience with the software development life cycle, DevOps, and AWS is all you need to get started.

What you will learn

  • Use AWS Codestar to design and implement a full branching strategy
  • Enforce Policy as Code using CloudFormation Guard and HashiCorp Sentinel
  • Master app and infrastructure deployment at scale using AWS Proton and review app code using CodeGuru
  • Deploy and manage production-grade clusters using AWS EKS, App Mesh, and X-Ray
  • Harness AWS Fault Injection Simulator to test the resiliency of your app
  • Wield the full arsenal of AWS Security Hub and Systems Manager for infrastructure security automation
  • Enhance CI/CD pipelines with the AI-powered DevOps Guru service
Product Details

Publication date : Apr 28, 2022
Length : 520 pages
Edition : 1st
Language : English
ISBN-13 : 9781803248608




Table of Contents

14 Chapters
Section 1: Basic CI/CD and Policy as Code
Chapter 1: CI/CD Using AWS CodeStar
Chapter 2: Enforcing Policy as Code on CloudFormation and Terraform
Chapter 3: CI/CD Using AWS Proton and an Introduction to AWS CodeGuru
Section 2: Chaos Engineering and EKS Clusters
Chapter 4: Working with AWS EKS and App Mesh
Chapter 5: Securing Private EKS Cluster for Production
Chapter 6: Chaos Engineering with AWS Fault Injection Simulator
Section 3: DevSecOps and AIOps
Chapter 7: Infrastructure Security Automation Using Security Hub and Systems Manager
Chapter 8: DevSecOps Using AWS Native Services
Chapter 9: DevSecOps Pipeline with AWS Services and Tools Popular Industry-Wide
Chapter 10: AIOps with Amazon DevOps Guru and Systems Manager OpsCenter
Other Books You May Enjoy

Customer reviews

Rating distribution: 5 out of 5 stars (11 ratings)
5 star: 100% | 4 star: 0% | 3 star: 0% | 2 star: 0% | 1 star: 0%

David G – Jun 09, 2022 – Rated 5/5
This book provides a solid foundation for those developers and engineers starting on their DevSecOps journey, as well as a comprehensive look at the workflows and tooling for more experienced engineers. The book walks through the concepts of building, scaling, and securing infrastructure and toolsets with modern systems, including serverless and container-based solutions. The author details the entire design, build, secure, and operate life cycle, showing different toolsets, how to use them, and what they are capable of providing in order to deliver scalable, reliable, secure, and maintainable solutions. It provides easy-to-follow examples that are instructive and practical, utilizing both native AWS services and third-party solutions to provide capabilities throughout the system life cycle.
Amazon Verified review
MGF – May 30, 2022 – Rated 5/5
The timing could not be better when I was offered a copy of Accelerating DevSecOps on AWS: Create secure CI/CD pipelines using Chaos and AIOps by Nikit Swaraj in exchange for an honest review. Being an Observability and Monitoring Subject Matter Expert, I have been doing AIOps research and study with the goal of enhancing AWS service owner teams' Monitoring and Ops workflow in a major high-tech company. While the book offers practical information in an extensive variety of subjects applicable to the AWS world, my primary focus was on learning AIOps. The chapter of the book dedicated to AIOps with Amazon DevOps Guru (a service powered by Machine Learning) and Systems Manager OpsCenter starts from the explanation of core Machine Learning (ML) and Artificial Intelligence (AI) concepts. It introduces AIOps and its importance in IT operations and defines essential ML and AI terms. Mr. Swaraj emphasized what new challenges contemporary IT operations face compared to ten years ago. I appreciated the conciseness of the theoretical materials, followed by a more hands-on part where you learn about the AWS AIOps tool DevOps Guru and its integration with Systems Manager OpsCenter. The theoretical materials are supported by practical experiments that the reader can perform to gain a solid conceptual understanding. You will learn how to detect different types of anomalies caused by injecting failure into an example application running in an EKS cluster. Though highly technical and detailed, the book is notable for its clarity and brevity. The information I learned from the book has helped me to be better oriented in the world of AIOps. I've already recommended it to my Summer Internship mentee for their AIOps research, and would highly recommend it to any DevOps engineer who strives to learn practical tools and improve operational flow at their company.
Amazon Verified review
Guru – Jul 02, 2022 – Rated 5/5
The book is an interesting exploration of various concepts, from blue-green deployments to managing security and deployments on AWS. The book follows a cookbook approach wherein you can go straight to the problem you are trying to solve, understand the steps outlined, and start implementing in your setup. I would recommend developers trying to get a good understanding of deployment practices to consider this book; experienced infrastructure professionals might also find this book a good reference to add to their toolkit.
Amazon Verified review
Chris Phillips – Jun 06, 2022 – Rated 5/5
Full disclosure, I know the author and he asked me to read it. But this is a great starting point to get on your DevSecOps journey on AWS.
Amazon Verified review
Durga – Jun 04, 2022 – Rated 5/5
This book explains exactly what it says. I've been using AWS for a few years now, primarily in AI/ML and MLOps, and wanted to branch out a little and learn more about DevSecOps. This book provides a very detailed, step-by-step practical guide on implementing DevSecOps on AWS. To be honest, I was not familiar with many of the tools used, and as the other reviews mention, this book contains a crisp and precise introduction to them. I did some additional reading to familiarize myself and deep dive into some of the additional services and concepts (App Mesh, Helm, FIS, and the industry-wide security tools are completely new to me) used in the book. That said, I found it extremely helpful to have an idea of what to look for and learn for a specific solution - instead of searching for random articles on Google. And lastly, the associated repo on GitHub is helpful and easy to follow along. I learn by doing rather than reading, and it helped to have an easily accessible repository. I do believe this knowledge will help me build better and more secure ML applications!
Amazon Verified review

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are a tax imposed on imported goods, charged by special authorities and bodies created by local governments, and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

Customs duty or localized taxes may be applicable on shipments to recipient countries outside the EU27. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $ 50, for you to receive a package, you will have to pay additional import tax of 19% which will be $ 9.50 to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over € 22, for you to receive a package, you will have to pay additional import tax of 18% which will be € 3.96 to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact [email protected] with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at [email protected] using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on [email protected] with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on [email protected] within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on [email protected] who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged, or with a book material defect, contact our Customer Relations Team on [email protected] within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal