DevOps with Windows Server 2016

Chapter 1.  Introducing DevOps

"Change is the only constant in life" is something I have been hearing since I was a child. I never understood the saying; school remained the same, the curriculum was the same for years, home was the same, and friends were the same. However, once I joined my first software company, it immediately struck me that yes, change is the only constant! Change is inevitable for any product or service, and this is amplified many times over when it relates to a software product, system, or service.

Software development is a complex undertaking comprising multiple processes and tools, and involves people from different departments. They all need to come together and work in a cohesive manner. With so much variability, the risks are high when delivering to the end customer. One small omission or misconfiguration and the application might come crashing down. This book is about adopting and implementing practices that reduce this risk considerably and ensure that high-quality software can be delivered to the customer again and again. This chapter is about explaining how DevOps brings people, processes, culture, and technology together to deliver software services to the customer effectively and efficiently. It is focused on the theory and concepts of DevOps. The remaining chapters will focus on realizing these concepts through practical examples using Microsoft Windows Server 2016 and Visual Studio Team Services.

This chapter will answer the following questions:

  • What is DevOps?
  • Why is DevOps needed?
  • What problems are resolved by DevOps?
  • What are its constituents, principles, and practices?

Before we get into the details of DevOps itself, let's understand some of the problems software companies face that are addressed by DevOps.

Software delivery challenges

There are inherent challenges when engaged in the activity of software delivery. It involves many people with different skills, using different tools and technologies, and following multiple processes. It is not easy to bring all of these together in a cohesive manner. Some of these challenges are mentioned in this section. Later, in subsequent chapters, we will see how these challenges are addressed with the adoption of DevOps principles and practices.

Resistance to change

Organizations work within the realms of economic, political, and social backdrops, and they have to constantly adapt themselves to a continuously changing environment. Economic changes might introduce an increase in competition in terms of price, quality of products and services, changing marketing strategies, and mergers and acquisitions. The political environment introduces changes in legislation, which has an impact on the rules and regulations for enterprises. The tax system and international trade policies are also examples of areas in which change can have an impact. Society decides which products and services are acceptable or preferred and which are discarded. Customers demand change on a constant basis. Their needs and requirements change often, and this manifests in the systems they are using. Organizations that are not adept at handling changes in their delivery processes, and that resist making changes to their products and features, eventually find themselves outdated and irrelevant. These organizations are not responsive to change. In short, the environment is ever changing, and organizations perish if they do not change along with it.

Rigid processes

Software organizations with a traditional mindset release their products and services on a yearly or multi-year basis. Their software development life cycle is long and their operations do not have many changes to deploy and maintain. Customers demand more, but they have to wait for the next release from the company. The organization is either not interested in, or does not have the capability of, releasing changes faster. Meanwhile, if a competitor is able to provide more and better features faster, customers will soon shift their loyalty and start using its products. The first organization will start losing customers, see reduced revenues, and fade away.

Isolated teams

Generally, there are multiple teams behind any system or service provided to the customer. Typically, there is a development team and an operations team. The development team is responsible for developing and testing the system, while the operations team is responsible for managing and maintaining the system on production. The operations team provides post-deployment services to the customer. These two teams have different skills, experience, mindset, and working culture. The charter of the development team is to develop newer features and upgrade existing ones. They constantly produce code and want to see it in production. However, the operations team is not comfortable with frequent changes. The stability of the existing environment is more important to them. There is a constant conflict between these two teams.

There is little or no collaboration and communication between these two teams. The development team often provides code artifacts to the operations team for deployment on production without helping them to understand the change. The operations team is not comfortable deploying the new changes since they are neither aware of the kind of changes coming in as part of a new release nor have confidence deploying the software. There is no proper hand-off between the development and operations teams. Often, the deployments fail on production and the operations team has to spend sleepless nights ensuring that the current deployment is either fixed or rolled back to a previous working release. Both the development and operations teams are working in silos. The development team does not treat the operations team as equivalent to itself. The operations team has no role to play in the software development life cycle, while the development team has no role to play in operations.

Monolithic design and deployments

Development goes on for multiple months before testing begins. The flow is linear and the approach is Waterfall, where the next stage in software development life cycle happens only when the prior stage is completed or nearing completion. Deployment is one giant exercise in deploying multiple artifacts on multiple servers based on documented procedures. Such practices have many inherent problems. There are a lot of features and configuration steps for large applications and everything needs to be done, in order, on multiple servers. Deploying a huge application is risky and fails when a small step is missed during deployment. It generally takes weeks to deploy a system such as this in production.

Manual execution

Software development enterprises often do not employ proper automation in their application lifecycle management. Developers tend to check in code only once a week, testing is manual, configuration of the environment and system is manual, and documentation is either missing or very dense, comprising hundreds of pages. The operations team follows the provided documentation to deploy the system manually on production. Often this results in a lot of downtime on production because small steps have been missed in deployment. Eventually, customers become dissatisfied with the services provided by the company. This also introduces human dependencies within the organization. If a person leaves the organization, their knowledge leaves with them, and a new person has to struggle significantly to gain the same level of expertise and knowledge.

Lack of innovation

Organizations start losing out to the competition when they are not flexible enough to meet customer expectations with newer and upgraded products and services. The result is falling revenues and profits, eventually making them nonexistent in the marketplace. Organizations that do not consistently innovate and update their products and services cannot keep their customers satisfied.

What is DevOps?

Today, there is no consensus in the industry regarding the definition of DevOps. Every organization has formulated its own definition of DevOps and has tried to implement it accordingly. Each has its own perspective, and tends to think it has implemented DevOps if it has automation in place, configuration management is enabled, it is using agile processes, or any combination thereof.

DevOps is about the delivery mechanism of software systems. It is about bringing people together, making them collaborate and communicate, working together toward a common goal and vision. It is about taking joint responsibility, accountability, and ownership. It is about implementing processes that foster a collective and service mindset. It enables a delivery mechanism that brings agility and flexibility to the organization. Contrary to popular belief, DevOps is not about tools, technology, and automation. Automation acts as an enabler: it helps implement agile processes, induces collaboration within teams, and helps deliver faster and better.

There are multiple definitions of DevOps available on the Internet, but none of them provides a complete definition. DevOps does not provide a framework or methodology. It is a set of principles and practices that, when employed within an organization, engagement, or project, help achieve the goals and vision of both DevOps and the organization. These principles and practices do not mandate any specific process, tools and technologies, or environment. DevOps provides guidance that can be implemented through any tool, technology, and process, although some technologies and processes might be more appropriate than others for achieving the vision of DevOps principles and practices.

Although DevOps practices can be implemented in any organization that provides services and products to customers, for the purposes of this book, we will look at DevOps from the perspective of a software development and operations department of any organization.

So, what is DevOps? DevOps is defined as follows:

  • It is a set of principles and practices
  • It brings both the developers and operations teams together from the start of the software system
  • It provides faster and more efficient end-to-end delivery of value to the end customer, again and again, in a consistent and predictable manner
  • It reduces time to market, thereby providing a competitive advantage

If you look closely at this definition of DevOps, it does not indicate or refer to any specific processes, tools, or technology. It does not prescribe any particular methodology or environment.

The goal of implementing DevOps principles and practices in any organization is to ensure that stakeholders' (including customers') demands and expectations are met efficiently and effectively.

Customers' demands and expectations are met when:

  • The customer gets the features they want
  • The customer gets the features they want, when they want
  • The customer gets faster updates on features
  • The quality of delivery is high

When an organization can meet these expectations, customers are happy and remain loyal to the organization. This in turn increases the market competitiveness of the organization, which results in bigger brand and market valuation. It has a direct impact on the top and bottom lines of the organization. The organization can invest more in innovation and customer feedback, bringing about continuous changes to its system and services in order to stay relevant.

The implementation of DevOps principles and practices in any organization is guided by its surrounding ecosystem. This ecosystem is made up of the industry and domain the organization belongs to.

We will look at these principles and practices in detail later in this chapter.

The core principles of DevOps are as follows:

  • Collaboration and communication
  • Agility toward change
  • Software design
  • Failing fast and early
  • Innovation and continuous learning
  • Automating processes and tools

The core practices of DevOps are as follows:

  • Continuous integration
  • Configuration management
  • Continuous deployment
  • Continuous delivery
  • Continuous learning

DevOps is not a new paradigm. However, it has gained a lot of popularity and traction in recent times. Its adoption is at its highest level so far, and more and more companies are undertaking this journey. I purposely mentioned DevOps as a journey because there are different levels of maturity within DevOps. While successfully implementing continuous deployment and delivery are considered the highest level of maturity in this journey, adopting source code control and agile software development are considered among the lowest.

One of the first things DevOps talks about is breaking the barriers between the development and operations teams. It brings close collaboration between multiple teams. It is about breaking the mindset that the development team is responsible only for writing code and passing it on to operations for deployment once it is tested. It is also about breaking the mindset that operations has no role to play in development activities. Operations should influence the planning of the product and should be aware of the features coming up for release. They should also continually provide feedback to development on any operational issues so that they can be fixed in subsequent releases. They should have some influence in the design of the system to improve its overall functionality. Similarly, development should help the operations team with the deployment of the system and solve incidents as and when they arise.

The definition talks about faster and more efficient end-to-end delivery of systems to stakeholders. It does not talk about how fast or efficient the delivery should be. It should be fast enough depending on the organization's domain, industry, customer segmentation, and more. For some organizations, fast enough could be quarterly, while for others it could be weekly. Both types are valid from a DevOps point of view and they can deploy any relevant processes and technology to achieve their particular goal. DevOps does not decide what that goal is. Organizations should identify the best implementation of DevOps principles and practices based on their overall project, engagement, and vision.

The definition also talks about end-to-end delivery. This means that everything from the planning and delivery of the system to the services and operations should be part of the DevOps implementation. The processes should be such that they allow for greater flexibility, modularity, and agility in the application development life cycle. While organizations are free to use a best-fit process such as Waterfall, Agile, Kanban, and more, typically organizations tend to favor agile processes with an iterations-based delivery. This allows for faster delivery in smaller units, which are far more testable and manageable compared to a large delivery.

DevOps talks about delivering software systems to the end customer again and again in a consistent and predictable manner. This means that organizations should continually deliver newer and upgraded features to the customer using automation. We cannot achieve consistency and predictability without the use of automation. Manual work should be reduced to zero to ensure a high level of consistency and predictability. The automation should also be end-to-end, to avoid failures. This also indicates that the system design should be modular, allowing faster delivery while remaining reliable, available, and scalable. Automated testing plays an important role in consistent and predictable delivery.

The result of implementing the previously mentioned practices and principles is that organizations are able to meet the expectations and demands of their customers. Such an organization can grow faster than its competition and further increase the quality and capability of their products and services through continuous innovation and improvement.

DevOps principles

DevOps is based on a set of foundational beliefs and processes. These form the pillars on which it is built and provide a natural ecosystem for the delivery of excellence within an organization. Let's look briefly into some of these principles.

Collaboration and communication

One of the prime tenets of DevOps is collaboration. Collaboration means that different teams come together to achieve a common objective. It defines clear roles and responsibilities, along with overall ownership and accountability, for the team. The team comprises both development and operations people. Together, they are responsible for delivering rapid, high-quality releases to the end customer.

Both teams are part of the end-to-end application life cycle process. The operations team contributes to the planning process for features, providing feedback on overall operational readiness and on issues regarding business applications and services. Concurrently, the development team must play a role in operational activities. They must assist in deploying the release to production and provide support in terms of fixing any production issues that arise. This kind of environment and ecosystem fosters continuous feedback and innovation. There is a shared vision, where everyone in the team is working toward common goals.

Flexible to change

Agility refers to the flexibility and adaptability of people, processes, and technology. People should have a mindset open to accepting change, playing different roles, and taking ownership and accountability. Processes would generally refer to the following:

  • Application lifecycle management
  • Development methodology
  • Software design

Application lifecycle management

Wikipedia defines application lifecycle management as follows:

Application lifecycle management (ALM) is the product lifecycle management (governance, development, and maintenance) of computer programs. It encompasses requirements management, software architecture, computer programming, software testing, software maintenance, change management, continuous integration, project management, and release management.

Application lifecycle management (ALM) refers to the management of planning, gathering requirements, building and hosting code, testing code in terms of code coverage, unit tests, versioning of code, releasing code to multiple environments, tracking and reporting, functional tests, environment provisioning, deployment to production, and operations for business applications and services. The operational aspects include monitoring, reporting, and feedback activities. Overall, ALM is a huge area and comprises multiple activities, tools, and processes. Special attention should be given to crafting appropriate application lifecycle steps to induce confidence in the final deployed system. For example, processes can be implemented which mandate that code cannot be checked into the source code repository if unit tests do not pass completely. ALM comprises multiple stages, such as planning, development, testing, deployment, and operations.
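As a hedged illustration of such a quality gate (the test folder path is an assumption, not something prescribed by the book), a PowerShell step like the following could run the unit tests and block the check-in when any of them fail:

    # Minimal sketch of a pre-check-in quality gate using Pester (module assumed to be installed).
    # The .\tests folder is a hypothetical location for the project's unit tests.
    $result = Invoke-Pester -Path .\tests -PassThru

    if ($result.FailedCount -gt 0) {
        Write-Error "$($result.FailedCount) unit test(s) failed; aborting the check-in."
        exit 1
    }

    # All tests passed; the check-in (for example, a 'git push') can proceed.
    exit 0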

In short, ALM defines a process to manage an application from conception to delivery and integrates multiple teams to achieve a common objective. The phases of a typical application lifecycle management process are shown in Figure 1. ALM is a continuous process that starts with the planning of an iteration, building and testing the iteration, deploying it on a production environment, and providing post-deployment services to the customer.

Feedback from customers and operations is passed on to the planning team, which eventually incorporates them into subsequent iterations, and this process loop continues.

Figure 1: Application lifecycle management phases

Development methodology

Development methodology should be flexible and elastic to enable multiple smaller iterations or sprints of delivery. Each sprint and iteration must be functionally tested. Smaller iterations help in completing specific smaller features and pushing them to production. This provides the team with a clear sense of the direction and scope of the work, raising expectations and giving them a sense of ownership over the release.

Software design

Software design should implement architectural principles that foster modularity, decomposition of large functionality into smaller features, reliability, high availability, scalability, audit capabilities, and monitoring, to name a few.

Automating processes and tools

Automation plays an important role in achieving overall DevOps goals. Without automation, DevOps cannot achieve its end objectives. Automation should be implemented for the entire application lifecycle management, from building the application, to delivery and deployment to the production environment. Automation brings trust and a high level of confidence in the output from each phase of the software development life cycle.

The probability that deliverables are of high quality, robust, and relatively risk-free is quite high. Automation also helps in the rapid delivery of a business application to multiple environments, because it is capable of running multiple build processes, executing thousands of unit tests, measuring code coverage across millions of lines of code, provisioning environments, deploying applications, and configuring them at the desired level.

Failing fast and early

At first glance, it seems weird to talk about failure in a DevOps book that is supposed to assist with the successful delivery of software. Trust me, it is not! Failing fast and early refers to the process of finding issues and risks as early as possible within the application life cycle. Issues that surface toward the end of the ALM cycle are expensive to deal with, because a lot of work has already been built on top of them. Such issues might require design and architectural changes, which can jeopardize the viability of the entire release. If issues can be found at the beginning of the cycle, they can be resolved without much impact on the release. Automation plays a big part in identifying issues early and fast.

Innovation and continuous learning

DevOps fosters a culture of innovation and continuous learning. There is a constant feedback flow regarding the good and bad, and what's working and what's not working in various environments. The feedback is used to try out different things, either to fix existing issues or find better alternatives. Through this exercise, there is a constant information flow about how to make things better and that in turn provides the impetus to find alternative solutions. Eventually, there are breakthrough findings and innovation, which can be further developed and brought to production.

DevOps practices

DevOps consists of multiple practices, each providing distinct functionality to the overall process. Figure 2 shows the relationship between them. Configuration management, continuous integration, and continuous deployment form the core practices that enable DevOps. When software delivery combines these three practices, we achieve continuous delivery. Continuous delivery is a mature organizational capability, and it depends on the maturity of the organization's configuration management, continuous integration, and continuous deployment.

Continuous feedback at all stages forms the feedback loop that helps provide superior services to customers. It runs across all DevOps practices. Let's take a closer look at each of these capabilities and DevOps practices:

Figure 2: DevOps practices and their activities

Configuration management

Software applications and services need a physical or virtual environment on which they can be deployed. Typically, the environment is an infrastructure comprising both hardware and an operating system on which software can be deployed. Software applications are decomposed into multiple services running on different servers, either on-premises or in the cloud. Each service has its own application and infrastructure configuration requirements. In short, both infrastructure and application are needed to deliver software systems to customers, and each has its own configuration. If the configuration drifts, the application might not work as expected, leading to downtime and failure. Modern ALM dictates the use of multiple stages and environments on which an application should be deployed with different configurations. For example, the application will be deployed to a development environment for developers to see the result of their work. It will also be deployed to multiple test environments, with different configurations, for executing different types of tests. It will then be deployed to a preproduction environment to conduct user acceptance tests, and finally, it will be deployed to a production environment. It is important to ensure that the application can be deployed to multiple environments without any manual changes to its configuration.

Configuration management provides a set of processes and tools which help ensure that each environment and application gets its own configuration. Configuration management tracks configuration items, and anything that changes from environment to environment should be treated as a configuration item. Configuration management also defines the relationships between configuration items and how changes in one configuration item will impact another.

Configuration management helps in the following ways:

  • Infrastructure as Code: When the process of provisioning infrastructure and its configuration is represented through code, and the same code goes through the application lifecycle process, it is known as Infrastructure as Code. Infrastructure as Code helps automate the provisioning and configuration of infrastructure. It also represents the entire infrastructure in code that can be stored in a repository and version-controlled. This allows you to use previous environment configurations when needed. It also enables the provisioning of an environment multiple times in a consistent and predictable manner. All environments provisioned in this way are consistent and equal at all stages of the ALM process (a minimal sketch using PowerShell DSC follows this list).
  • Deployment and configuration of an application: The deployment and configuration of an application is the next step after provisioning the infrastructure. An example of application deployment and configuration is to deploy a WebDeploy package on a server, deploy SQL Server schemas and data (bacpac) on another server, and change the SQL connection string on the web server to point to the appropriate SQL Server. Configuration management stores the application configuration values for each environment on which it is deployed.
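To make the Infrastructure as Code idea concrete, here is a minimal, hedged sketch using PowerShell Desired State Configuration; the node name, the Windows features installed, and the output path are illustrative assumptions rather than details taken from the book's sample application:

    # Minimal Infrastructure as Code sketch using PowerShell DSC.
    # The node name, features, and output path are illustrative assumptions.
    Configuration WebServerBaseline {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'localhost' {
            WindowsFeature IIS {
                Name   = 'Web-Server'
                Ensure = 'Present'
            }
            WindowsFeature AspNet45 {
                Name   = 'Web-Asp-Net45'
                Ensure = 'Present'
            }
        }
    }

    # Compile the configuration into a MOF file and push it to the node.
    WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
    Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose

Because the configuration is just code, it can be checked into source control, versioned, and replayed whenever an identical environment is needed.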

The configuration settings applied to environments and application should also be monitored. Records for expected and desired configuration along with the differences should be maintained. Any drift from this expected and desired configuration can make the application unavailable and unreliable. Configuration management is capable of finding the drift and reconfiguring the application and environment to their desired state.
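As a hedged sketch of how such drift could be detected and corrected with DSC (assuming the node was configured with a pushed DSC configuration such as the one shown earlier):

    # Check whether the node still matches its desired state.
    if (-not (Test-DscConfiguration -Verbose)) {
        Write-Warning 'Configuration drift detected; reapplying the desired state.'
        # Reapply the configuration that was last pushed to this node.
        Start-DscConfiguration -UseExisting -Wait -Verbose
    }

In practice, the Local Configuration Manager can also be set to the ApplyAndAutoCorrect mode so that drift is corrected automatically on a schedule.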

With automated configuration management in place, the team does not have to manually deploy and configure the environments and applications. The operations team is not dependent on the development team for deployment activities.

Another aspect of configuration management is source code control. Software comprises code, data, and configuration. Generally, team members working on an application change the same files simultaneously. The source code should be up to date at any point in time and should only be accessible by authenticated team members. The code and other artifacts are themselves configuration. Source code control helps increase collaboration and communication within the team, since each team member is aware of the other team members' activities. This ensures that conflicts are resolved at an early stage.
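As a hedged illustration of the workflow this enables (the branch name, remote name, and commit message are assumptions), a developer might integrate a small change as follows:

    # Typical small, frequent integration into a shared repository; all names are illustrative.
    git checkout -b feature/discount-calculation   # short-lived feature branch
    git add .
    git commit -m "Add discount calculation for bulk orders"
    git pull --rebase origin master                # pick up teammates' changes early
    git push origin feature/discount-calculation   # share the change for review and CI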

Continuous integration

Multiple developers write code that is stored and maintained in a common repository. The code is normally checked in or pushed to the repository when a developer has finished developing a feature. This can happen in a day, or it might take days or weeks. Developers might be working together on the same feature, and they might also follow the same practice of pushing/checking in code every few days or weeks. This can cause issues with code quality. One of the tenets of DevOps is to fail fast. Developers should check in/push their code to the repository often, as soon as it makes sense to do so. The code should be compiled frequently to check that developers have not introduced any bugs inadvertently and that the complete code base can be compiled at any point in time. If developers do not follow such practices, each of them can end up with stale code on their local workstation that is not integrated with the other developers' code. Eventually, when this stale and large code base from all the developers is integrated, it starts failing, and fixing the resulting issues becomes difficult and time-consuming.

Continuous integration solves these kinds of challenges. Continuous integration helps with the compilation and validation of any code pushed or checked in by a developer by taking it through a series of validation steps. Continuous integration creates a process flow consisting of multiple steps, comprising continuous automated builds and continuous automated tests. Normally, the first step is the compilation of the code. After successful compilation, each step is responsible for validating the code from a specific perspective. For example, when unit tests are executed on the compiled code, code coverage can be measured to check which code paths are covered. This could reveal whether comprehensive unit tests have been written or whether there is scope to add further unit tests. The result of continuous integration is deployment packages that can be used by continuous deployment for deployment to multiple environments.

Developers are encouraged to check in their code multiple times a day instead of after multiple days or weeks. Continuous integration initiates the execution of the build pipeline automatically as soon as the code is checked in or pushed. When all the activities comprising the build execute successfully without any errors, the build-generated artifacts are deployed to multiple environments. Although every system demands its own configuration of continuous integration, a typical example is shown in Figure 3.

Continuous integration increases the productivity of developers. They do not have to manually compile their code, run multiple types of tests one after another, and then create packages out of it. It also reduces the risk of introducing bugs into the code. It also provides early feedback to the developers about the quality of their code. Overall, the quality of deliverables is high and deliverables are delivered faster by adopting a continuous integration practice:

Figure 3: Sample continuous integration process

Build automation

Build automation consists of multiple tasks executing in sequence. Generally, the first task is responsible for fetching the latest source code from the repository. The source code might comprise multiple projects and files, which are compiled to generate artifacts such as executables, dynamic link libraries, assemblies, and more. Successful build automation indicates that there are no compile-time errors in the code.
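A hedged sketch of such a sequence driven from PowerShell follows; the solution name and the MSBuild path are assumptions, and on a hosted build agent these tools would normally be resolved by the build system:

    # Minimal build automation sketch; the solution name and tool paths are assumptions.
    $solution = 'MyApp.sln'

    & nuget.exe restore $solution
    if ($LASTEXITCODE -ne 0) { throw 'Package restore failed.' }

    & 'C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe' $solution /p:Configuration=Release /verbosity:minimal
    if ($LASTEXITCODE -ne 0) { throw 'Compilation failed.' }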

There can be more steps to build automation depending on the nature and type of a project.

Test automation

Test automation consists of tasks that are responsible for validating different aspects of the code. These tasks test the code from different perspectives and are executed in sequence. Generally, the first step is to run a series of unit tests on the code. Unit testing refers to the process of testing the smallest denomination of a feature to validate its behavior in isolation from other features. It can be automated or manual; however, the preference is automated unit testing.
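A hedged example of what such an automated unit test might look like with Pester is shown below; the function under test, Get-Discount, is hypothetical and exists only for illustration:

    # Pester unit test sketch; Get-Discount is a hypothetical function under test.
    # The 'Should -Be' syntax assumes Pester 4 or later (Pester 3 uses 'Should Be').
    Describe 'Get-Discount' {
        It 'applies a 10 percent discount to orders over 100' {
            Get-Discount -OrderTotal 150 | Should -Be 15
        }

        It 'applies no discount to small orders' {
            Get-Discount -OrderTotal 50 | Should -Be 0
        }
    }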

Code coverage is another aspect of automated testing that can be executed on code to find out how much of the code is executed while running the unit tests. It is generally represented as a percentage and refers to how much of the code is testable through unit testing. If code coverage is not close to 100 percent, it is either because the developer has not written unit tests for that behavior or the uncovered code is not required at all.
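With Pester, code coverage can be requested as part of the same test run; a hedged sketch follows (the paths are assumptions, and the parameters shown assume Pester 4, as Pester 5 configures coverage differently):

    # Run the test suite and measure how much of the scripts under .\src it exercises.
    $run = Invoke-Pester -Path .\tests -CodeCoverage .\src\*.ps1 -PassThru

    $covered  = $run.CodeCoverage.NumberOfCommandsExecuted
    $analyzed = $run.CodeCoverage.NumberOfCommandsAnalyzed
    'Code coverage: {0:P1}' -f ($covered / $analyzed)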

There can be more steps in test automation depending on the nature and type of the project. Successful execution of test automation, with no significant failures, should trigger the packaging tasks.

Application packaging

Packaging is a process of generating deployable artifacts such as MSI, NuGet, web-deploy packages, and database packages, as well as versioning them and storing them at a location such that they can be consumed by other pipelines and processes.
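A hedged packaging sketch using NuGet follows; the .nuspec file, the versioning scheme, and the output folder are assumptions (BUILD_BUILDNUMBER is the build number variable exposed by VSTS build agents):

    # Package the build output into a versioned NuGet package; names and paths are illustrative.
    $version = "1.0.$env:BUILD_BUILDNUMBER"
    & nuget.exe pack .\MyApp.nuspec -Version $version -OutputDirectory .\artifacts
    if ($LASTEXITCODE -ne 0) { throw 'Packaging failed.' }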

Continuous deployment

By the time the process reaches the deployment stage, continuous integration has ensured that there is a working application that can now be deployed to multiple environments for further quality checks and testing. Continuous deployment refers to the capability to deploy applications and services to preproduction and production environments through automation. For example, continuous deployment could provision and configure an environment, and then deploy and configure an application on top of it. After multiple validations, such as functional tests and performance tests, on a preproduction environment, the production environment is provisioned and configured, and the application is deployed to it through automation. There are no manual steps in the deployment process. Every deployment task is automated.

Continuous deployment should provision new environments or update existing ones. It should then deploy applications with the newer configuration on top of them.

All the environments are provisioned through automation using the principle of Infrastructure as Code. This ensures that all environments, be they development, test, preproduction, production, or any other environment, are similar. Similarly, the application is deployed through automation, ensuring that it is deployed uniformly across all environments. The configuration across these environments can differ depending on the application.
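As a hedged sketch of how the same Web Deploy package might be pushed to any environment with only its parameter file changing, consider the following; the server name, credentials, and file paths are assumptions, not values from the book:

    # Deploy one Web Deploy package to an environment, varying only the per-environment parameters.
    # The server name, user name, password variable, and paths are illustrative assumptions.
    $msdeploy  = 'C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe'
    $package   = '.\artifacts\MyApp.zip'
    $setParams = '.\config\MyApp.SetParameters.Staging.xml'

    & $msdeploy '-verb:sync' `
        "-source:package=$package" `
        "-dest:auto,computerName=https://staging-web01:8172/msdeploy.axd,userName=deploy,password=$env:DEPLOY_PASSWORD,authType=Basic" `
        "-setParamFile:$setParams" `
        '-allowUntrusted'

    if ($LASTEXITCODE -ne 0) { throw 'Deployment to the staging environment failed.' }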

Continuous deployment is generally integrated with continuous integration. When continuous integration has done its work by generating the final deployable packages, continuous deployment kicks in and starts its own pipeline. This pipeline is called the release pipeline. The release pipeline consists of multiple environments, each consisting of tasks responsible for provisioning the environment, configuring the environment, deploying applications, configuring those applications, executing operational validation on the environments, and testing the application in multiple environments. We will look at the release pipeline in greater detail in the next chapter and also in Chapter 10, Continuous Delivery and Deployment.

Employing continuous deployment provides immense benefits. There is a high degree of confidence in the overall deployment process, which helps ensure faster, risk-free releases on production. The chance of anything going wrong is drastically reduced. The team will have lower stress levels and rollback to a previous working environment is possible if there are issues with the current release:

Figure 4: Sample continuous deployment/release pipeline process

Although every system demands its own configuration of the release pipeline, a typical example is shown in Figure 4. It is important to note that, generally, provisioning and configuring multiple environments is part of the release pipeline, and approval should be sought before moving to the next environment. The approval process might be manual or automated, depending on the maturity of the organization.

Preproduction deployment

The release pipeline starts once a drop is available from continuous integration. The steps it should perform are to get all the artifacts from the drop, either create a new environment from scratch or use an existing one, and then deploy and configure the application on top of it. This environment can then be used for all kinds of testing and validation purposes.

Test automation

After deploying an application, a series of tests can be performed on the environment. One of the tests executed here is a functional test. Functional tests are primarily aimed at validating the feature completeness and functionality of the application. These tests are written from the requirements gathered from the customer. Another set of tests that can be executed relates to the scalability and availability of the application. This typically includes load tests, stress tests, and performance tests. It should also include operational validation of the infrastructure environment.
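A hedged sketch of a simple post-deployment operational validation run with Pester follows; the URL and the service checked are assumptions:

    # Post-deployment smoke tests; the URL and service name are illustrative assumptions.
    # 'Should -Be' assumes Pester 4 or later.
    Describe 'Operational validation of the test environment' {
        It 'serves the application home page with HTTP 200' {
            $response = Invoke-WebRequest -Uri 'http://test-web01/MyApp' -UseBasicParsing
            $response.StatusCode | Should -Be 200
        }

        It 'has the IIS (W3SVC) service running' {
            (Get-Service -Name 'W3SVC').Status | Should -Be 'Running'
        }
    }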

Staging environment deployment

This is very similar to the test environment deployment, with the only difference being that the configuration values for the environment and application will be different.

Acceptance tests

Acceptance tests are generally conducted by stakeholders of the application and can be manual or automated. This step is a validation from the customer's point of view regarding the correctness and completeness of an application's functionality.

Deployment to production

Once customers provide their approval, the same steps as those of test and staging environment deployment are executed, with the only difference being that the configuration values for the environment and application are specific to the production environment. Validation is conducted after deployment to ensure that the application is running according to expectations.

Continuous delivery

Continuous delivery and continuous deployment might sound similar to many readers; however, they are not the same. While continuous deployment refers to deployment to multiple environments, and finally to a production environment, through automation, continuous delivery is the ability to generate application packages in such a way that they are readily deployable to any environment. To generate artifacts that are readily deployable, continuous integration should be used to generate the application artifacts, and a new or existing environment should be used to deploy these artifacts and conduct functional tests, performance tests, and user acceptance tests through automation. Once these activities are executed successfully with no errors, the application package is considered readily deployable. Continuous delivery helps get feedback faster from both operations and the end user. This feedback can then be implemented in subsequent iterations.

Continuous learning

With all the previously mentioned DevOps practices, it is possible to create stable, robust, reliable, and performant business applications and deploy them automatically to a production environment. However, the benefits of DevOps will not last long if a continuous improvement and feedback principle is not in place. It is of the utmost importance that real-time feedback about the application's behavior is passed on to the development team from both the end users and the operations team.

Feedback should be passed to the teams, providing relevant information about what is going well and, importantly, what is not going well.

Applications should be built with monitoring, auditing, and telemetry in mind. The architecture and design should support these. The operations team should collect telemetry information from the production environment, capture any bugs and issues, and pass this information on to the development team such that they can be fixed in subsequent releases. This process is shown in Figure 5.
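As a hedged example of the kind of telemetry an operations team might capture and feed back to developers (the counters and output path are assumptions; a real deployment would typically rely on a monitoring platform rather than an ad hoc script):

    # Capture basic performance telemetry from a production node; counters and path are assumptions.
    $counters = '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes',
                '\Web Service(_Total)\Current Connections'

    Get-Counter -Counter $counters -SampleInterval 30 -MaxSamples 20 |
        Export-Counter -Path 'C:\Telemetry\MyApp-baseline.blg' -FileFormat BLG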

Continuous learning helps make the application robust and resilient to failures. It also helps make sure that the application is meeting consumer requirements:

Figure 5: Sample continuous learning process

Measuring DevOps

Once DevOps practices and principles are implemented, the next step is to find out whether they are providing any tangible benefits to the organization. To find the impact of DevOps on delivering changes to customers, appropriate monitoring, auditing, and metrics collection should be developed and deployed. This telemetry should be measured on an ongoing basis, and the data should be baselined regularly so that effective comparisons can be made in the future. After implementing DevOps, the metrics should be captured over a period of time and then compared with the baseline (a minimal comparison sketch follows the list of metrics below). This comparison should uncover the effectiveness of DevOps in the organization, and appropriate corrective measures should be undertaken.

Some of the important metrics that should be tracked are as follows:

  • Number of deployments: If the number of deployments is higher than it was prior to the DevOps implementation, it means that continuous integration, continuous delivery, and continuous deployment are favoring overall delivery to production.
  • Number of daily code check-ins/pushes: If this number is comparatively high, it denotes that developers are taking advantage of continuous integration and that the possibility of code conflicts and staleness is reduced.
  • Number of releases in a month: A higher number is testimony to the fact that there is greater confidence in delivering changes to production and that DevOps is helping to achieve this.
  • Number of defects/bugs/issues on production: This number should be lower than the pre-DevOps figure. If it remains considerable, it reflects that testing within the continuous integration and continuous delivery pipeline is not comprehensive and needs to be strengthened further, and that the quality of delivery is low.
  • Number of failures in continuous integration: Also known as broken builds. A high number indicates problems with the quality of the code being checked in.
  • Number of failures in the release pipeline / continuous deployment: If this number is high, it indicates that the code is not meeting feature requirements, or that the automation of environment provisioning has issues.
  • Code coverage percentage: If this number is low, it indicates that unit tests do not cover all scenarios comprehensively. It could also mean that there are code smells, with higher cyclomatic complexity.
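A hedged sketch of comparing captured metrics against a stored baseline is shown below; the file locations, property names, and the metrics chosen are hypothetical and exist only for illustration:

    # Compare current DevOps metrics against a stored baseline; files and properties are hypothetical.
    $baseline = Get-Content '.\metrics\baseline.json' -Raw | ConvertFrom-Json
    $current  = Get-Content '.\metrics\current.json'  -Raw | ConvertFrom-Json

    $delta = $current.DeploymentsPerMonth - $baseline.DeploymentsPerMonth
    "Deployments per month changed by $delta since the baseline."

    if ($current.ProductionDefects -gt $baseline.ProductionDefects) {
        Write-Warning 'Production defects have increased; review test coverage in the CI pipeline.'
    }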

Summary

In this chapter, we looked at some of the problems plaguing software organizations with regard to the delivery of services to their end users. We covered the definition of DevOps and how DevOps helps eliminate these problems. We also went through the principles and practices of DevOps, briefly explaining their purpose and usefulness. This chapter forms the foundation and backbone for the remaining chapters. Later chapters in the book will be a step-by-step realization of these principles and tenets. Although this chapter was heavy on theory, subsequent chapters will start delving into the technology and practical steps needed to implement DevOps. You should by now have a good grasp of DevOps concepts. In the following chapter, we will cover the automation tools, languages, and technologies that will help in implementing DevOps principles in practice.


Key benefits

  • This practical learning guide will improve your application lifecycle management and help you manage environments efficiently
  • Showcase, through a sample application, ways to apply DevOps principles and practices in the real world
  • Implement DevOps using the latest technologies in Windows Server 2016, such as Windows Containers, Docker, and Nano Server

Description

Delivering applications swiftly is one of the major challenges faced in fast-paced business environments. DevOps with Windows Server 2016 addresses this challenge: it helps organizations respond faster and handle competitive pressure by replacing error-prone manual tasks with automation. This book is a practical description and implementation of DevOps principles and practices using the features provided by Windows Server 2016 and VSTS vNext. It jumps straight into explaining the relevant tools and technologies needed to implement DevOps principles and practices. It covers all the major DevOps practices and principles and takes readers through them, from envisioning a project up to operations and beyond. It uses the latest and upcoming concepts and technologies from Microsoft and open source, such as Docker, Windows Containers, Nano Server, DSC, Pester, and VSTS vNext. By the end of this book, you will be well aware of DevOps principles and practices and will have implemented them for a sample application using the latest technologies on the Microsoft platform. You will be ready to start implementing DevOps within your own project or engagement.

Who is this book for?

This book is for .NET developers and system administrators who have a basic knowledge of Windows Server 2016 and are now eager to implement DevOps at work using Windows Server 2016. Knowledge of PowerShell, Azure, and containers will help.

What you will learn

  • Take a deep dive into the fundamentals, principles, and practices of DevOps
  • Achieve an end-to-end DevOps implementation
  • Execute source control management using GitHub and VSTS vNext
  • Automate the provisioning and configuration of infrastructure
  • Build and release pipelines
  • Measure the success of DevOps through application instrumentation and monitoring

Product Details

Publication date: Mar 24, 2017
Length: 558 pages
Edition: 1st
Language: English
ISBN-13: 9781786463340
Vendor: Microsoft

Table of Contents

11 Chapters
1. Introducing DevOps
2. DevOps Tools and Technologies
3. DevOps Automation Primer
4. Nano, Containers, and Docker Primer
5. Building a Sample Application
6. Source Code Control
7. Configuration Management
8. Configuration Management and Operational Validation
9. Continuous Integration
10. Continuous Delivery and Deployment
11. Monitoring and Measuring
