Containers for Developers Handbook

Modern Infrastructure and Applications with Docker

Software engineering and development are always evolving, introducing new technologies into their architectures and workflows. Software containers appeared more than a decade ago, becoming particularly popular over the last five years thanks to Docker, which made the concept mainstream. Currently, every enterprise manages its container-based application infrastructure in production on both cloud and on-premises distributed infrastructures. This book will teach you how to increase your development productivity using software containers so that you can create, test, share, and run your applications. You will use a container-based workflow, and your final application artifact will be a Docker image-based deployment, ready to run in production environments.

This chapter will introduce software containers in the context of the current software development culture, which needs faster software supply chains made of moving, distributed pieces. We will review how containers work and how they fit into modern application architectures based on distributed components with very specific functionalities (microservices). This allows developers to choose the best language for each application component and distribute the total application load. We will learn about the kernel features that make software containers possible and how to create, share, and run application components as software containers. At the end of this chapter, we will review the different tools that can help us work with software containers, with specific use cases for your laptop, desktop computer, and servers.

In this chapter, we will cover the following topics:

  • Evolution of application architecture, from monoliths to distributed microservice architectures
  • Developing microservice-based applications
  • How containers fit in the microservices model
  • Understanding the main concepts, features, and components of software containers
  • Comparing virtualization and containers
  • Building, sharing, and running containers
  • Explaining Windows containers
  • Improving security using software containers

Technical requirements

This book will teach you how to use software containers to improve your application development. We will use open source tools for building, sharing, and running containers, along with a few commercial ones that don’t require licensing for non-professional use. Also included in this book are some labs to help you practically understand the content that we’ll work through. These labs can be found at https://github.com/PacktPublishing/Containers-for-Developers-Handbook/tree/main/Chapter1. The Code In Action video for this chapter can be found at https://packt.link/JdOIY.

From monoliths to distributed microservice architectures

Application architectures are continuously evolving due to technological improvements. Throughout the history of computation, every time a technical gap is resolved in hardware and software engineering, software architects rethink how applications can be improved to take advantage of the new developments. For example, network speed increases made distributing application components across different servers possible, and nowadays, it’s not even a problem to distribute these components across data centers in multiple countries.

To take a quick look at how computers were adopted by enterprises, we must go back in time to the old mainframe days (before the 1990s). This can be considered the base for what we call unitary architecture – one big computer with all the processing functionality, accessed by users through terminals. Following this, the client-server model became very popular as technology also advanced on the user side. Server technologies improved while clients gained more and more functionality, freeing up server load for publishing applications. We consider both models monolithic, as all application components run on one server; even if the databases are decoupled from the rest of the components, running all important components on a dedicated server is still considered monolithic. Both of these models were very difficult to upgrade when performance started to drop; in these cases, newer hardware with higher specifications was always required. These models also suffered from availability issues, meaning that any maintenance task required on either the server or application layer would probably lead to service outages, affecting normal system uptime.

Exploring monolithic applications

Monolithic applications are those in which all functionalities are provided by just one component, or a set of them so tightly integrated that they cannot be decoupled from one another. This makes them hard to maintain. They weren’t designed with reusability or modularity in mind, meaning that every time developers need to fix an issue, add some new functionality, or change an application’s behavior, the entire application is affected due to, for example, having to recompile the whole application’s code.

Providing high availability to monolithic applications required duplicated hardware, quorum resources, and continuous visibility between application nodes. This may not have changed too much today but we have many other resources for providing high availability. As applications grew in complexity and gained responsibility for many tasks and functionalities, we started to decouple them into a few smaller components (with specific functions such as the web server, database, and more), although core components were kept immutable. Running all application components together on the same server was better than distributing them into smaller pieces because network communication speeds weren’t high enough. Local filesystems were usually used for sharing information between application processes. These applications were difficult to scale (more hardware resources were required, usually leading to acquiring newer servers) and difficult to upgrade (testing, staging, and certification environments before production require the same hardware or at least compatible ones). In fact, some applications could run only on specific hardware and operating system versions, and developers needed workstations or servers with the same hardware or operating system to be able to develop fixes or new functionality for these applications.

Now that we know how applications were designed in the early days, let’s introduce virtualization in data centers.

Virtual machines

The concept of virtualization – providing a set of physical hardware resources for specific purposes – was already present in the mainframe days before the 1990s, but in those days, it was closer to the definition of time-sharing at the compute level. The concept we commonly associate with virtualization comes from the introduction of the hypervisor and the new technology introduced in the late 1990s that allowed for the creation of complete virtual servers running their own virtualized operating systems. This hypervisor software component was able to virtualize and share host resources in virtualized guest operating systems. In the 1990s, the adoption of Microsoft Windows and the emergence of Linux as a server operating system in the enterprise world established x86 servers as the industry standard, and virtualization helped the growth of both of these in our data centers, improving hardware usage and server upgrades. The virtualization layer simplified virtual hardware upgrades when applications required more memory or CPU and also improved the process of providing services with high availability. Data centers became smaller as newer servers could run dozens of virtual servers, and as physical servers’ hardware capabilities increased, the number of virtualized servers per node increased.

In the late 1990s, servers became services. This means that companies started to think about the services they provided instead of the way they provided them. Cloud providers arrived to offer services to small businesses that didn’t want to acquire and maintain their own data centers. Thus, a new architecture model was created, which became pretty popular: the cloud computing infrastructure model. Amazon launched Amazon Web Services (AWS), providing storage, computation, databases, and other infrastructure resources, and pretty soon after that, Elastic Compute Cloud entered the arena of virtualization, allowing you to run your own servers with a few clicks. Cloud providers also allowed users to use their well-documented application programming interfaces (APIs) for automation, and the concept of Infrastructure as Code (IaC) was introduced: we were able to create our virtualization instances using programmatic and reusable code. This model also changed the service/hardware relationship, and what seemed like a good idea at first – using cloud platforms for every enterprise service – became a problem for big enterprises, which quickly saw costs increase due to network bandwidth usage and insufficient control over their use of cloud resources. Controlling cloud service costs soon became a priority for many enterprises, and many open source projects started with the premise of providing cloud-like infrastructures. Infrastructure elasticity and easy provisioning are the keys to these projects. OpenStack was the first, split into smaller projects, each focused on a different functionality (storage, networking, compute, provisioning, and so on). The idea of having on-premises cloud infrastructure led software and infrastructure vendors into new alliances with each other, in the end providing new technologies for data centers with the required flexibility and resource distribution. They also provided APIs for quickly deploying and managing provisioned infrastructure, and nowadays, we can provision either cloud infrastructure resources or resources in our data centers using the same code with few changes.

Now that we have a good idea of how server infrastructures work today, let’s go back to applications.

Three-tier architecture

Even with these decoupled infrastructures, applications can still be monoliths if we don’t prepare them for separation into different components. Elastic infrastructures allow us to distribute resources and it would be nice to have distributed components. Network communications are essential and technological evolution has increased speeds, allowing us to consume network-provided services as if they were local and facilitating the use of distributed components.

Three-tier architecture is a software application architecture where the application is decoupled into three logical and physical computing layers. We have the presentation tier, or user interface; the application tier, or backend, where data is processed; and the data tier, where the data used by the application is stored and managed, such as in a database. This model was used even before virtualization arrived on the scene, but you can imagine the improvement of being able to distribute application components across different virtual servers instead of increasing the number of servers in your data center.

Just to recap before continuing our journey: the evolution of infrastructure and network communications has allowed us to run component-distributed applications, but we just have a few components per application in the three-tier model. Note that in this model, different roles are involved in application maintenance as different software technologies are usually employed. For example, we need database administrators, middleware administrators, and infrastructure administrators for systems and network communications. In this model, although we are still forced to use servers (virtual or physical), application component maintenance, scalability, and availability are significantly improved. We can manage each component in isolation, executing different maintenance tasks and fixes and adding new functionalities decoupled from the application core. In this model, developers can focus on either frontend or backend components. Some coding languages are specialized for each layer – for example, JavaScript was the language of choice for frontend developers (although it evolved for backend services too).

As Linux systems grew in popularity in the late 1990s, applications were distributed into different components, and eventually, different applications working together and running on different operating systems became a new requirement. Shared files, provided by network filesystems using either network-attached storage (NAS) or more complex storage area network (SAN) backends, were used at first, but Simple Object Access Protocol (SOAP) and other message queueing technologies helped applications distribute data between components and manage their information without filesystem interactions. This helped decouple applications into more and more distributed components running on top of different operating systems.

Microservices architecture

The microservices architecture model goes a step further, decoupling applications into smaller pieces with enough functionality to be considered components. This model allows us to manage a completely independent component life cycle, freeing us to choose whatever coding language fits best with the functionality in question. Application components are kept light in terms of functionality and content, which should lead to them using fewer host resources and responding faster to start and stop commands. Faster restarts are key to resilience and help us keep our applications up with fewer outages. Application health should not depend on component-external infrastructure; we should improve components’ logic and resilience so that they can start and stop as fast as possible. This means that we can ensure that changes to an application are applied quickly and that, in the case of failure, the required processes will be up and running in seconds. This also helps in managing the application components’ life cycle, as we can upgrade components very fast and prepare circuit breakers to manage stopped dependencies.

Microservices use the stateless paradigm; therefore, application components should be stateless. This means that a microservice’s state must be abstracted from its logic or execution. This is key to being able to run multiple replicas of an application component, allowing us to run them distributed on different nodes from a pool.

This model also introduced the concept of run everywhere, where an application should be able to run its components on either cloud or on-premises infrastructures, or even a mix of both (for example, the presentation-layer components could run on cloud infrastructure while the data resides in our data center).

Microservices architecture provides the following helpful features:

  • Applications are decoupled into different smaller pieces that provide different features or functionalities; thus, we can change any of them at any time without impacting the whole application.
  • Decoupling applications into smaller pieces lets developers focus on specific functionalities and allows them to use the most appropriate programming language for each component.
  • Interaction between application components is usually provided via Representational State Transfer (REST) API calls using HTTP. RESTful systems aim for fast performance and reliability and scale well.
  • Developers describe which methods, actions, and data they provide in their microservice, which are then consumed by other developers or users. Software architects must standardize how application components talk with each other and how microservices are consumed.
  • Distributing application components across different nodes allows us to group microservices into nodes for the best performance, closer to data sources and with better security. We can create nodes with different features to provide the best fit for our application components.

Now that we’ve learned what microservices architecture is, let’s take a look at its impact on the development process.

Developing distributed applications

Monolithic applications, as we saw in the previous section, are applications in which all functionalities run together. Most of these applications were created for specific hardware, operating systems, libraries, binary versions, and so on. To run these applications in production, you need at least one dedicated server with the right hardware, operating system, libraries, and so on, and developers require a similar node architecture and resources even just for fixing possible application issues. Adding to this, the pre-production environments for tasks such as certification and testing will multiply the number of servers significantly. Even if your enterprise had the budget for all these servers, any maintenance task resulting from an upgrade of any operating system-related component in production had to be replicated in all other environments. Automation helps in replicating changes between environments, but this is not easy: you have to replicate environments and maintain them. On the other hand, new node provisioning could take months in the old days (preparing the specifications for a new node, drawing up the budget, submitting it to your company’s approvals workflow, looking for a hardware provider, and so on). Virtualization helped system administrators provision new nodes for developers faster, and automation (provided by tools such as Chef, Puppet, and, my favorite, Ansible) allowed for the alignment of changes between all environments. Therefore, developers were able to obtain their development environments quickly and ensure they were using an aligned version of system resources, improving the process of application maintenance.

Virtualization also worked very well with the three-tier application architecture. It was easy to run application components for developers in need of a database server to connect to while coding new changes. The problem with virtualization comes from the concept of replicating a complete operating system with server application components when we only need the software part. A lot of hardware resources are consumed for the operating system alone, and restarting these nodes takes some time as they are a complete operating system running on top of a hypervisor, itself running on a physical server with its own operating system.

However, developers were hampered by outdated operating system releases and packages, making it difficult for them to evolve their applications. System administrators started to manage hundreds of virtual hosts, and even with automation, they weren’t able to keep operating systems and application life cycles aligned. Provisioning virtual machines on cloud providers using their Infrastructure-as-a-Service (IaaS) platforms or their Platform-as-a-Service (PaaS) environments and scripting the infrastructure using their APIs (IaC) helped, but the problem wasn’t fully resolved due to the quickly growing number of applications and required changes. The application life cycle changed from one or two updates per year to dozens per day.

Developers started to use cloud-provided services and scripts, and applications quickly became more important than the infrastructure on which they were running, which today seems completely normal and logical. Faster network communications and distributed reliability made it easier to start deploying our applications anywhere, and data centers became smaller. We can say that developers started this movement, and it became so popular that we ended up decoupling application components from the underlying operating systems.

Software containers are the evolution of process isolation features developed throughout computing history. Mainframe computers allowed us to share CPU time and memory resources many years ago. Chroot and jail environments were common ways of sharing operating system resources with users, who were able to use all the binaries and libraries prepared for them by system administrators in BSD operating systems. On Solaris systems, we had zones as resource containers, which acted as completely isolated virtual servers within a single operating system instance.

So, why don’t we just isolate processes instead of full operating systems? This is the main idea behind containers. Containers use kernel features to provide process isolation at the operating system level, and all processes run on the same host but are isolated from each other. So, every process has its own set of resources sharing the same host kernel.

Linux kernels have featured this design of process grouping since the late 2000s in the form of control groups (cgroups). This feature allows the Linux kernel to manage, restrict, and audit groups of processes.

Another very important Linux kernel feature that’s used with containers is kernel namespaces, which allow Linux to run processes wrapped with their process hierarchy, along with their own network interfaces, users, filesystem mounts, and inter-process communication. Using kernel namespaces and control groups, we can completely isolate a process within an operating system. It will run as if it were on its own, using its own operating system and limited CPU and memory (we can even limit its disk I/O).
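To see these two features in action without any container tooling, here is a minimal sketch using the unshare utility from util-linux, which is available on most Linux distributions (the exact flags supported depend on your kernel and distribution):

```bash
# Start a shell in new PID and mount namespaces (run as root or with
# the required privileges). --fork makes the shell a child that becomes
# PID 1 in the new namespace; --mount-proc remounts /proc so process
# listings reflect the new namespace only.
sudo unshare --pid --fork --mount-proc /bin/bash

# Inside the new namespace, only this shell and its children are visible
ps -ef
```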

The Linux Containers (LXC) project took this idea further and created the first working implementation of it. This project is still available and in active development, and it was the key to what we now know as Docker containers. LXC introduced terms such as templates to describe the creation of encapsulated processes using kernel namespaces.

Docker took all these concepts and created an open source project (and the Docker Inc. company behind it) that made it easy to run software containers on our systems. Containers ushered in a great revolution, just as virtualization did more than 20 years ago.

Going back to microservices architecture, the ideal application decoupling would mean running defined and specific application functionalities as completely standalone and isolated processes. This led to the idea of running microservice applications’ components within containers, with minimum operating system overhead.

What are containers?

We can define a container as a process with all its requirements isolated using cgroups and namespace kernel features. A process is the way we execute a task within the operating system. If we define a program as the set of instructions developed using a programming language, included in an executable format on disk, we can say that a process is a program in action.

The execution of a process involves the use of some system resources, such as CPU and memory, and although it runs on its own environment, it can use the same information as other processes sharing the same host system.

Operating systems provide tools for manipulating the behavior of processes during execution, allowing system administrators to prioritize the critical ones. Each process running on a system is uniquely identified by a Process Identifier (PID). A parent-child relationship is established when one process executes a new process (or creates a new thread) during its execution. The new process (or sub-process) that’s created will have the previous one as its parent, and so on. The operating system stores information about process relations using PIDs and parent PIDs. Processes inherit the ownership of the user who runs them, so users own and manage their own processes. Only administrators and privileged users can interact with other users’ processes. This behavior also applies to child processes created by our executions.

Each process runs in its own environment, and we can manipulate its behavior using operating system features. Processes can access files as needed and use file descriptors during execution to manage these filesystem resources.

The operating system kernel manages all processes, scheduling them on its physical or virtualized CPUs, giving them appropriate CPU time, and providing them with memory or network resources (among others).

These definitions are common to all modern operating systems and are key for understanding software containers, which we will discuss in detail in the next section.

Understanding the main concepts of containers

We have learned that as opposed to virtualization, containers are processes running in isolation and sharing the host operating system kernel. In this section, we will review the components that make containers possible.

Kernel process isolation

We already introduced kernel process namespace isolation as a key feature for running software containers. Operating system kernels provide namespace-based isolation. This feature has been present in Linux kernels since 2006 and provides different layers of isolation associated with the properties or attributes a process has when it runs on a host. When we apply these namespaces to processes, they will run with their own set of properties and will not see the other processes running alongside them. Hence, kernel resources are partitioned such that each set of processes sees a different set of resources. Resources may exist in multiple spaces, and processes may share them.

Containers, as they are host processes, run with their own associated set of kernel namespaces, such as the following:

  • Processes: The container’s main process is the parent of others within the container. All these processes share the same process namespace.
  • Network: Each container receives a network stack with unique interfaces and IP addresses. Processes (or containers) sharing the same network namespace will get the same IP address. Communications between containers pass through host bridge interfaces.
  • Users: Users within containers are unique; therefore, each container gets its own set of users, but these users are mapped to real host user identifiers.
  • Inter-process communication (IPC): Each container receives its own set of shared memory, semaphores, and message queues so that it doesn’t conflict with other processes on the host.
  • Mounts: Each container mounts a root filesystem; we can also attach remote and host local mounts.
  • Unix time-sharing (UTS): Each container is assigned its own hostname (the system clock is shared with the underlying host)

Processes running inside a container sharing the same process kernel namespace will receive PIDs as if they were running alone inside their own kernel. The container’s main process is assigned PID 1 and other sub-processes or threads will get subsequent IDs, inheriting the main process hierarchy. The container will die if the main process dies (or is stopped).

The following diagram shows how our system manages container PIDs inside the container’s PID namespace (represented by the gray box) and outside, at the host level:

Figure 1.1 – Schema showing a hierarchy of PIDs when you execute an NGINX web server with four worker processes

In the preceding figure, the main process running inside a container is assigned PID 1, while the other processes are its children. The host runs its own PID 1 process and all other processes run in association with this initial process.
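The following is a small sketch of how to observe this behavior with Docker, assuming Docker is installed and using the official nginx image (the container name webserver is arbitrary):

```bash
# Run an NGINX container in the background
docker run -d --name webserver nginx

# Inside the container's PID namespace, the NGINX master process is PID 1
docker exec webserver cat /proc/1/comm

# From the host's point of view, the same processes have regular,
# much higher PIDs
docker top webserver
```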

Control groups

A cgroup is a feature provided by the Linux kernel that enables us to limit and isolate the host resources associated with processes (such as CPU, memory, and disk I/O). This provides the following features:

  • Resource limits: A cgroup limits the amount of host resources (such as CPU or memory) that a process can use
  • Prioritization: If resource contention is observed, the amount of host resources (CPU, disk, or network) that a process can use, compared to processes in another cgroup, can be controlled
  • Accounting: Cgroups monitor and report resource usage at the cgroup level
  • Control: We can manage the status of all processes in a cgroup

The isolation provided by cgroups will not allow containers to bring down a host by exhausting its resources. An interesting fact is that you can use cgroups without software containers just by mounting a cgroup filesystem (a filesystem of type cgroup), adjusting the CPU limits of this group, and finally adding a set of PIDs to this group. This procedure applies to either cgroups v1 or the newer cgroups v2, as shown in the sketch that follows.
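The following is a minimal sketch of that manual procedure on a host using cgroups v2, where the unified hierarchy is usually mounted at /sys/fs/cgroup (paths, controller availability, and required privileges vary between distributions):

```bash
# Create a new cgroup under the unified hierarchy (run as root)
mkdir /sys/fs/cgroup/demo

# Allow the group 20000us of CPU time per 100000us period (~20% of one CPU)
echo "20000 100000" > /sys/fs/cgroup/demo/cpu.max

# Limit the group's memory usage to 256 MiB
echo 268435456 > /sys/fs/cgroup/demo/memory.max

# Move an existing process into the group (replace <PID> with a real PID)
echo <PID> > /sys/fs/cgroup/demo/cgroup.procs
```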

Container runtime

A container runtime, or container engine, is a piece of software that runs containers on a host. It is responsible for downloading container images from a registry to create containers, monitoring the resources available on the host to run the images, and managing the isolation layers provided by the operating system. The container runtime also reviews the current status of containers and manages their life cycle, restarting them when their main process dies (if we have declared that they should be restarted whenever this happens).

We generally group container runtimes into low-level runtimes and high-level runtimes.

Low-level runtimes are those simple runtimes focused only on software container execution. We can consider runC and crun in this group. Created by Docker and the Open Container Initiative (OCI), runC is still the de facto standard. Red Hat created crun, which is faster than runC with a lower memory footprint. These low-level runtimes do not require container images to run – we can use a configuration file and a folder with our application and all its required files (which is the content of a Docker image, but without any metadata information). This folder usually contains a file structure resembling a Linux root filesystem, which, as we mentioned before, is everything required by an application (or component) to work. Imagine that we execute the ldd command on our binaries and libraries and iterate this process with all its dependencies, and so on. We will get a complete list of all the files strictly required for the process and this would become the smallest image for the application.
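As a hedged illustration of this idea, the following sketch runs a container with runC alone, reusing the filesystem of a BusyBox image only as a convenient way to obtain a minimal root filesystem (runc must be installed, and the last step assumes root privileges):

```bash
# Prepare an OCI bundle: a folder with a root filesystem inside
mkdir -p mycontainer/rootfs
docker export "$(docker create busybox)" | tar -C mycontainer/rootfs -xf -

# Generate a default OCI runtime configuration file (config.json)
cd mycontainer
runc spec

# Run the container described by config.json and rootfs/ (starts a shell by default)
sudo runc run mydemo
```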

High-level container runtimes usually implement the Container Runtime Interface (CRI), a specification created by the Kubernetes project to make container orchestration more runtime-agnostic. In this group, we have Docker, CRI-O, and Windows/Hyper-V containers.

The CRI interface defines the rules so that we can integrate our container runtimes into container orchestrators, such as Kubernetes. Container runtimes should have the following characteristics:

  • Be capable of starting/stopping pods
  • Deal with all containers (start, pause, stop, and delete them)
  • Manage container images
  • Provide metrics collection and access to container logs

The Docker container runtime became mainstream in 2016, making the execution of containers very easy for users. CRI-O was created explicitly for the Kubernetes orchestrator by Red Hat to allow the execution of containers using any OCI-compliant low-level runtime. High-level runtimes provide tools for interacting with them, and that’s why most people choose them.

A middle ground between low-level and high-level container runtimes is provided by Containerd, which is an industry-standard container runtime. It runs on Linux and Windows and can manage the complete container life cycle.

The technology behind runtimes is evolving very fast; we can even improve the interaction between containers and hosts using sandboxes (gVisor from Google) and virtualized runtimes (Kata Containers). The former increases containers’ isolation by not sharing the host’s kernel with them. A specific kernel (the small unikernel with restricted capabilities) is provided to containers as a proxy to the real kernel. Virtualized runtimes, on the other hand, use virtualization technology to isolate a container within a very small virtual machine. Although both cases add some load to the underlying operating system, security is increased as containers don’t interact directly with the host’s kernel.

Container runtimes only review the main process execution. If any other process running inside a container dies and the main process isn’t affected, the container will continue running.

Kernel capabilities

Starting with Linux kernel release 2.2, the operating system divides process privileges into distinct units, known as capabilities. These capabilities can be enabled or disabled by the operating system and by system administrators.

Previously, we learned that containers run processes in isolation using the host’s kernel. However, it is important to know that only a restricted set of these kernel capabilities are allowed inside containers unless they are explicitly declared. Therefore, containers improve their processes’ security at the host level because those processes can’t do anything they want. The capabilities that are currently available inside a container running on top of the Docker container runtime are SETPCAP, MKNOD, AUDIT_WRITE, CHOWN, NET_RAW, DAC_OVERRIDE, FOWNER, FSETID, KILL, SETGID, SETUID, NET_BIND_SERVICE, SYS_CHROOT, and SETFCAP.

This set of capabilities allows, for example, processes inside a container to attach and listen on ports below 1024 (the NET_BIND_SERVICE capability) or use ICMP (the NET_RAW capability).

If our process inside a container requires us to, for example, create a new network interface (perhaps to run a containerized OpenVPN server), the NET_ADMIN capability should be included.
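With the Docker CLI, capabilities can be adjusted per container using the --cap-add and --cap-drop flags. In the following sketch, the OpenVPN image name is only illustrative:

```bash
# Add just the extra capability the process needs (for example, a VPN
# server that must create network interfaces)
docker run --rm --cap-add NET_ADMIN myregistry/openvpn

# A stricter approach: drop everything and add back only what is required
docker run --rm --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
```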

Important note

Container runtimes allow containers to run with full privileges using special parameters. The processes within these containers will run with all kernel capabilities, which could be very dangerous. You should avoid using privileged containers – it is best to take some time to verify which capabilities an application needs to work correctly.

Container orchestrators

Now that we know that we need a runtime to execute containers, we must also understand that this will work in a standalone environment, without hardware high availability. This means that server maintenance, operating system upgrades, and any other problem at the software, operating system, or hardware levels may affect your application.

High availability requires resource duplicity and thus more servers and/or hardware. These resources will allow containers to run on multiple hosts, each one with a container runtime. However, maintaining application availability in this situation isn’t easy. We need to ensure that containers will be able to run on any of these nodes; in the Overlay filesystems section, we’ll learn that synchronizing container-related resources between nodes involves more than just copying a few files. Container orchestrators manage node resources and provide them to containers. They schedule containers as needed, take care of their status, provide resources for persistence, and manage internal and external communications (in Chapter 6, Fundamentals of Orchestration, we will learn how some orchestrators delegate some of these features to different modules to optimize their work).

The most famous and widely used container orchestrator today is Kubernetes. It has a lot of great features to help manage clustered containers, although the learning curve can be tough. Docker Swarm, on the other hand, is quite simple and allows you to quickly execute your applications with high availability (or resilience). We will cover both in detail in Chapter 7, Orchestrating with Swarm, and Chapter 8, Deploying Applications with the Kubernetes Orchestrator. There were other contenders in this race, but they fell by the wayside while Kubernetes took the lead.

HashiCorp’s Nomad and Apache’s Mesos are still being used for very special projects but are out of scope for most enterprises and users. Kubernetes and Docker Swarm are community projects, and some vendors even include them within their enterprise-ready solutions. Red Hat’s OpenShift, SUSE’s Rancher, Mirantis Kubernetes Engine (the old Docker Enterprise platform), and VMware’s Tanzu, among others, all provide on-premises and some cloud-prepared custom Kubernetes platforms. But those who made Kubernetes the most-used platform were the well-known cloud providers – Google, Amazon, Microsoft, and Alibaba, among others, serve their own container orchestration tools, such as Amazon’s Elastic Container Service or Fargate, Google’s Cloud Run, and Microsoft’s Azure Container Instances, and they also package and manage their own Kubernetes infrastructures for us to use (Google’s GKE, Amazon’s EKS, Microsoft’s AKS, and so on). They provide Kubernetes-as-a-Service platforms where you only need an account to start deploying your applications. They also offer storage, advanced networking tools, resources for publishing your applications, and even follow-the-sun or worldwide distributed architectures.

There are many Kubernetes implementations. The most popular is probably OpenShift or its open source project, OKD. There are others based on a binary that launches and creates all of the Kubernetes components using automated procedures, such as Rancher RKE (or its government-prepared release, RKE2), and those featuring only the strictly necessary Kubernetes components, such as K3S or K0S, to provide the lightest platform for IoT and more modest hardware. And finally, we have some Kubernetes distributions for desktop computers, offering all the features of Kubernetes ready to develop and test applications with. In this group, we have Docker Desktop, Rancher Desktop, Minikube, and Kubernetes in Docker (KinD). We will learn how to use them in this book to develop, package, and prepare applications for production.

We shouldn’t forget solutions for running orchestrated applications based on multiple containers on standalone servers or desktop computers, such as Docker Compose. Docker prepared a simple orchestrator (originally Python-based) for quick application development, managing the container dependencies for us. It is very convenient for testing all of our components together on a laptop with minimum overhead, instead of running a full Kubernetes or Swarm cluster. We will cover this tool – which has evolved a lot and is now part of the standard Docker client command line – in Chapter 5, Creating Multi-Container Applications.

Container images

Earlier in this chapter, we mentioned that containers run thanks to container images, which are used as templates for executing processes in isolation and attached to a filesystem; therefore, a container image contains all the files (binaries, libraries, configurations, and so on) required by its processes. These files can be a subset of some operating system or just a few binaries with configurations built by yourself.

Virtual machine templates are immutable, as are container templates. This immutability means that they don’t change between executions. This feature is key because it ensures that we get the same results every time we use an image for creating a container. Container behavior can be changed using configurations or command-line arguments through the container runtime. This ensures that images created by developers will work in production as expected, and moving applications to production (or even creating upgrades between different releases) will be smooth and fast, reducing the time to market.
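As a small sketch of this idea, the same immutable image can be launched with different runtime configurations per environment; the image tag and environment variable below are hypothetical:

```bash
# One image, two behaviors: configuration is injected at runtime,
# not baked into the image
docker run -d --name app-dev  -e LOG_LEVEL=debug myapp:1.0
docker run -d --name app-prod -e LOG_LEVEL=error myapp:1.0
```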

Container images are a collection of files distributed in layers. We shouldn’t add anything more than the files required by the application. As images are immutable, all these layers will be presented to containerized processes as read-only sets of files, and files are not duplicated between layers: only files modified or added in a layer are stored in it, so each layer keeps just the changes relative to the layer beneath it, all the way down to the original base layer (referred to as the base image).

The following diagram shows how we create a container image using multiple layers:

Figure 1.2 – Schema of stacked layers representing a container image

A base layer is always included, although it could be empty. The layers above this base layer may add new binaries or just new meta-information (the latter does not create a new file layer, only a metadata change).

To easily share these templates between computers or even environments, these file layers are packaged into .tar files, which are finally what we call images. These packages contain all the layered files, along with meta-information that describes the content, specifies the process to be executed, identifies the ports that will be exposed to communicate with other containerized processes, specifies the user who will own it, indicates the directories that will be kept out of the container’s life cycle, and so on.

We use different methods to create these images, but we aim to make the process reproducible, and thus we use Dockerfiles as recipes. In Chapter 2, Building Container Images, we will learn about the image creation workflow while utilizing best practices and diving deep into command-line options.
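As a quick preview before Chapter 2, here is a minimal sketch of such a recipe and how its instructions map to layers; the application binary copied into the image is hypothetical:

```bash
# Write a minimal Dockerfile; most instructions add a new layer
cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
# Assumes a compiled 'app' binary exists in the build context
COPY app /usr/local/bin/app
USER 1000
EXPOSE 8080
CMD ["/usr/local/bin/app"]
EOF

# Build the image and inspect the layers that compose it
docker build -t myapp:1.0 .
docker image history myapp:1.0
```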

These container images are stored in registries. This application software is intended to store file layers and meta-information in a centralized location, making it easy to share common layers between different images. This means that two images using a common Debian base image (a subset of files from the complete operating system) will share these base files, thus optimizing disk space usage. The same sharing also applies on the containers’ underlying hosts’ local filesystems, saving a lot of space.

Another result of the use of these layers is that containers using the same template image to execute their processes will use the same set of files, and only those files that get modified will be stored.

All these behaviors related to the optimized use of files shared between different images and containers are provided by operating systems thanks to overlay filesystems.

Overlay filesystems

An overlay filesystem is a union mount filesystem – a way of combining multiple underlying directories (mount points) into one that appears to contain their whole combined content. The result is a single directory that contains all the files and sub-directories from all sources.

Overlay filesystems merge content from different directories; when file objects with the same name exist in more than one layer, the upper layer takes precedence. This is the magic behind container-image layers’ reusability and disk space saving.
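The following sketch reproduces this behavior directly with the Linux overlay driver, outside of any container runtime (run as root; the directory names are arbitrary):

```bash
# Create the directories that will play each role
mkdir -p lower upper work merged
echo "content from the lower (read-only) layer" > lower/base.txt

# Mount the overlay: 'merged' shows the union of lower and upper
sudo mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work merged

# Reads come from the lower layer; writes are copied up into 'upper'
cat merged/base.txt
echo "modified" | sudo tee merged/base.txt
ls upper/   # base.txt now exists here, holding the modified content
```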

Now that we understand how images are packaged and how they share content, let’s go back to learning a bit more about containers. As you may have learned in this section, containers are processes that run in isolation on top of a host operating system thanks to a container runtime. Although the kernel host is shared by multiple containers, features such as kernel namespaces and cgroups provide special containment layers that allow us to isolate them. Container processes need some files to work, which are included in the container space as immutable templates. As you may think, these processes will probably need to modify or create some new files found on container image layers, and a new read-write layer will be used to store these changes. The container runtime presents this new layer to the container to enable changes – we usually refer to this as the container layer.

The following schema outlines the read-only layers coming from the container image template, along with the newly added container layer, where the container’s running processes store their file modifications:

Figure 1.3 – Container image layers will always be read-only; the container adds a new layer with read-write capabilities

The changes made by container processes are always ephemeral as the container layer will be lost whenever we remove the container, while image layers are immutable and will remain unchanged. With this behavior in mind, it is easy to understand that we can run multiple containers using the same container image.

The following figure represents this situation where three different running containers were created from the same image:

Figure 1.4 – Three different containers run using the same container image

As you may have noticed, this behavior leaves a very small footprint on our operating systems in terms of disk space. Container layers are very small (or at least they should be, and you as a developer will learn which files shouldn’t be left inside the container life cycle).
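A quick, hedged way to see this sharing with Docker is to start two containers from the same image and compare their container layers (docker diff may also list a few runtime files, such as temporary or cache entries):

```bash
# Two containers sharing the same read-only image layers
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Change a file in only one of them
docker exec web1 sh -c 'echo "# tuned" >> /etc/nginx/nginx.conf'

# 'docker diff' shows only what each container stored in its own layer
docker diff web1
docker diff web2
```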

Container runtimes manage how these overlay folders will be included inside containers and the magic behind that. The mechanism for this is based on specific operating system drivers that implement copy-on-write filesystems. Layers are arranged one on top of the other and only files modified within them are merged on the upper layer. This process is managed at speed by operating system drivers, but some small overhead is always expected, so keep in mind that all files that are modified continuously by your application (logs, for example) should never be part of the container.

Important note

Copy-on-write uses small layered filesystems or folders. Files from any layer are accessible for reading, but writing requires searching for the file within the underlying layers and copying it to the upper layer to store the changes. Therefore, the I/O overhead of reading files is very small and we can keep multiple layers for better file distribution between containers. In contrast, writing requires more resources, so it is better to leave big files and those subject to many or continuous modifications out of the container layer.

It is also important to notice that containers are not ephemeral at all. As mentioned previously, changes in the container layer are retained until the container is removed from the operating system; so, if you create a 10 GB file in the container layer, it will reside on your host’s disk. Container orchestrators manage this behavior, but be careful where you store your persistent files. Administrators should do container housekeeping and disk maintenance to avoid disk-pressure problems.

Developers should keep this in mind and prepare their applications using containers to be logically ephemeral and store persistent data outside the container’s layers. We will learn about options for persistence in Chapter 10, Leveraging Application Data Management in Kubernetes.

This thinking leads us to the next section, where we will discuss the intrinsic dynamism of container environments.

Understanding dynamism in container-based applications

We have seen how containers run using immutable storage (container images) and how the container runtime adds a new layer for managing changed files. Although we mentioned in the previous section that containers are not ephemeral in terms of disk usage, we have to include this feature in our application’s design. Containers will start and stop whenever you upgrade your application’s components. Whenever you change the base image, a completely new container will be created (remember the layers ecosystem described in the previous section). This becomes even more evident if you want to distribute these application components across a cluster – even using the same image will result in different containers being created on different hosts. Thus, this dynamism is inherent in these platforms.

In the context of networking communications inside containers, we know that processes running inside a container share its network namespace, and thus they all get the same network stack and IP address. But every time a new container is created, the container runtime will provide a new IP address. Thanks to container orchestration and the Domain Name System (DNS) included, we can communicate with our containers. As IP addresses are dynamically managed by the container runtime’s internal IP Address Management (IPAM) using defined pools, every time a container dies (whether the main process is stopped, killed manually, or ended by an error), it will free its IP address and IPAM will assign it to a new container that might be part of a completely different application. Hence, we can’t rely on specific IP addresses, and we shouldn’t use container IP addresses in our application configurations (or even worse, write them in our code, which is a bad practice in every scenario). IP addresses will be dynamically managed by the IPAM container runtime component by default. We will learn about better mechanisms we can use to reference our application’s containers, such as service names, in Chapter 4, Running Docker Containers.

Applications use fully qualified domain names (or short names if we are using internal domain communications, as we will learn when we use Docker Compose to run multi-container applications, and also when applications run in more complicated container orchestrations).

Because IP addresses are dynamic, special resources should be used to assign sets of IP addresses (or a unique IP address, if we have just one process replica) to service names. In the same way, publishing application components requires some resource mappings, using network address translation (NAT) to connect users and external services with the services running inside containers, which may be distributed across a cluster on different servers or even different infrastructures (such as cloud-provided container orchestrators, for example).
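The following sketch shows both ideas with Docker on a single host: containers reference each other by name on a user-defined network, and a published port uses NAT to expose a containerized service (the names, ports, and password value are illustrative):

```bash
# Containers on a user-defined network resolve each other by name,
# so dynamic IP addresses never appear in our configuration
docker network create appnet
docker run -d --network appnet --name db -e POSTGRES_PASSWORD=example postgres:16
docker run -d --network appnet --name api -p 8080:80 nginx   # published on host port 8080 via NAT

# From "api", the database is reached by its service name, not its IP
docker exec api getent hosts db
```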

Since we’re reviewing the main concepts related to containers in this chapter, we can’t miss out on the tools that are used for creating, executing, and sharing containers.

Tools for managing containers

As we learned previously, the container runtime will manage most of the actions we can perform with containers. Most of these runtimes run as daemons and provide an interface for interacting with them. Among these tools, Docker stands out as it provides all the tools in a box. Docker acts as a client-server application, and in newer releases, the client and server components are packaged separately, but in any case, both are needed by users. At first, when Docker Engine was the most popular and reliable container engine, Kubernetes adopted it as its runtime. But this marriage did not last long, and Docker Engine support was deprecated in Kubernetes release 1.20 and removed in 1.24. This happened because Docker manages its own integration of Containerd, which is neither standardized nor directly usable through the Kubernetes CRI. Despite this fact, Docker is still the most widely used option for developing container-based applications and the de facto standard for building images.

We mentioned Docker Desktop and Rancher Desktop earlier in this section. Both act as container runtime clients that use either the docker or nerdctl command lines. We can use such clients because, in both cases, either dockerd or containerd acts as the container runtime.

Developers and the wider community pushed Docker to provide a solution for users who prefer to run containers without having to run a privileged system daemon, which is dockerd’s default behavior. It took some time but finally, a few years ago, Docker published its rootless runtime with user privileges. During this development phase, another container executor arrived, called Podman, created by Red Hat to solve the same problem. This solution can run without root privileges and aims to avoid the use of a daemonized container runtime. The host user can run containers without any system privilege by default; only a few tweaks are required by administrators if the containers are to be run in a security-hardened environment. This made Podman a very secure option for running containers in production (without orchestration). Docker also included rootless containers by the end of 2019, making both options secure by default.
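As a short, hedged illustration, both tools let a regular user run a container without root privileges; the Docker rootless setup script below is shipped in Docker's rootless extras package:

```bash
# Podman is daemonless and runs rootless by default for regular users
podman run --rm docker.io/library/alpine echo "hello from a rootless container"

# Docker's rootless mode installs a per-user daemon; once set up, the
# familiar CLI works against it
dockerd-rootless-setuptool.sh install
docker run --rm alpine echo "hello from rootless Docker"
```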

As you learned at the beginning of this section, containers are processes that run on top of an operating system, isolated using its kernel features. It is quite evident why containers are so popular in microservice environments (one container runs a process, which is ultimately a microservice), although we can still build microservice-based applications without containers. It is also possible to use containers to run whole application components together, although this isn’t an ideal situation.

Important note

In this chapter, we’ll largely focus on software containers in the context of Linux operating systems. This is because they were only introduced in Windows systems much later. However, we will also briefly discuss them in the context of Windows.

We shouldn’t think of containers as equivalent to virtual nodes. As discussed earlier in this section, containers are mainly based on cgroups and kernel namespaces, while virtual nodes are based on hypervisor software. This software provides sandboxing capabilities and specific virtualized hardware resources to guest hosts. We still need to prepare operating systems to run these virtual guest hosts. Each guest node will receive a piece of virtualized hardware and we must manage servers’ interactions as if they were physical.

We’ll compare these models side by side in the following section.

Comparing virtualization and containers

The following schema represents a couple of virtual guest nodes running on top of a physical host:

Figure 1.5 – Applications running on top of virtual guest nodes, running on top of a physical server

A physical server running its own operating system executes a hypervisor software layer to provide virtualization capabilities. A specific amount of hardware resources is virtualized and provisioned to these new virtual guest nodes. We should install new operating systems for these new hosts and after that, we will be able to run applications. Physical host resources are partitioned for guest hosts and both nodes are completely isolated from each other. Each virtual machine executes its own kernel and its operating system runs on top of the host. There is complete isolation between guests’ operating systems because the underlying host’s hypervisor software keeps them separated.

In this model, we require a lot of resources, even if we just need to run a couple of processes per virtual host. Starting and stopping virtual hosts will take time. Lots of non-required software and processes will probably run on our guest host and it will require some tuning to remove them.

As we have learned, the microservices model is based on the idea of applications running decoupled in different processes with complete functionality. Thus, running a complete operating system within just a couple of processes doesn’t seem like a good idea.

Although automation will help us, we still need to maintain and configure those guest operating systems: running the required processes and managing users, access rights, and network communications, among other things. System administrators maintain these hosts as if they were physical. Developers require their own copies to develop, test, and certify application components. Scaling up these virtual servers can be a problem because, in most cases, increasing their resources requires a complete reboot to apply the changes.

Modern virtualization software provides API-based management, which improves usability and virtual node maintenance, but it is not enough for microservice environments. Elastic environments, where components should be able to scale up or down on demand, do not fit well in virtual machines.

Now, let’s review the following schema, which represents a set of containers running on physical and virtual hosts:

Figure 1.6 – A set of containers running on top of physical and virtual hosts

All containers in this schema share the same host kernel as they are just processes running on top of an operating system. In this case, we don’t care whether they run on a virtual or a physical host; we expect the same behavior. Instead of hypervisor software, we have a container runtime for running containers. Only a template filesystem and a set of defined resources are required for each container. To clarify, a complete operating system filesystem is not required – we just need the specific files required by our process to work. For example, if a process runs on a Linux kernel and is going to use some network capabilities, then the /etc/hosts and /etc/nsswitch.conf files would probably be required (along with some network libraries and their dependencies). The attack surface is completely different from that of a whole operating system full of binaries, libraries, and running services, regardless of whether the application uses them.
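
To make this more tangible, here is a hedged Dockerfile sketch (the statically compiled app binary is hypothetical): a minimal image built from an empty filesystem ships only the files the process really needs, instead of a full distribution userland:

    # Minimal image: only the binary (plus, if needed, a few support files such as
    # CA certificates or /etc/nsswitch.conf) exists inside the container filesystem
    FROM scratch
    COPY ./app /app
    ENTRYPOINT ["/app"]

An image like this exposes a far smaller attack surface than one built from a complete operating system base image.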

Containers are designed to run just one main process (and its threads or sub-processes) and this makes them lightweight. They can start and stop as fast as their main process does.

All the resources consumed by a container are related to the given process, which is great in terms of the allocation of hardware resources. We can calculate our application’s resource consumption by observing the load of all its microservices.
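
As a hedged example using the standard Docker CLI (the container name webapp is hypothetical), we can both bound and observe this per-container consumption:

    # Limit a container to half a CPU core and 256 MiB of memory at start time
    $ docker run -d --name webapp --cpus 0.5 --memory 256m nginx:alpine

    # Observe live CPU, memory, network, and I/O usage per container (Ctrl+C to exit)
    $ docker stats webapp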

We define images as templates for running containers. These images contain all the files required by the container to work plus some meta-information providing its features, capabilities, and which commands or binaries will be used to start the process. Using images, we can ensure that all the containers created with one template will run the same. This eliminates infrastructure friction and helps developers prepare their applications to run in production. The configuration (and of course security information such as credentials) is the only thing that differs between the development, testing, certification, and production environments.
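
This meta-information can be read directly from any image. For instance, the following hedged example prints the default command, entrypoint, and exposed ports recorded in the public nginx:alpine image:

    # Show part of the metadata stored alongside the image layers
    $ docker image inspect nginx:alpine \
        --format 'Cmd={{.Config.Cmd}} Entrypoint={{.Config.Entrypoint}} Ports={{.Config.ExposedPorts}}'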

Software containers also improve application security because they run by default with limited privileges and allow only a set of system calls. They run anywhere; all we need is a container runtime to be able to create, share, and run containers.

Now that we know what containers are and the most important concepts involved, let’s try to understand how they fit into development processes.

Building, sharing, and running containers

Build, ship, and run: you might have heard or read this quote years ago. Docker Inc. used it to promote the ease of using containers. When creating container-based applications, we can use Docker to build container images, share these images between environments (moving the content from our development workstations to testing and staging environments), execute them as containers, and finally use these packages in production. Only a few changes are required throughout, mainly at the application’s configuration level. This workflow ensures application consistency and immutability between the development, testing, and staging stages. Depending on the container runtime and container orchestrator chosen for each stage, Docker could be present throughout (Docker Engine and Docker Swarm). Either way, most people still use the Docker command line to create container images thanks to its continuously evolving features, which allow us, for example, to build images for different processor architectures from our desktop computers.
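
For example, a single Buildx invocation can produce a multi-architecture image from a regular desktop computer. This is a sketch only: the repository name is hypothetical, and it assumes a Buildx builder with multi-platform support and that you are logged in to the target registry:

    # Build and push an image for two processor architectures in one step
    $ docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -t myregistry.example.com/myapp:1.0 \
        --push .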

Adding continuous integration (CI) and continuous deployment (CD) (or continuous delivery, depending on the source) to the equation simplifies developers’ lives so they can focus on their application’s architecture and code.

They can code on their workstations and push their code to a source code repository; this event triggers a CI/CD automation that builds the application artifacts, compiling the code and producing binaries or libraries. This automation can also package these artifacts inside container images, which become the new application artifacts and are stored in image registries (the backends that store container images). Different executions can be chained to test this newly compiled component together with other components in the integration phase, verify it with some tests in the testing phase, and so on, passing through different stages until it gets to production. All these chained workflows are based on containers, configuration, and the images used for execution. In this workflow, developers never explicitly create a release image; they only build and test development ones, but the same Dockerfile recipe is used on their workstations and in the CI/CD phases executed on servers. Reproducibility is key.
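
The exact pipeline definition depends on the CI/CD tool, but the steps it automates are usually equivalent to the following hedged shell sketch (the registry, image name, test script, and GIT_COMMIT variable are hypothetical):

    # 1. Build the image from the same Dockerfile used on developer workstations
    $ docker build -t registry.example.com/team/app:${GIT_COMMIT} .

    # 2. Run the component's tests inside a container created from that image
    $ docker run --rm registry.example.com/team/app:${GIT_COMMIT} ./run-tests.sh

    # 3. Push the image so later stages (and, eventually, production) reuse it
    $ docker push registry.example.com/team/app:${GIT_COMMIT}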

Developers can run multiple containers on their developer workstations as if they were using the real environment. They can test their code along with other components in their environment, allowing them to evaluate and discover problems faster and fix them even before moving their components to the CI/CD pipelines. When their code is ready, they can push it to their code repository and trigger the automation. Developers can build their development images, test them locally (be it a standalone component, multiple components, or even a full application), prepare their release code, then push it, and the CI/CD orchestrator will build the release image for them.

In these contexts, images are shared between environments via image registries. Shipping images from server to server is easy because each host’s container runtime downloads the images from the given registries – and only the layers not already present on a server are downloaded, which is why the layer distribution within container images is key.

The following schema outlines this simplified workflow:

Figure 1.7 – Simplified schema representing a CI/CD workflow example using software containers to deliver applications to production

Servers running these different stages can be standalone servers, pools of nodes from orchestrated clusters, or even more complex dedicated infrastructures, in some cases including cloud-provided hosts or whole clusters. In each case, using container images ensures that the artifact’s content stays the same; only the infrastructure-specific configuration is applied to adapt it to each application environment.

With this in mind, we can imagine how we could build a full development chain using containers. We talked about Linux kernel namespaces already, so let’s continue by understanding how these isolation mechanisms work on Microsoft Windows.

Explaining Windows containers

During this chapter, we have focused on software containers within Linux operating systems. Software containers started on Linux systems, but given their importance and the advantages they bring in terms of host resource usage, Microsoft introduced them in the Microsoft Windows Server 2016 operating system. Before this, Windows users and administrators could only use Linux containers through virtualization. Solutions such as Docker Toolbox and, later, Docker Desktop were available for this; installing them on a Windows-based computer provided a terminal with the Docker command line, a fancy GUI, and a Linux virtual machine (Hyper-V in the case of Docker Desktop) where containers actually ran. This made it easy for entry-level users to use software containers on their Windows desktops, but Microsoft eventually brought in a game-changer here, creating a new encapsulation model.

Important note

Container runtimes are client-server applications, so the runtime can serve local clients (the default) and remote ones. When we use a remote runtime, we execute commands against it using different clients, such as docker or nerdctl, depending on the server side. Earlier in this chapter, we mentioned that desktop solutions such as Docker Desktop or Rancher Desktop use this model, running a container runtime server where common clients, executed from regular Linux terminals or Microsoft PowerShell, can manage the software containers running on the server side.
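
For instance, the Docker client can point to a remote engine simply by changing the DOCKER_HOST variable, as in this hedged sketch (the hostname is hypothetical and SSH access to a host running Docker is assumed):

    # Manage a remote Docker engine over SSH from a local terminal
    $ export DOCKER_HOST=ssh://admin@container-host.example.com
    $ docker ps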

Microsoft provided two different software container models:

  • Hyper-V Linux Containers: The old model, which uses a Linux virtual machine
  • Windows Server Containers, also known as Windows Process Containers: This is the new model, allowing the execution of Windows operating-system-based applications

From the user’s perspective, the management and execution of containers running on Windows are the same, no matter which of the preceding models is in use, but only one model can be used per server, thus applying to all containers on that server. The differences here come from the isolation used in each model.

Process isolation on Windows works in the same way it does on Linux. Multiple processes run on a host, accessing the host’s kernel, and the host provides isolation using namespaces and resource controls (along with other specific methods, depending on the underlying operating system). As we already know, processes get their own filesystem, network, process identifiers, and so on, but in this case, they also get their own Windows registry and object namespace.
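
On a Windows host running Docker, the isolation model used for a container can be requested explicitly with the --isolation flag, as in this hedged example (the image tag must be compatible with your host’s Windows version):

    # Request a Windows Server (process-isolated) container
    PS> docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver

Passing --isolation=hyperv instead requests the Hyper-V-based model.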

Due to the very nature of the Microsoft Windows operating system, some system services and dynamic-link libraries (DLLs) are required within the containers and cannot be shared from the host. Thus, process containers need to include a copy of these resources, which makes Windows images quite a lot bigger than Linux-based container images. You may also encounter compatibility issues between image releases, depending on which base operating system (file tree) was used to generate the image.

The following schema represents both models side by side so that we can observe the main stack differences:

Figure 1.8 – A comparison of Microsoft Windows software container models

We will use Windows Server containers when our application requires strong integration with the Microsoft operating system, for example, to integrate Group Managed Service Accounts (gMSA), or to encapsulate applications that don’t run on Linux hosts.

From my experience, Windows Server containers became very popular when they first arrived. However, as Microsoft improved its applications’ support for Linux operating systems, as developers became able to target .NET Core applications at either Microsoft Windows or Linux, and as few cloud providers offered this technology, they almost disappeared from the scene.

It is also important to mention that the evolution of orchestration technology helped developers move to Linux-only containers. Windows Server containers were supported only on top of Docker Swarm until 2019, when Kubernetes announced its support. Due to the large increase in Kubernetes adoption in the developer community, and even in enterprise environments, Windows Server container usage was reduced to very specific, niche use cases.

Nowadays, Kubernetes supports Microsoft Windows Server hosts running as worker roles, allowing process container execution. We will learn about Kubernetes and host roles in Chapter 8, Deploying Applications with the Kubernetes Orchestrator. Despite this fact, you will probably not find many Kubernetes clusters running Windows Server container workloads.

We mentioned that containers improve application security. The next section will show you the improvements at the host and container levels that make containers safer by default.

Improving security using software containers

In this section, we are going to introduce some of the features found on container platforms that help improve application security.

If we keep in mind how containers run, we know that we first need a host with a container runtime. So, having a host with just the required software is the first security measure. We should use dedicated hosts for running container workloads in production. We do not need to concern ourselves with this while developing, but system administrators should prepare production nodes with a minimal attack surface. We should never share these hosts to serve other technologies or services. This is so important that we can even find dedicated operating systems, such as Red Hat’s CoreOS, SUSE’s RancherOS, VMware’s PhotonOS, Talos Linux, or Flatcar Linux, to mention just the most popular ones. These are minimal operating systems that include little more than a container runtime. You can even create your own by using Moby’s LinuxKit project. Some vendors’ customized Kubernetes platforms, such as Red Hat’s OpenShift, create their clusters using CoreOS, improving the whole environment’s security.

We should never connect directly to a cluster host to execute containers. Container runtimes work in client-server mode, so we can expose the engine service, and a client running on our laptop or desktop computer is more than enough to execute containers on the host.

Locally, clients connect to container runtimes using sockets (/var/run/docker.sock for dockerd, for example). Adding read-write access to this socket for specific users will allow them to use the daemon to build, pull, and push images or execute containers. Configuring the container runtime in this way is even riskier if the host has a master role in an orchestrated environment. It is crucial to understand this and know which users will be able to run containers on each host. System administrators should keep their container runtimes’ sockets safe from untrusted users and only allow authorized access. These sockets are local, but depending on which runtime we are using, TCP or even SSH (with dockerd, for example) can be used for remote access. Always ensure Transport Layer Security (TLS) is used to secure socket access.
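
On a Linux host running dockerd, this is what that access control looks like in practice, as a minimal sketch (the username is hypothetical; group names may differ between distributions):

    # Check who can reach the local Docker socket
    $ ls -l /var/run/docker.sock

    # Grant a user access by adding them to the docker group
    # (this effectively gives that user root-equivalent power on the host)
    $ sudo usermod -aG docker someuser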

It is important to note that container runtimes do not provide any role-based access control (RBAC). We will need to add this layer later with other tools. Docker Swarm does not provide RBAC, but Kubernetes does. RBAC is key for managing user privileges and multiple application isolation.

We should mention here that, currently, desktop environments (Docker Desktop and Rancher Desktop) also work with this model, in which you don’t connect directly to the host running the container runtime. A virtualized environment is deployed on your system (using QEMU on Linux, or Hyper-V or the newer Windows Subsystem for Linux on Windows hosts), and our client, from a terminal, connects to this virtual container runtime (or to the Kubernetes API when deploying workloads on Kubernetes, as we will learn in Chapter 8, Deploying Applications with the Kubernetes Orchestrator).

Here, we have to reiterate that container runtimes add only a subset of kernel capabilities by default to container processes, but this may not be enough in some cases. To improve containers’ security behavior, container runtimes also include a default Secure Computing Mode (Seccomp) profile. Seccomp is a Linux security facility that filters the system calls allowed inside containers. Specific profiles can be included and used by runtimes to allow additional required system calls. You, as the developer, need to know when your application requires extra capabilities or uncommon system calls. The special features described in this section are used by host monitoring tools, for example, or when we need to add a new kernel module using system administration containers.
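
When the defaults are not enough, both the capabilities and the Seccomp profile can be adjusted per container, as in this hedged sketch (the custom profile file is hypothetical):

    # Add a single extra capability instead of running a fully privileged container
    $ docker run --rm --cap-add NET_ADMIN alpine ip link set lo mtu 1400

    # Replace the default Seccomp profile with a custom one
    $ docker run --rm --security-opt seccomp=./custom-profile.json alpine echo ok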

Container runtimes usually run as daemons; thus, they will quite probably run as the root user. This means that a container could mount the host’s files inside it (we will learn how to mount volumes and host paths within containers in Chapter 4, Running Docker Containers) or join the host’s namespaces (container processes may access the host’s PIDs, networks, IPCs, and so on). To avoid the undesired effects of these container runtime privileges, system administrators should apply special security measures using Linux Security Modules (LSMs), such as SELinux or AppArmor, among others.

SELinux should be integrated into container runtimes and container orchestrators. These integrations can be used to ensure, for example, that only certain paths are allowed inside containers. If your application requires access to the host’s files, non-default SELinux labels should be included to modify the default runtime behavior. Container runtimes’ installation packages include these settings, among others, to ensure that common applications run without problems. However, applications with special requirements, such as those that read the host’s logs, will require further security configuration.
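
For example, when bind-mounting a host path on an SELinux-enabled host, the volume options ask the runtime to relabel the content so the container is allowed to read it. This is a sketch only; the host path is hypothetical:

    # ':Z' relabels the path for exclusive use by this container,
    # while ':z' shares the label between several containers
    $ docker run --rm -v /opt/app-data:/data:Z alpine ls /data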

So far in this chapter, we have provided a quick overview of the key concepts related to containers. In the following section, we’ll put this into practice.

Labs

In this first chapter, we covered a lot of content, learning what containers are and how they fit into the modern microservices architecture.

In this lab, we will install a fully functional development environment for container-based applications. We will use Docker Desktop because it includes a container runtime, its client, and a minimal but fully functional Kubernetes orchestration solution.

We could use Docker Engine in Linux directly (the container runtime only, following the instructions at https://docs.docker.com/) for most labs but we will need to install a new tool for the Kubernetes labs, which requires a minimal Kubernetes cluster installation. Thus, even for just using the command line, we will use the Docker Desktop environment.

Important note

We will use a Kubernetes desktop environment to minimize CPU and memory requirements. There are even lighter Kubernetes cluster alternatives such as KinD or K3S, but these may require some customization. Of course, you can also use any cloud provider’s Kubernetes environment if you feel more comfortable doing so.

Installing Docker Desktop

This lab will guide you through the installation of Docker Desktop on your laptop or workstation and how to execute a test to verify that it works correctly.

Docker Desktop can be installed on Microsoft Windows 10, most of the common Linux flavors, and macOS (the arm64 and amd64 architectures are both supported). This lab will show you how to install this software on Windows 10, but I will use Windows and Linux interchangeably in other labs as they mostly work the same – we will review any differences between the platforms when required.

We will follow the simple steps documented at https://docs.docker.com/get-docker/. Docker Desktop can be deployed on Windows using Hyper-V or the newer Windows Subsystem for Linux 2 (WSL 2). This second option uses fewer compute and memory resources and is nicely integrated into Microsoft Windows, making it the preferred installation method, but note that WSL2 is required on your host before installing Docker Desktop. Please follow the instructions from Microsoft at https://learn.microsoft.com/en-us/windows/wsl/install before installing Docker Desktop. You can install any Linux distribution because the integration will be included automatically.

We will use the Ubuntu WSL distribution. It is available from the Microsoft Store and is simple to install:

Figure 1.9 – Ubuntu in the Microsoft Store

During the installation, you will be prompted for username and password details for this Windows subsystem installation:

Figure 1.10 – After installing Ubuntu, you will have a fully functional Linux Terminal

You can close this Ubuntu Terminal as the Docker Desktop integration will require you to open a new one once it has been configured.

Important note

You may need to execute some additional steps at https://docs.microsoft.com/windows/wsl/wsl2-kernel to update WSL2 if your operating system hasn’t been updated.

Now, let’s continue with the Docker Desktop installation:

  1. Download the installer from https://docs.docker.com/get-docker/:

Figure 1.11 – Docker Desktop download section

  2. Once downloaded, execute the Docker Desktop Installer.exe binary. You will be asked to choose between Hyper-V or WSL2 backend virtualization; we will choose WSL2:
Figure 1.12 – Choosing the WSL2 integration for better performance

  3. After clicking Ok, the installation process will begin decompressing the required files (libraries, binaries, default configurations, and so on). This could take some time (1 to 3 minutes), depending on your host’s disk speed and compute resources:
Figure 1.13 – The installation process will take a while as the application files are decompressed and installed on your system

  4. To finish the installation, we will be asked to log out and log in again because our user was added to new system groups (Docker) to enable access to the remote Docker daemon via operating system pipes (similar to Unix sockets):
Figure 1.14 – Docker Desktop has been successfully installed and we must log out

  5. Once we log in, we can execute Docker Desktop using the newly added application icon. We can enable Docker Desktop execution on start, which could be very useful, but it may slow down your computer if you are short on resources. I recommend starting Docker Desktop only when you are going to use it.

    Once we’ve accepted the Docker Subscription license terms, Docker Desktop will start. This may take a minute:

Figure 1.15 – Docker Desktop is starting

You can skip the quick guide that will appear when Docker Desktop is running because we will learn more about this in the following chapters as we deep dive into building container images and container execution.

  6. We will get the following screen, showing us that Docker Desktop is ready:
Figure 1.16 – Docker Desktop main screen

  7. We need to enable WSL2 integration with our favorite Linux distribution:
Figure 1.17 – Enabling our previously installed Ubuntu using WSL2

  8. After this step, we are finally ready to work with Docker Desktop. Let’s open a terminal using our Ubuntu distribution, execute docker, and, after that, docker info:
Figure 1.18 – Executing some Docker commands just to verify container runtime integration

As you can see, we have a fully functional Docker client command line associated with the Docker Desktop WSL2 server.

  9. We will end this lab by executing an Alpine container (a small Linux distribution), reviewing its process tree and listing its root filesystem.

    We can execute docker run -ti alpine to download the Alpine image and execute a container using it:

Figure 1.19 – Creating a container and executing some commands inside before exiting
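
If you want to reproduce the session shown in the figure, the commands look roughly like this (a sketch; prompts and output will vary):

    $ docker run -ti alpine
    / # ps
    / # ls /
    / # exit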

  10. This container execution left changes in Docker Desktop; we can review the current images present in our container runtime:

Figure 1.20 – Docker Desktop – the Images view

  11. We can also review the container, which is already dead because we exited by simply executing exit inside its shell:
Figure 1.21 – Docker Desktop – the Containers view

Now, Docker Desktop works and we are ready to work through the following labs using our WSL2 Ubuntu Linux distribution.

Summary

In this chapter, we learned the basics around containers and how they fit into modern microservices applications. The content presented in this chapter has helped you understand how to implement containers in distributed architectures, using already-present host operating system isolation features and container runtimes, which are the pieces of software required for building, sharing, and executing containers.

Software containers assist application development by providing resilience, high availability, scalability, and portability thanks to their very nature, and will help you create and manage the application life cycle.

In the next chapter, we will deep dive into the process of creating container images.


Key benefits

  • Gain a clear understanding of software containers from the SecDevOps perspective
  • Master the construction of application pieces within containers to achieve a seamless life cycle
  • Prepare your applications to run smoothly and with ease in complex container orchestrators
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Developers are changing their deployment artifacts from application binaries to container images, giving rise to the need to build container-based apps as part of their new development workflow. Managing an app’s life cycle is complex and requires effort—this book will show you how to efficiently develop, share, and execute applications. You’ll learn how to automate the build and delivery process using CI/CD tools with containers as container orchestrators manage the complexity of running cluster-wide applications, creating infrastructure abstraction layers, while your applications run with high availability, resilience, and persistence. As you advance, you’ll develop, test, and debug applications on your desktop and get them ready to run in production with optimal security standards, using deployment patterns and monitoring tools to help identify common issues. You’ll also review deployment patterns that’ll enable you to solve common deployment problems, providing high availability, scalability, and security to your applications. Finally, you’ll explore different solutions to monitor, log, and instrument your applications as per open-source community standards. By the end of this book, you’ll be able to manage your app’s life cycle by implementing CI/CD workflows using containers to automate the building and delivery of its components.

Who is this book for?

This book is for developers and DevOps engineers looking to learn about the implementation of containers in application development, especially DevOps engineers who deploy, monitor, and maintain container-based applications running on orchestrated platforms. In general, this book is for IT professionals who want to understand Docker container-based applications and their deployment. A basic understanding of coding and frontend-backend architectures is needed to follow the examples presented in this book.

What you will learn

  • Find out how to build microservices-based applications using containers
  • Deploy your processes within containers using Docker features
  • Orchestrate multi-component applications on standalone servers
  • Deploy applications cluster-wide in container orchestrators
  • Solve common deployment problems such as persistency or app exposure using best practices
  • Review your application's health and debug it using open-source tools
  • Discover how to orchestrate CI/CD workflows using containers

Product Details

Publication date: Nov 28, 2023
Length: 490 pages
Edition: 1st
Language: English
ISBN-13: 9781805127987




Table of Contents

19 Chapters

Part 1: Key Concepts of Containers
Chapter 1: Modern Infrastructure and Applications with Docker
Chapter 2: Building Docker Images
Chapter 3: Sharing Docker Images
Chapter 4: Running Docker Containers
Chapter 5: Creating Multi-Container Applications
Part 2: Container Orchestration
Chapter 6: Fundamentals of Container Orchestration
Chapter 7: Orchestrating with Swarm
Chapter 8: Deploying Applications with the Kubernetes Orchestrator
Part 3: Application Deployment
Chapter 9: Implementing Architecture Patterns
Chapter 10: Leveraging Application Data Management in Kubernetes
Chapter 11: Publishing Applications
Chapter 12: Gaining Application Insights
Part 4: Improving Applications’ Development Workflow
Chapter 13: Managing the Application Life Cycle
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 5.0 out of 5 (5 ratings)
5 star: 100%, 4 star: 0%, 3 star: 0%, 2 star: 0%, 1 star: 0%
BrettHargreaves Jan 29, 2024
Rated 5 out of 5
As a software architect I always need to keep abreast of technologies, and containerisation is a crucial one to understand. Unfortunately, a lot of guides on the subject are either too basic, or assume too much base level knowledge. The Containers for Developers Handbook by Francisco Javier Ramirez Urea is great because it covers the entire subject step by step. It starts with a basic introduction to what containers are and why they were developed, before quickly explaining how to build, share and run your own images. You are then introduced to Orchestration - a key concept for modern applications - and then finishes off by delving into best practices around architecture patterns and publishing & monitoring container based apps. A truly comprehensive guide, and highly recommended!
Amazon Verified review
Tomica Kaniski Dec 29, 2023
Rated 5 out of 5
If you are a software developer (or even if you are not), you may be struggling with containers and containerization and probably looking for a "one guide to help with it all" - this may be the guide you are looking for! This book is a simple, practical guide on containers and container orchestration in modern application development and publishing, which every developer should at least skim through. All in all, it is a very nice collection of practical explanations and tips and I would recommend it to anyone working with containerized apps.
Amazon Verified review
Tiny Dec 27, 2023
Rated 5 out of 5
Containers, Kubernetes, Docker, Helm, micro-services, service mesh, orchestration—Bingo! These terms all seem like buzzword elements, but the “Containers for Developers Handbook” (Packt, 2023) by Francisco Javier Ramirez Urea is one of the best references I’ve come across to build skills and knowledge for these complicated areas. Every chapter includes a full lab section to practice some more challenging aspects. Chock full of code examples and diagrams, it includes four sections, discussing key concepts, orchestration, deployment, and managing application lifecycles. I heartily recommend this book to anyone dealing with containers on a regular basis. The first section covers all the basic introductory elements, from a history of containers to getting your first products out the door. Throughout, the book uses the OSI model to show how containers interact with the system and connect to hardware as well as other containers. The book uses basic Docker image standards to show you how to build and share effective tools. Further, the section introduces Docker Compose to adjust to deploying either multiple containers or sequenced events where one container depends on others. The next section discusses why one might use orchestration for dependency resolution, status, software-implemented circuit breakers, scalability, and high availability. The high availability section delves into numerous issues necessary for modern applications. After a quick review of the different possible orchestrators, a chapter is spent on the Kubernetes orchestration as well as Docker Swarm. Each is compared to the other, highlighting strengths and weaknesses as well as personal preferences between the two. As with every chapter, these include labs with graphics and code so you can test exactly how your containers might work across different environments. I’d combine the last two sections for deployment and the application life cycle. The deployment piece covers low-level architecture within the system, managing data, and then gaining observability. Observing multiple containers can always be an issue, especially in terms of aggregating logs, but the book does an excellent job in introducing Prometheus, linked to Grafana and OTEL to ensure you don’t miss anything that might impact container performance. Finally, the book hits on some preferred patterns with Waterfall, Agile, and Spiral before introducing DevOps CI/CD practices into a detailed discussion. A minor complaint: the book is highly technical in a way that can make it challenging to new developers. The author makes significant efforts to explain the various paths and outcomes but sometimes these can be challenging to visualize. Some aspects of containers can only be experienced by doing and that is not the author’s fault. If anything, he tries to make up for the small gaps through various labs. Each lab section lasts 10-15 pages in every chapter with a set problem, discuss approaches, and walks the reader through basic solutions. Overall, “Containers for Developers Handbook” (Packt, 2023) is an excellent reference and training tool for any organization working with containers. The explanations are outstanding, the code examples are solid, and most instances include the CLI and GUI approaches through the various tools. If you work with containers in any aspect, I recommend reading this book to supplement your knowledge, and if you haven’t, this still offers an excellent place to get started.
Amazon Verified review
William Francillette Jan 07, 2024
Rated 5 out of 5
This book is basically a complete course! The author Francisco Javier Ramírez Urea starts by Containers fundamentals, and gradually guide us towards orchestration and how to host and manage highly available and resilient applications. Every chapter is illustrated with a lab and examples available from a GitHub repository. The author’s expertise permeates the book, describing components and protocols in detail with practical tools and tips for implementing Docker, Swarm, and Kubernetes. He also provides valuable insights into publishing and managing the application lifecycle, adhering to best practices and security recommendations.
Amazon Verified review
jml Dec 13, 2023
Rated 5 out of 5
Containers for Developers Handbook “contains” pretty much everything one could want to know regarding the theory and practice of Docker-based software operations. Starting from the build process and covering security, orchestration, housekeeping, and more, the book provides a comprehensive overview of software containerization. Extensive coverage of Kubernetes and a CI/CD workflow for the development lifecycle rounds out the volume. Each chapter ends with a lab section and a summary of the content covered. Unlike the usual “here’s a list of questions, go check the answers at the end” or “here’s a list of instructions, hope it works for you” appendices, the labs in Containers for Developers are actually useful. Step-by-step instructions are provided with explanations of exactly what’s going on during each operation. Even better, the expected outputs are provided, so those who are reading the book without access to a lab (or using this as a reference) can still get tremendous value out of the lab sections; I wish more authors would follow this approach. The only thing missing from this book is a “Troubleshooting Containers” section; there are hints and so forth scattered throughout, but it would be nice to have a container-specific methodology and toolset included for those who encounter problems during everyday Docker and Kubernetes operation. All in all, though, Containers for Developers is an excellent learning tool and reference for anyone involved with software containerization — it’s definitely not just for developers!
Amazon Verified review

FAQs

What is the delivery time and cost of print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

The orders shipped to the countries that are listed under EU27 will not bear custom charges. They are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

A customs duty or localized taxes may be applicable on shipments to recipient countries outside of the EU27. These duties must be paid by the customer and are not included in the shipping charges applied to the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $ 50, for you to receive a package, you will have to pay additional import tax of 19% which will be $ 9.50 to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over € 22, for you to receive a package, you will have to pay additional import tax of 18% which will be € 3.96 to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact [email protected] with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at [email protected] using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on [email protected] with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on [email protected] within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on [email protected] who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items (damaged, defective or incorrect).
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged, with book material defect, contact our Customer Relation Team on [email protected] within 14 days of receipt of the book with appropriate evidence of damage and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner which is on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal