Learn OpenShift

Containers and Docker Overview

This book is about much more than just the fundamentals of OpenShift. It's about the past, present, and future of microservices and containers in general. We are going to cover OpenShift and its surroundings: the fundamentals of containers, Docker basics, and hands-on sections where we work with both Kubernetes and OpenShift in order to get comfortable with them.

During our OpenShift journey, we will walk you through all the main and most of the advanced components of OpenShift. We are going to cover OpenShift security and networking and also application development for OpenShift using the most popular and built-in OpenShift DevOps tools, such as CI/CD with Jenkins and Source-to-Image (S2I) in conjunction with GitHub.

We will also learn about the most critical part for every person who would like to actually implement OpenShift in their company—the design part. We are going to show you how to properly design and implement OpenShift, examining the most common mistakes made by those who have just started working with OpenShift.

This chapter focuses on container and Docker technologies. We will describe container concepts and Docker basics, from the architecture down to low-level technologies. In this chapter, we will learn how to use the Docker CLI and manage Docker containers and Docker images. A significant part of the chapter is devoted to building and running Docker container images, and you will be asked to develop a number of Dockerfiles and to containerize several applications.

In this chapter, we will look at the following:

  • Containers overview
  • Docker container architecture
  • Understanding Docker images and layers
  • Understanding Docker Hub and Docker registries
  • Installing and configuring Docker software
  • Using the Docker command line
  • Managing images via Docker CLI
  • Managing containers via Docker CLI
  • Understanding the importance of environment variables inside Docker containers
  • Managing persistent storage for Docker containers
  • Building a custom Docker image

Technical requirements

In this chapter, we are going to use the following technologies and software:

  • Vagrant
  • Bash Shell
  • GitHub
  • Docker
  • Firefox (recommended) or any other browser

The Vagrant installation and all the code we use in this chapter are located on GitHub at https://github.com/PacktPublishing/Learn-OpenShift.

Instructions on how to install and configure Docker are provided in this chapter as we learn.

Bash Shell will be used as a part of your virtual environment based on CentOS 7.

Firefox or any other browser can be used to navigate through Docker Hub.

As a prerequisite, you will need a stable internet connection from your laptop.

Containers overview

Traditionally, software applications were developed following a monolithic architecture approach, meaning all the services or components were locked to each other. You could not take out a part and replace it with something else. That approach changed over time into the N-tier approach, which in turn is one step on the way toward container and microservices architectures.

The major drawbacks of the monolithic architecture were its lack of reliability, scalability, and high availability. It was really hard to scale monolithic applications due to their nature, and their reliability was also questionable because you could rarely operate and upgrade them without downtime. There was no way to efficiently scale out a monolithic application; you could not just add another one, five, or ten instances back to back and let them coexist with each other.

We had monolithic applications in the past, but then people and companies started thinking about application scalability, security, reliability, and high availability (HA). That is what created the N-tier design. The N-tier design is a standard application design, such as a 3-tier web application with a web tier, an application tier, and a database backend. Now it is all evolving into microservices. Why do we need them? The short answer is better numbers: it is cheaper, much more scalable, and more secure. Containerized applications bring you to a whole new level, where you can benefit from automation and DevOps.

Containers are a new generation of virtual machines, and they bring software development to a whole new level. A container is an isolated set of rules and resources inside a single operating system. This means that containers can provide the same benefits as virtual machines while using far less CPU, memory, and storage. There are several popular container providers, including LXC, rkt, and Docker, which is the one we are going to focus on in this book.

Container features and advantages

This architecture brings a lot of advantages to software development.

Some of the major advantages of containers are as follows:

  • Efficient hardware resource consumption
  • Application and service isolation
  • Faster deployment
  • Microservices architecture
  • The stateless nature of containers

Efficient hardware resource consumption

Whether you run containers natively on a bare-metal server or use virtualization techniques, containers allow you to utilize resources (CPU, memory, and storage) in a much more efficient manner. In the case of a bare-metal server, containers allow you to run tens or even hundreds of the same or different containers, providing much better resource utilization compared to the usual single application running on a dedicated server. We have seen servers in the past whose utilization at peak times was only 3%, which is a waste of resources. If you instead run several of the same or different applications directly on the same server, they are going to conflict with each other; even if they work, you are going to face a lot of problems during day-to-day operation and troubleshooting.

If you isolate these applications by introducing popular virtualization techniques such as KVM, VMware, XEN, or Hyper-V, you will run into a different issue: overhead. In order to virtualize your app using any hypervisor, you need to install a guest operating system on top of your hypervisor host, and each of these operating systems needs CPU and memory to function; for example, each VM has its own kernel and kernel space associated with it. A perfectly tuned container platform can give you up to four times more containers in comparison to standard VMs. It may be insignificant when you have five or ten VMs, but when we talk about hundreds or thousands, it makes a huge difference.

Application and service isolation

Imagine a scenario where we have ten different applications hosted on the same server. Each application has a number of dependencies (such as packages, libraries, and so on). If you need to update an application, it usually involves updating the process and its dependencies. If you update all related dependencies, most likely it will affect the other applications and services, and it may cause them to stop working properly. Sure, to a degree these issues are addressed by environment managers such as virtualenv for Python and rbenv/rvm for Ruby—and dependencies on shared libraries can be isolated via LD_LIBRARY_PATH—but what if you need different versions of the same package? Containers and virtualization solve that issue. Both VMs and containers provide environment isolation for your applications.

However, in comparison to bare-metal application deployment, container technology (for example, Docker) provides an efficient way to isolate applications, their libraries, and other resources from each other. It not only gives these applications the ability to coexist on the same OS, but also provides efficient security, which is a must for every customer-facing and content-sensitive application. It also allows you to update and patch your containerized applications independently of each other.

Faster deployment

Using container images, discussed later in this book, allows us to speed up container deployment. We are talking about seconds to completely restart a container versus minutes or tens of minutes with bare-metal servers and VMs. The main reason for this is that a container does not need to restart the whole OS; it just needs to restart the application itself.

Microservices architecture

Containers bring application deployment to a whole new level by introducing microservices architecture. What it essentially means is that, if you have a monolith or N-tier application, it usually has many different services communicating with each other. Containerizing your services allows you to break down your application into multiple pieces and work with each of them independently. Let's say you have a standard application that consists of a web server, application, and database. You can probably put it on one or three different servers, three different VMs, or three simple containers, running each part of this application. All these options require a different amount of effort, time, and resources. Later in this book, you will see how simple it is to do using containers.

The stateless nature of containers

Containers are stateless, which means that you can bring containers up and down, create or destroy them at any time, and this will not affect your application performance. That is one of the greatest features of containers. We are going to delve into this later in this book.

Docker container architecture

Docker is one of the most popular application containerization technologies these days. So why do we want to use Docker when there are other container options available? Because collaboration and contribution are key in the era of open source, and Docker has achieved many things in this area that other technologies have not.

For example, Docker partnered with other container developers, such as Red Hat, Google, and Canonical, to jointly work on its components. Docker also contributed its software container format and runtime to the Linux Foundation's Open Container Initiative. Docker has made containers very easy to learn about and use.

Docker architecture

As we mentioned already, Docker is the most popular container platform. It allows for creating, sharing, and running applications inside Docker containers. Docker separates running applications from the infrastructure. It allows you to speed up the application delivery process drastically. Docker also brings application development to an absolutely new level. In the diagram that follows, you can see a high-level overview of the Docker architecture:

Docker architecture

Docker uses a client-server type of architecture:

  • Docker server: This is a service running as a daemon in an operating system. This service is responsible for downloading, building, and running containers.
  • Docker client: The CLI tool is responsible for communicating with Docker servers using the REST API.

Docker's main components

Docker uses three main components:

  • Docker containers: Isolated user-space environments running the same or different applications and sharing the same host OS. Containers are created from Docker images.
  • Docker images: Docker templates that include application libraries and applications. Images are used to create containers, so you can bring up containers immediately. You can create and update your own custom images, as well as download prebuilt images from Docker's public registry.
  • Docker registries: This is an image store. Docker registries can be public or private, meaning that you can work with images available over the internet or create your own registry for internal purposes. One popular public Docker registry is Docker Hub, discussed later in this chapter.

Linux containers

As mentioned in the previous section, Docker containers are secured and isolated from each other. In Linux, Docker containers use several standard features of the Linux kernel. This includes:

  • Linux namespaces: A feature of the Linux kernel used to isolate resources from each other. It allows one set of Linux processes to see one group of resources while another set of Linux processes sees a different group of resources. There are several kinds of namespaces in Linux: Mount (mnt), Process ID (PID), Network (net), User ID (user), Control group (cgroup), and Interprocess Communication (IPC). The kernel can place specific system resources that are normally visible to all processes into a namespace. Inside a namespace, a process can see only the resources associated with other processes in the same namespace. You can associate a process or a group of processes with their own namespace or, in the case of network namespaces, you can even move a network interface into a network namespace. For example, two processes in two different mount namespaces may have different views of what the mounted root filesystem is. Each container can be associated with a specific set of namespaces, and these namespaces are used inside that container only.
  • Control groups (cgroups): These provide an effective mechanism for resource limitation. With cgroups, you can control and manage system resources per Linux process, increasing overall resource utilization efficiency. Cgroups allow Docker to control resource utilization per container.
  • SELinux: Security-Enhanced Linux (SELinux) is a mandatory access control (MAC) system used for granular system access, initially developed by the National Security Agency (NSA). It is an additional security layer for Debian- and RHEL-based distributions such as Red Hat Enterprise Linux, CentOS, and Fedora. Docker uses SELinux for two main reasons: to protect the host and to isolate containers from each other. Container processes run with limited access to system resources using special SELinux rules.

The beauty of Docker is that it leverages the aforementioned low-level kernel technologies, but hides all complexity by providing an easy way to manage your containers.
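
Once you have Docker installed (later in this chapter), one way to peek at these kernel primitives is to look up a container's main process ID and list its namespaces under /proc. This is only an illustrative sketch; the container name and PID shown here are made up:

$ docker run -d --name httpd_ns httpd
$ docker inspect --format '{{.State.Pid}}' httpd_ns
2711
$ sudo ls -l /proc/2711/ns
lrwxrwxrwx. 1 root root 0 Mar 6 21:20 ipc -> ipc:[4026532451]
lrwxrwxrwx. 1 root root 0 Mar 6 21:20 mnt -> mnt:[4026532449]
lrwxrwxrwx. 1 root root 0 Mar 6 21:20 net -> net:[4026532454]
...

Each symlink points to a namespace that is private to the container's processes.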

Understanding Docker images and layers

A Docker image is a read-only template used to build containers. An image consists of a number of layers that are combined into a single virtual filesystem accessible for Docker applications. This is achieved by using a special technique which combines multiple layers into a single view. Docker images are immutable, but you can add an extra layer and save them as a new image. Basically, you can add or change the Docker image content without changing these images directly. Docker images are the main way to ship, store, and deliver containerized applications. Containers are created using Docker images; if you do not have a Docker image, you need to download or build one.

Container filesystem

The container filesystem, used for every Docker image, is represented as a list of read-only layers stacked on top of each other. These layers eventually form a base root filesystem for a container. Different storage drivers are used to make this happen. All changes to the filesystem of a running container are made in the top image layer of that container, which is called the container layer. What this basically means is that several containers may share access to the same underlying Docker image layers while writing their changes locally and independently of each other. This process is shown in the following diagram:

Docker layers

Docker storage drivers

A Docker storage driver is the main component to enable and manage container images. Two main technologies are used for that—copy-on-write and stackable image layers. The storage driver is designed to handle the details of these layers so that they interact with each other. There are several drivers available. They do pretty much the same job, but each and every one of them does it differently. The most common storage drivers are AUFS, Overlay/Overlay2, Devicemapper, Btrfs, and ZFS. All storage drivers can be categorized into three different types:

  • Union filesystems: AUFS, Overlay, Overlay2
  • Snapshotting filesystems: Btrfs, ZFS
  • Copy-on-write block devices: Devicemapper
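
You can check which storage driver your Docker daemon is actually using with the docker info command; the exact driver reported depends on your platform and configuration (on a stock CentOS 7 installation it is typically devicemapper):

$ docker info | grep -i 'storage driver'
Storage Driver: devicemapper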

Container image layers

As previously mentioned, a Docker image contains a number of layers that are combined into a single filesystem using a storage driver. The layers (also called intermediate images) are generated when commands are executed during the Docker image build process. Usually, Docker images are created using a Dockerfile, the syntax of which will be described later. Each layer represents an instruction in the image's Dockerfile.

Each layer, except the very last one, is read-only:

Docker image layers

A Docker image usually consists of several layers, stacked one on top of the other. The top layer has read-write permissions, and all the remaining layers have read-only permissions. This concept is very similar to the copy-on-write technology. So, when you run a container from the image, all the changes are done to this top writable layer.

Docker registries

As mentioned earlier, a Docker image is a way to deliver applications. You can create a Docker image and share it with other users using a public/private registry service. A registry is a stateless, highly scalable server-side application which you can use to store and download Docker images. Docker registry is an open source project, under the permissive Apache license. Once the image is available on a Docker registry service, another user can download it by pulling the image and can use this image to create new Docker images or run containers from this image.

Docker supports two types of Docker registry:

  • Public registry
  • Private registry

Public registry

You can start a container from an image stored in a public registry. By default, the Docker daemon looks for and downloads Docker images from Docker Hub, which is a public registry provided by Docker. However, many vendors add their own public registries to the Docker configuration at installation time. For example, Red Hat has its own proven and blessed public Docker registry which you can use to pull Docker images and to build containers.

Private registry

Some organizations or specific teams don't want to share their custom container images with everyone for a reason. They still need a service to share Docker images, but just for internal usage. In that case, a private registry service can be useful. A private registry can be installed and configured as a service on a dedicated server or a virtual machine inside your network.

You can easily install a private Docker registry by running a Docker container from a public registry image. The private Docker registry installation process is no different from running a regular Docker container with additional options.
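
As a minimal sketch, assuming the official registry:2 image and its default port 5000, starting a private registry and pushing a local image to it could look like the following (depending on your Docker version and TLS setup, you may first need to whitelist it as an insecure registry):

$ docker run -d -p 5000:5000 --name registry registry:2
$ docker tag httpd localhost:5000/httpd
$ docker push localhost:5000/httpd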

Accessing registries

A Docker registry is accessed via the Docker daemon service using a Docker client. The Docker command line uses a RESTful API to request process execution from the daemon. Most of these commands are translated into HTTP requests and may be transmitted using curl.

The process of using Docker registries is shown in the following section.

A developer can create a Docker image and put it into a private or public registry. Once the image is uploaded, it can be immediately used to run containers or build other images.
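
To illustrate the RESTful nature of this API, assuming a local registry is listening on port 5000 (such as the one sketched in the previous section), you can query it directly with curl; the exact output depends on what has been pushed:

$ curl http://localhost:5000/v2/_catalog
{"repositories":["httpd"]}
$ curl http://localhost:5000/v2/httpd/tags/list
{"name":"httpd","tags":["latest"]}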

Docker Hub overview

Docker Hub is a cloud-based registry service that allows you to build your images and test them, push these images, and link to Docker cloud so you can deploy images on your hosts. Docker Hub provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.

Docker Hub is the public registry managed by the Docker project, and it hosts a large set of container images, including those provided by major open source projects, such as MySQL, Nginx, Apache, and so on, as well as customized container images developed by the community.

Docker Hub provides some of the following features:

  • Image repositories: You can find and download images managed by other Docker Hub users. You can also push or pull images from private image libraries you have access to.
  • Automated builds: You can automatically create new images when you make changes to a source code repository.
  • Webhooks: The action trigger that allows you to automate builds when there is a push to a repository.
  • Organizations: The ability to create groups and manage access to image repositories.

In order to start working with Docker Hub, you need to log in to Docker Hub using a Docker ID. If you do not have one, you can create your Docker ID by following the simple registration process. It is completely free. The link to create your Docker ID if you do not have one is https://hub.docker.com/.

You can search for and pull Docker images from Docker Hub without logging in; however, to push images you must log in. Docker Hub gives you the ability to create public and private repositories. Public repositories are publicly available to anyone, while private repositories are restricted to a set of users or organizations.

Docker Hub contains a number of official repositories. These are public, certified repositories from different vendors and Docker contributors. It includes vendors like Red Hat, Canonical, and Oracle.

Docker installation and configuration

Docker software is available in two editions: Community Edition (CE) and Enterprise Edition (EE).

Docker CE is a good point from which to start learning Docker and using containerized applications. It is available on different platforms and operating systems. Docker CE comes with an installer so you can start working with containers immediately. Docker CE is integrated and optimized for infrastructure so you can maintain a native app experience while getting started with Docker.

Docker Enterprise Edition (EE) is a Container-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructures, both on-premises and in the cloud. In other words, Docker EE is similar to Docker CE but comes with commercial support from Docker, Inc.

Docker software supports a number of platforms and operating systems. The packages are available for most popular operating systems such as Red Hat Enterprise Linux, Fedora Linux, CentOS, Ubuntu Linux, Debian Linux, macOS, and Microsoft Windows.

Docker installation

The Docker installation process is dependent on the particular operating system. In most cases, it is well described on the official Docker portal—https://docs.docker.com/install/. As a part of this book, we will be working with Docker software on CentOS 7.x. Docker installation and configuration on other platforms is not part of this book. If you still need to install Docker on another operating system, just visit the official Docker web portal.

Usually, the Docker node installation process looks like this:

  1. Installation and configuration of an operating system
  2. Docker packages installation
  3. Configuring Docker settings
  4. Running the Docker service

We assume that our readers have sufficient knowledge to install and configure a CentOS-based virtual machine (VM) or bare-metal host. If you do not know how to use Vagrant, please follow the guidelines at https://www.vagrantup.com/intro/getting-started/.

Once you have properly installed Vagrant on your system, just run vagrant init centos/7 followed by vagrant up. You can verify whether the VM is up with the vagrant status command, and finally you can SSH into the VM using the vagrant ssh command.

Since Docker is supported on most popular OSes, you also have the option to install Docker directly on your desktop OS. We advise you to use Vagrant or another virtualization provider such as VMware or KVM, because we have done all the tests inside a virtual environment on CentOS 7. If you still want to install Docker on your desktop OS, follow this link: https://docs.docker.com/install/.

Docker CE is available on CentOS 7 from the standard repositories. The installation process comes down to installing the docker package:

# yum install docker -y
...
output truncated for brevity
...
Installed:
docker.x86_64 2:1.12.6-71.git3e8e77d.el7.centos.1
Dependency Installed:
...
output truncated for brevity
...

Once the installation is completed, you need to run the Docker daemon to be able to manage your containers and images. On RHEL7 and CentOS 7, this just means starting the Docker service like so:

# systemctl start docker
# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

You can verify that your Docker daemon works properly by showing Docker information provided by the docker info command:

# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
...
output truncated for brevity
...
Registries: docker.io (secure)

Docker configuration

Docker daemon configuration is managed by the Docker configuration file (/etc/docker/daemon.json), and Docker daemon startup options are usually controlled by the systemd unit named docker. On Red Hat-based operating systems, some configuration options are also available in /etc/sysconfig/docker and /etc/sysconfig/docker-storage. Modifying these files allows you to change Docker parameters such as the UNIX socket path, TCP listen sockets, registry configuration, storage backends, and so on.
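
For example, a minimal /etc/docker/daemon.json that enables debug logging and whitelists a hypothetical internal registry might look like the following sketch; remember to restart the Docker service after changing it:

# cat /etc/docker/daemon.json
{
  "debug": true,
  "insecure-registries": ["registry.example.com:5000"]
}
# systemctl restart docker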

Using the Docker command line

In order to start using the Docker CLI, you need to configure and bring up a Vagrant VM. If you are using macOS, the configuration process using Vagrant will look like this:

$ mkdir vagrant; cd vagrant
$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = 'node1.example.com'
  config.vm.network "private_network", type: "dhcp"
  config.vm.provision "shell", inline: "groupadd docker; usermod -aG docker vagrant; yum install docker -y; systemctl enable docker; systemctl start docker"
end
$ vagrant up
$ vagrant ssh

Using Docker man, help, info

The Docker daemon listens on unix:///var/run/docker.sock but you can bind Docker to another host/port or a Unix socket. The Docker client (the docker utility) uses the Docker API to interact with the Docker daemon.
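
For instance, if a daemon were configured to listen on a TCP socket (which we do not do in our lab setup), the client could be pointed at it either with the -H option or via the DOCKER_HOST environment variable; the address below is purely illustrative:

$ docker -H tcp://192.168.99.100:2375 info
$ export DOCKER_HOST=tcp://192.168.99.100:2375
$ docker info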

The Docker client supports dozens of commands, each with numerous options, so an attempt to list them all would just result in a copy of the CLI reference from the official documentation. Instead, we will provide you with the most useful subsets of commands to get you up and running.

You can always check available man pages for all Docker sub-commands using:

$ man -k docker

You will be able to see a list of man pages for Docker and all the sub-commands available:

$ man docker
$ man docker-info
$ man Dockerfile

Another way to get information regarding a command is to use docker COMMAND --help:

# docker info --help
Usage: docker info
Display system-wide information
--help Print usage

The docker utility allows you to manage container infrastructure. All sub-commands can be grouped as follows:

  • Managing images: search, pull, push, rmi, images, tag, export, import, load, save
  • Managing containers: run, exec, ps, kill, stop, start
  • Building custom images: build, commit
  • Information gathering: info, inspect

Managing images using Docker CLI

The first step in running and using a container on your server or laptop is to find and pull a Docker image from the Docker registry, starting with the docker search command.

Let's search for the web server container. The command to do so is:

$ docker search httpd
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
httpd ... 1569 [OK]
hypriot/rpi-busybox-httpd ... 40
centos/httpd 15 [OK]
centos/httpd-24-centos7 ... 9

Alternatively, we can go to https://hub.docker.com/ and type httpd in the search window. It will give us something similar to the docker search httpd results:

Docker Hub search results

Once the container image is found, we can pull this image from the Docker registry in order to start working with it. To pull a container image to your host, you need to use the docker pull command:

$ docker pull httpd

The output of the preceding command is as follows:

Note that Docker uses concepts from union filesystem layers to build Docker images. This is why you can see seven layers being pulled from Docker Hub. One stacks up onto another, building a final image.

By default, Docker will try to pull the image with the latest tag, but we can also download an older, more specific version of an image we are interested in using different tags. The best way to quickly find available tags is to go to https://hub.docker.com/, search for the specific image, and click on the image details:

Docker Hub image details

There we are able to see all the image tags available for us to pull from Docker Hub. There are ways to achieve the same goal using the Docker CLI, which we are going to cover later in this book. Let's pull a specific older version of the httpd image:

$ docker pull httpd:2.2.29

The output of the preceding code should look something like the following:

You may notice that the download time for the second image was significantly lower than for the first image. This happens because the first image we pulled (httpd:latest) has most layers in common with the second image (httpd:2.2.29), so there is no need to download all the layers again. This is very useful and saves a lot of time in large environments.

Working with images

Now we want to check the images available on our local server. To do this, we can use the docker images command:

$ docker images

The output of the preceding command will be as shown in the following screenshot:


If we downloaded the wrong image, we can always delete it from the local server by using the docker rmi (ReMove Image) command. In our case, we have two versions of the same image, so we can specify a tag for the image we want to delete:

$ docker rmi httpd:2.2.29

The output of the preceding command will be as shown in the following screenshot:

At this point, we have only one image left, which is httpd:latest:

$ docker images

The output of the preceding command will be as shown in the following screenshot:


Saving and loading images

The Docker CLI allows us to export and import Docker images and container layers using the export/import or save/load Docker commands. The difference between save/load and export/import is that the former works with images, including metadata, while the export/import combination uses only container layers and doesn't include any image metadata such as the name, tags, and so on. In most cases, the save/load combination is more relevant and works properly for images without special needs. The docker save command packs the layers and metadata of all the chains required to build the image. You can then load this saved image chain into another Docker instance and create containers from these images.

The docker export command fetches the whole container, like a snapshot of a regular VM. It saves the OS, of course, but also any changes you made and any data files written during the container's life. This is more like a traditional backup.
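
A minimal sketch of the export/import path, assuming a running container named my_httpd, could look like this (note that the imported image loses its original name, tags, and history):

$ docker export my_httpd -o httpd-rootfs.tar
$ docker import httpd-rootfs.tar myhttpd:imported

Saving an image together with all of its layers and metadata, on the other hand, is done with docker save: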

$ docker save httpd -o httpd.tar

$ ls -l httpd.tar

To load the image back from the file, we can use the docker load command. Before we do that, though, let's remove the httpd image from the local repository first:

$ docker rmi httpd:latest

The output of the preceding command will be as shown in the following screenshot:

We verify that we do not have any images in the local repository:

 $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

Load the image file we previously saved with the docker save command. Just as docker import pairs with docker export, the docker load command forms a pair with docker save and is used to load a saved image archive, with all its intermediate layers and metadata, into the local Docker cache:

$ docker load -i httpd.tar

The output of the preceding command will be as shown in the following screenshot:

Check the local Docker images with the docker images command:

$ docker images

The output of the preceding command will be as shown in the following screenshot:

Uploading images to the Docker registry

Now we know how to search, pull, remove, save, load, and list available images. The last piece we are missing is how to push images back to Docker Hub or a private registry.

To upload an image to Docker Hub, we need to do a few tricks and follow these steps:

  1. Log in to Docker Hub:

$ docker login
Username: #Enter your username here
Password: #Enter your password here
Login Succeeded

  2. Copy the Docker image you want to push to a different path in the Docker repository on your server. Note that flashdumper is your Docker Hub username:

$ docker tag httpd:latest flashdumper/httpd:latest

  3. Finally, push the copied image back to Docker Hub:

$ docker push flashdumper/httpd:latest

The output of the preceding command will be as shown in the following screenshot:

Now the image is pushed to your Docker Hub and available for anyone to download.

$ docker search flashdumper/*

The output of the preceding command will be as shown in the following screenshot:

You can check the same result using a web browser. If you go to https://hub.docker.com/ you should be able to see this httpd image available under your account:

Docker Hub account images

Managing containers using Docker CLI

The next step is to actually run a container from the image we pulled from Docker Hub or a private registry in the previous sections. We are going to use the docker run command to run a container. Before we do that, let's check whether we have any containers running already by using the docker ps command:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAME

Run a container with the docker run command:

$ docker run httpd

The output of the preceding command will be as shown in the following screenshot:

The container is running in the foreground, so we cannot leave the terminal and continue working. The only way to escape it is to send a TERM signal (Ctrl + C) and kill it.

Docker ps and logs

Run the docker ps command to show that there are no running containers:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Run docker ps -a to show both running and stopped containers:

$ docker ps -a

The output of the preceding command will be as shown in the following screenshot:

There are a few things to note here. The STATUS field says that container 5e3820a43ffc exited about one minute ago. In order to get container log information, we can use the docker logs command:

$ docker logs 5e3820a43ffc

The output of the preceding command will be as shown in the following screenshot:

The last message says caught SIGTERM, shutting down. It happened after we pressed Ctrl + C. In order to run a container in background mode, we can use the -d option with the docker run command:

$ docker run -d httpd
5d549d4684c8e412baa5e30b20697b72593d87130d383c2273f83b5ceebc4af3

It generates a random ID, the first 12 characters of which are used for the container ID. Along with the generated ID, a random container name is also generated.
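
If you prefer a predictable name instead of a generated one, you can pass it explicitly with the --name option; my_httpd below is just an arbitrary example, and we will keep working with the randomly named container from the previous command:

$ docker run -d --name my_httpd httpd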

Run docker ps to verify the container ID, name, and status:

$ docker ps

The output of the preceding command will be as shown in the following screenshot:

Executing commands inside a container

From the output, we can see that the container status is UP. Now we can execute some commands inside the container using the docker exec command with different options:

$ docker exec -i 00f343906df3 ls -l /
total 12
drwxr-xr-x. 2 root root 4096 Feb 15 04:18 bin
drwxr-xr-x. 2 root root 6 Nov 19 15:32 boot
drwxr-xr-x. 5 root root 360 Mar 6 21:17 dev
drwxr-xr-x. 42 root root 4096 Mar 6 21:17 etc
drwxr-xr-x. 2 root root 6 Nov 19 15:32 home
...
Output truncated for brevity
...

The -i (--interactive) option allows you to run a command inside a Docker container without dropping into it. However, we can easily enter the container by combining the -i and -t (--tty) options (or just using -it):

$ docker exec -it 00f343906df3 /bin/bash
root@00f343906df3:/usr/local/apache2#

We should drop into the container's bash CLI. From here, we can execute other general Linux commands. This trick is very useful for troubleshooting. To exit the container console, just type exit or press Ctrl + D.

Starting and stopping containers

We can also stop and start running containers using the docker stop and docker start commands:

Enter the following command to stop the container:

$ docker stop 00f343906df3
00f343906df3

Enter the following command to start the container:

$ docker start 00f343906df3
00f343906df3

Docker port mapping

In order to actually benefit from the container, we need to make it publicly accessible from the outside. This is where we will need to use the -p option with a few arguments while running the docker run command:

$ docker run -d -p 8080:80 httpd
3b1150b5034329cd9e70f90ee21531b8b1ab1d4a85141fd3a362cd40db80e193

The -p option maps container port 80 to port 8080 on your server. Verify that the httpd container is exposed and the web server is running:

$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>

Inspecting the Docker container

While the container is running, we can inspect its parameters by using the docker inspect command. The output is provided in JSON format and it gives us a very comprehensive output:

$ docker inspect 00f343906df3
[
{
"Id": "00f343906df3f26c24e02cd61d6a37bbc36106b3b0372073673c2983cb6f",
...
output truncated for brevity
...
}
]

Removing containers

In order to delete a container, you can use the docker rm command. If the container you want to delete is running, you can stop and delete it or use the -f option and it will do the job:

$ docker rm 3b1150b50343
Error response from daemon: You cannot remove a running container 3b1150b5034329cd9e70f90ee21531b8b1ab1d4a85141fd3a362cd40db80e193. Stop the container before attempting removal or force remove

Let's try using the -f option:

$ docker rm  -f 3b1150b50343

Another trick you can use to delete all containers, both stopped and running, is the following command:

$ docker rm -f $(docker ps -qa)
830a42f2e727
00f343906df3
5e3820a43ffc
419e7ce2567e

Verify that all the containers are deleted:

$ docker ps  -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Using environment variables

Due to the dynamic and stateless nature of containers, applications cannot rely on either fixed IP addresses or DNS hostnames while communicating with middleware and other application services. Docker lets you store data such as configuration settings, encryption keys, and external resource addresses in environment variables.

Passing environment variables to a container

At runtime, environment variables are exposed to the application inside the container. You can set an environment variable in a container with the -e option, as in docker run -e VARIABLE=VALUE. You can also pass an environment variable from your shell straight through to a container by not giving it a value, as in docker run -e VARIABLE.
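
Here is a minimal sketch of both forms; GREETING is a made-up variable, and the small centos:7 image is used purely for illustration:

$ docker run --rm -e GREETING="Hello, Docker" centos:7 env | grep GREETING
GREETING=Hello, Docker
$ export GREETING="Hello from the shell"
$ docker run --rm -e GREETING centos:7 env | grep GREETING
GREETING=Hello from the shell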

Environment variables are used to set application-specific parameters, such as the address of a database server to connect to, along with login credentials.

Some container startup scripts use environment variables to perform the initial configuration of an application.

For example, the mariadb image is built to use several environment variables to start a container and create users/databases at startup. This image uses the following important parameters, among others:

  • MYSQL_ROOT_PASSWORD: This variable is mandatory and specifies the password that will be set for the MariaDB root superuser account.
  • MYSQL_DATABASE: This variable is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (the parameters described next), that user will be granted superuser access (corresponding to GRANT ALL) to this database.
  • MYSQL_USER and MYSQL_PASSWORD: These variables are optional and are used in conjunction to create a new user and set that user's password. This user will be granted superuser permissions for the database specified by the MYSQL_DATABASE variable. Both variables are required for a user to be created.

First, we can try to pull and start a mariadb container without specifying the password/user/database-related information. It will fail since the image expects the parameters. In this example, we are starting a container in the foreground to be able to see all error messages:

$ docker pull mariadb
latest: Pulling from docker.io/library/mariadb
...
output truncated for brevity
...
Digest: sha256:d5f0bc88ba397233677ff75b7b1de693d5e84527ecf2b4f59adebf8d0bcac3c4

Now try to run the mariadb container without any options or arguments:

$ docker run mariadb
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD

The docker run command failed because the MariaDB image initial startup script was not able to find the required variables. This script expects us to have at least the MariaDB root password to start a database server. Let's try to start a database container again by providing all required variables:

$ docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=example -e MYSQL_USER=example_user -e MYSQL_PASSWORD=password mariadb
721dc752ed0929dbac4d8666741b15e1f371aefa664e497477b417fcafee06ce

Run the docker ps command to verify that the container is up and running:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
721dc752ed09 mariadb "docker-entrypoint.sh" 10 seconds ago Up 9 seconds 3306/tcp mariadb

The container was created successfully. Run the verification command to check that example_user has access to the example database:

$ docker exec -it mariadb mysql -uexample_user -ppassword example -e "show databases;"
+--------------------+
| Database |
+--------------------+
| example |
| information_schema |
+--------------------+

The startup script created a user named example_user with the password password as we specified in the environment variables. It also configured a password for the root user. The full list of MariaDB image variables you can specify is located at https://hub.docker.com/_/mariadb/.

Linking containers

Environment variables adjust settings for a single container. The same approach can be used to start a multi-tier application where one container or application works alongside the other:

Multi-tier application example

In a multi-tier application, both the application server container and database server container may need to share variables such as database login credentials. Of course, we can pass all database connectivity settings to the application container using environment variables. It is very easy to make a mistake while passing multiple -e options to the docker run command, and it is very time-consuming, not to mention that it is very ineffective. Another option is to use container IP addresses to establish connections. We can gather IP address information using docker inspect but it will be difficult to track this information in a multi-container environment.

This means that using environment variables is just not enough to build multi-tier applications where containers depend on each other.

Docker has a feature called linked containers to solve this problem. It automatically copies all environment variables from one container to another. Additionally, by linking containers, we can define environment variables based on the other container's IP address and exposed ports.

Linking containers is done by simply adding the --link container:alias option to the docker run command. For example, the following command links to the container named mariadb using the db alias:

$ docker run --link mariadb:db --name my_application  httpd

The new my_application container will then get all variables defined from the linked container mariadb. Those variable names are prefixed by DB_ENV_ so as not to conflict with the new container's own environment variables.

Please be aware that the alias appears in uppercase in these variable names.

Variables providing information about container IP addresses and ports are named according to the following scheme:

  • {ALIAS}_PORT_{exposed-port}_TCP_ADDR
  • {ALIAS}_PORT_{exposed-port}_TCP_PORT

Continuing with the MariaDB image example, the application container would get the following variables:

  • DB_PORT_3306_TCP_ADDR
  • DB_PORT_3306_TCP_PORT

If the linked container exposes multiple ports, each of them generates a set of environment variables.

Let's take an example. We will be creating a WordPress container which needs access to a database server. This integration will require shared database access credentials. The first step in creating this application is to create a database server:

$ docker rm -f $(docker ps -qa)
$ docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=password mariadb
221462288bc578511154fe79411de002e05f08642b63a72bc7a8f16f7102e52b

The next step is to run a WordPress container. In that command, we will link the wordpress container with the mariadb container:

$ docker run -d --name wordpress --link mariadb:mysql -p 8080:80 wordpress
Unable to find image 'wordpress:latest' locally
Trying to pull repository docker.io/library/wordpress ...
latest: Pulling from docker.io/library/wordpress
...
output truncated for brevity
...
Digest: sha256:670e4156377063df1a02f036354c52722de0348d46222ba30ef6a925c24cd46a
1f69aec1cb88d273de499ca7ab1f52131a87103d865e4d64a7cf5ab7b430983a

Let's check container environments with the docker exec command:

$ docker exec -it wordpress env|grep -i mysql
MYSQL_PORT=tcp://172.17.0.2:3306
MYSQL_PORT_3306_TCP=tcp://172.17.0.2:3306
MYSQL_PORT_3306_TCP_ADDR=172.17.0.2
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP_PROTO=tcp
...
output truncated for brevity
...

You can see all these variables because the WordPress container startup script handles the mysql link. The link set a number of MYSQL_ENV_ and MYSQL_PORT_ variables, which are used by the WordPress startup script.

Using persistent storage

In the previous sections, we saw that containers can be created and deleted easily. But when a container is deleted, all the data associated with that container disappears too. That is why a lot of people refer to containers as a stateless architecture. We can change this behavior and keep the data by using persistent volumes. In order to enable persistent storage for a Docker container, we need to use the -v option, which binds a directory on the host filesystem into the filesystem of the container.

In the next example, we will create a MariaDB container with persistent storage in the /mnt/data folder on the host. Then, we delete the MariaDB container and recreate it again using the same persistent storage.

First, remove all previously created containers:

$ docker rm -f $(docker ps -aq)

We have to prepare the persistent storage on the node before we begin. Be aware that we need to give read/write permissions to the persistent storage directory; the MariaDB application runs as the mysql user with UID 999 inside the container. It is also important to mention that the special SELinux security context svirt_sandbox_file_t is required. This can be achieved using the following commands:

# mkdir /mnt/data
# chown 999:999 /mnt/data
# chcon -Rt svirt_sandbox_file_t /mnt/data

The next step is to create the container running the MariaDB service:

$ docker run -d -v /mnt/data:/var/lib/mysql --name mariadb -e MYSQL_ROOT_PASSWORD=password mariadb
41139532924ef461420fbcaaa473d3030d10f853e1c98b6731840b0932973309

Run the docker ps command:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41139532924e mariadb "docker-entrypoint.sh" 4 seconds ago Up 3 seconds 3306/tcp mariadb

Create a new database and verify the existence of this new DB:

$ docker exec -it mariadb mysql -uroot -ppassword -e "create database persistent;"

$ docker exec -it mariadb mysql -uroot -ppassword -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| persistent |
+--------------------+

Verify that there is new data in the /mnt/data directory created by the mariadb container. This is how we make the data persistent:

$ ls -l /mnt/data/
drwx------. 2 polkitd ssh_keys 4096 Mar 6 16:18 mysql
drwx------. 2 polkitd ssh_keys 20 Mar 6 16:18 performance_schema
drwx------. 2 polkitd ssh_keys 20 Mar 6 16:23 persistent
...
output truncated for brevity
...

Delete the mariadb container and verify that all the files are kept:

$ docker rm -f mariadb
mariadb

$ ls -l /mnt/data/
drwx------. 2 polkitd ssh_keys 4096 Mar 6 16:18 mysql
drwx------. 2 polkitd ssh_keys 20 Mar 6 16:18 performance_schema
drwx------. 2 polkitd ssh_keys 20 Mar 6 16:23 persistent
...
output truncated for brevity
...

We are going to rerun the container and verify whether the previously created persistent database survived the container's removal and re-creation:

$ docker run -d -v /mnt/data:/var/lib/mysql --name mariadb mariadb
c12292f089ccbe294cf3b9a80b9eb44e33c1493570415109effa7f397579b235
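
To double-check, we can repeat the same query we ran earlier; the listing should again include the persistent database:

$ docker exec -it mariadb mysql -uroot -ppassword -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| persistent |
+--------------------+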

As you can see, the database with the name persistent is still here.

Remove all the containers before you proceed to the next section:

$ docker rm -f $(docker ps -aq)

Creating a custom Docker image

The Docker community has Docker images for most popular software applications. These include, for example, images for web servers (Apache, Nginx, and so on), enterprise application platforms (JBoss EAP, Tomcat), images with programming languages (Perl, PHP, Python), and so on.

In most cases, you do not need to build your own Docker images to run standard software. But if you have a business need that requires having a custom application, you probably need to create your own Docker image.

There are a number of ways to create a new docker image:

  • Commit: Creating a Docker image from a running container. Docker allows you to convert a working container to a Docker image using the docker commit command. This means that image layers will be stored as a separate docker image. This approach is the easiest way to create a new image.
  • Import/Export: This is similar to the first one but uses another Docker command. Running container layers will be saved to a filesystem using docker export and then the image will be recreated using docker import. We do not recommend this method for creating a new image since the first one is simpler.
  • Dockerfile: Building a Docker image using a Dockerfile. A Dockerfile is a plain text file that contains a number of steps, sometimes called instructions. These instructions can run a particular command inside a container or copy files to a container. A user can initiate a build process using a Dockerfile, and the Docker daemon will run all the instructions from the Dockerfile in a temporary container; then this container is converted into a Docker image. This is the most common way to create a new Docker image. Building custom Docker images from a Dockerfile will be described in detail in a later chapter.
  • From scratch: Building a base Docker image. In the previous methods, Docker images are created from other Docker images, which were themselves created from a base Docker image. You cannot modify this base image unless you create one yourself. If you want to know exactly what goes into your image, you might want to create a base image instead. There are two ways to do so:
    • Create a base image layer using the tar command.
    • Use the special Dockerfile instruction FROM scratch.
    Both methods will be described in later chapters.

Customizing images using docker commit

The general recommendation is that all Docker images should be built from a Dockerfile in order to create clean and proper image layers without unwanted temporary and log files, despite the fact that some vendors deliver their Docker images without an available Dockerfile. If there is a need to modify such an existing image, you can use the standard docker commit functionality to convert an existing container into a new image.

As an example, we will try to modify our existing httpd container and make an image from it.

First, we need to get the httpd image:

$ docker pull httpd
Using default tag: latest
Trying to pull repository docker.io/library/httpd ...
latest: Pulling from docker.io/library/httpd
...
output truncated for brevity
...
Digest: sha256:6e61d60e4142ea44e8e69b22f1e739d89e1dc8a2764182d7eecc83a5bb31181e

Next, we need a container to be running. That container will be used as a template for the future image:

$ docker run -d --name httpd httpd
c725209cf0f89612dba981c9bed1f42ac3281f59e5489d41487938aed1e47641

Now we can connect to the container and modify its layers. As an example, we will update index.html:

$ docker exec -it httpd /bin/sh
# echo "This is a custom image" > htdocs/index.html
# exit

Let's see the changes we made using the docker diff command. This command shows you all files that were modified from the original image. The output looks like this:

$ docker diff httpd
C /usr
C /usr/local
C /usr/local/apache2
C /usr/local/apache2/htdocs
C /usr/local/apache2/htdocs/index.html
...
output truncated for brevity
...

The following table shows the file states of the docker diff command:

  • A: A file or directory was added
  • D: A file or directory was deleted
  • C: A file or directory was changed

In our case, the docker diff httpd command shows that index.html was changed.

Create a new image from the running container:

$ docker commit httpd custom_image
sha256:ffd3a523f9848776d65de8302253de9dc78e4948a792569ee46fad5c099312f6

Verify that the new image has been created:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
custom_image latest ffd3a523f984 3 seconds ago 177.4 MB
docker.io/httpd latest 01154c38b473 2 weeks ago 177.4 MB

The final step is to verify that the image works properly:

$ docker run -d --name custom_httpd -p 8080:80 custom_image
78fc5731d62e5a6377a7de152c0ba25d350603e6d97fa26967e06a82c8257e71

$ curl localhost:8080
This is a custom image

Using Dockerfile build

Usually, those who use Docker containers expect a high level of automation, and the docker commit command is difficult to automate. Luckily, Docker can build images automatically by reading instructions from a special file usually called a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. On CentOS 7, you can learn a lot more from the built-in documentation page: man Dockerfile.

A Dockerfile has a number of instructions that help Docker to build an image according to your requirements. Here is a Dockerfile example, which allows us to achieve the same result as in the previous section:

$ cat Dockerfile
FROM httpd
RUN echo "This is a custom image" > /usr/local/apache2/htdocs/index.html

Once this Dockerfile is created, we can build a custom image using the docker build command:

$ docker build -t custom_image2 . 
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM httpd
---> 01154c38b473
Step 2 : RUN echo "This is a custom image" > /usr/local/apache2/htdocs/index.html
---> Using cache
---> 6b9be8efcb3a
Successfully built 6b9be8efcb3a

Please note that the . at the end of the docker build command is important, as it specifies the build context (in this case, the current working directory). Alternatively, you can use ./ or even $(pwd). So the full commands are going to be:

docker build -t custom_image2 .
or

docker build -t custom_image2 ./
or

docker build -t custom_image2 $(pwd)
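
The build context passed as the last argument does not have to contain a file literally named Dockerfile in its root; if your Dockerfile is stored elsewhere or under a different name, the -f flag points docker build at it while the last argument still defines the context. A sketch, where the path dockerfiles/httpd.Dockerfile is purely illustrative:

docker build -t custom_image2 -f dockerfiles/httpd.Dockerfile .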

Let's list the images again to confirm that custom_image2 has been created:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
custom_image2 latest 6b9be8efcb3a 2 minutes ago 177.4 MB
custom_image latest ffd3a523f984 19 minutes ago 177.4 MB
docker.io/httpd latest 01154c38b473 2 weeks ago 177.4 MB

Using Docker history

We can check the history of image modifications using docker history:

$ docker history custom_image2
IMAGE CREATED CREATED BY SIZE COMMENT
6b9be8efcb3a 21 hours ago /bin/sh -c echo "This is a custom image" > /u 23 B
01154c38b473 2 weeks ago /bin/sh -c #(nop) CMD ["httpd-foreground"] 0 B
...
output truncated for brevity
...

Note that a new layer, 6b9be8efcb3a, has been added; this is where we changed the content of the index.html file compared to the original httpd image.
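
The CREATED BY column is truncated by default. If you want to see the full command that produced each layer, docker history supports a --no-trunc flag:

$ docker history --no-trunc custom_image2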

Dockerfile instructions

Some Dockerfile instructions are shown in the table:

Instruction

Description and examples

FROM image[:tag]

It sets the base image used in the build process.

Examples:

FROM httpd

FROM httpd:2.2

RUN <command> <parameters>

The RUN instruction executes any commands in a new layer on top of the current image and commits the results.

Examples:

RUN yum install -y httpd && \
    echo "custom answer" > /var/www/html/index.html

RUN ["command", "param1", "param2"]

This is the exec form of the same instruction: the command and its arguments are passed as a JSON array and executed directly, without invoking a shell.

COPY <src> <dst>

The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dst>. The <src> must be a path to a file or directory relative to the source directory that is being built (the context of the build); unlike ADD, COPY does not support remote file URLs.

Examples:

COPY index.html /var/www/html/index.html

ENTRYPOINT ["executable", "param1", "param2"]

An ENTRYPOINT helps you configure a container that can be run as an executable. When you specify an ENTRYPOINT, the whole container runs as if it were only that executable.

Examples:

ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]

If ENTRYPOINT is not specified, it effectively defaults to /bin/sh -c for a shell-form CMD, which means that CMD will be interpreted as the command to run.

EXPOSE <port>

This instruction informs a Docker daemon that an application will be listening on this port at runtime. This is not very useful when working with standalone Docker containers because port publishing is performed via the -p argument of the CLI, but it is used by OpenShift when creating a service for a new application deployed from a Docker image and by Docker itself when exporting default environment variables inside a container.

CMD ["executable", "param1", "param2"]


Provides arguments to an ENTRYPOINT command and can be overridden at runtime with the docker run command.

Example:

CMD ["/usr/sbin/httpd","-D","FOREGROUND"]

When the docker build command is run, Docker reads the provided Dockerfile from top to bottom, creating a separate layer for every instruction and placing it in the internal cache. If an instruction in the Dockerfile is updated, it invalidates the corresponding cached layer and every subsequent one, forcing Docker to rebuild them the next time docker build is run. Therefore, it's more effective to place the most malleable instructions at the end of the Dockerfile, so that the number of invalidated layers is minimized and cache usage is maximized. For example, suppose we have a Dockerfile with the following contents:

$ cat Dockerfile
FROM centos:latest
RUN yum -y update
RUN yum -y install nginx mariadb php5 php5-mysql
RUN yum -y install httpd
CMD ["nginx", "-g", "daemon off;"]

In the example, if you choose to use MySQL instead of MariaDB, the layer created by the second RUN command, as well as the third one, will be invalidated, which for complex images means a noticeably longer build process.
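
A sketch of the cache-friendly ordering described above, keeping the stable instructions near the top and moving the package set that is most likely to change into the last RUN layer (package names are only illustrative):

$ cat Dockerfile
FROM centos:latest
RUN yum -y update
RUN yum -y install httpd
RUN yum -y install nginx mariadb php5 php5-mysql
CMD ["nginx", "-g", "daemon off;"]

With this ordering, switching mariadb to mysql invalidates only the final RUN layer (plus the cheap CMD layer), so all earlier layers are reused from the cache.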

Consider the following example. Docker includes images for minimal OSes. These base images can be used to build custom images on top of them. In this example, we will use a CentOS 7 base image to create a custom web server image:

  1. First, we need to create a project directory:
$ mkdir custom_project; cd custom_project

  2. Then, we create a Dockerfile with the following content:

$ cat Dockerfile
FROM centos:7
RUN yum install httpd -y
COPY index.html /var/www/html/index.html
ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]
  3. Create the index.html file:
$ echo "A new cool image" > index.html
  4. Build the image using docker build:
$ docker build -t new_httpd_image .
Sending build context to Docker daemon 3.072 kB
...
output truncated for brevity
...
Successfully built 4f2f77cd3026
  5. Finally, we can check that the new image exists and has all the required image layers:
$ docker history new_httpd_image
IMAGE CREATED CREATED BY SIZE COMMENT
4f2f77cd3026 20 hours ago /bin/sh -c #(nop) ENTRYPOINT ["/usr/sbin/htt 0 B
8f6eaacaae3c 20 hours ago /bin/sh -c #(nop) COPY file:318d7f73d4297ec33 17 B
e19d80cc688a 20 hours ago /bin/sh -c yum install httpd -y 129 MB
...
output truncated for brevity
...
The top three layers are the instructions we added in the Dockerfile.
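
Although the walkthrough stops at building the image, a quick smoke test confirms that it serves the custom page (host port 8080 is an arbitrary choice):

$ docker run -d --name new_httpd -p 8080:80 new_httpd_image
$ curl localhost:8080

The curl call should return the A new cool image line we placed in index.html.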

Summary

In this chapter, we have discussed container architecture, worked with Docker images and containers, examined different Docker registries, learned how to manage persistent storage for containers, and finally looked at how to build a Docker image from a Dockerfile. All of these skills will be required in Chapter 2, Kubernetes Overview, where we start working with Kubernetes. Kubernetes is an essential and critical OpenShift component. It all works like a snowball: Docker skills are required for Kubernetes, and Kubernetes skills are required for OpenShift.

In the next chapter, we are going to work with Kubernetes. Kubernetes is an industry-standard orchestration layer for Docker containers. This is where you are going to install and run some basic Docker containers using Kubernetes.

Questions

  1. What are the three main Docker components? Choose one:
    1. Docker Container, Docker Image, Docker Registry
    2. Docker Hub, Docker Image, Docker Registry
    3. Docker Runtime, Docker Image, Docker Hub
    4. Docker Container, Docker Image, Docker Hub
  2. Choose two valid registry types:
    1. Personal Registry
    2. Private Registry
    3. Public Registry
    4. Security Registry
  3. The main purpose of Docker Persistent Storage is to make sure that application data is saved if a container dies:
    1. True
    2. False
  4. What Linux feature controls resource limitations for a Docker container? Choose one:
    1. Cgroups
    2. Namespaces
    3. SELinux
    4. chroot
  5. What commands can be used to build a custom image from a Dockerfile? Choose two:
    1. docker build -t new_httpd_image .
    2. docker build -t new_httpd_image .\
    3. docker build -t new_httpd_image ($pwd)
    4. docker build -t new_httpd_image ./
  6. The docker commit command saves Docker images to an upstream repository:
    1. True
    2. False

Further reading

Since we are covering the very basics of Docker containers, you may be interested in diving into specific topics. Here's a list of links that may be helpful to look through to learn more about Docker and containers in general:
