Chapter 8. Working with Containers
In this chapter, we will cover the following recipes:
- Installing LXD, the Linux container daemon
- Deploying your first container with LXD
- Managing LXD containers
- Managing LXD containers – advanced options
- Setting resource limits on LXD containers
- Networking with LXD
- Installing Docker
- Starting and managing Docker containers
- Creating images with a Dockerfile
- Understanding Docker volumes
- Deploying WordPress using a Docker network
- Monitoring Docker containers
- Securing Docker containers
Introduction
Containers are quite an old technology, existing in forms such as chroot and FreeBSD jails, and most of us have already used containers in some form or other. The rise of Docker gave containers much-needed adoption and popularity. Ubuntu also released a new tool named LXD with Ubuntu 15.04.
A container is a lightweight virtual environment that contains a process or set of processes. You might already have used containers with chroot. Just as with chroot, we create an isolated virtual environment to group and isolate a set of processes. The processes running inside the container are isolated from the base operating system environment, as well as from other containers running on the same host. Such processes cannot access or modify anything outside the container. Developments in the Linux kernel to support namespaces and cgroups have enabled containers to provide better isolation and resource-management capabilities.
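You can get a feel for kernel namespaces without any container runtime at all. The following is a minimal sketch using the unshare tool from util-linux (preinstalled on stock Ubuntu); it starts a shell in a new PID namespace, where that shell believes it is process 1:
$ sudo unshare --fork --pid --mount-proc bash
# ps aux    # inside the new namespace, bash appears as PID 1
# exit      # leave the namespace and return to the host shell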
One of the reasons for the widespread adoption of containers is the efficiency they offer compared with hypervisor-based virtualization. A VM requires its own kernel, whereas containers share the kernel with the host, resulting in a fast and lightweight isolated environment. Sharing the kernel removes much of the overhead of VMs and improves resource utilization, as processes communicate with a single shared kernel. You can think of containers as OS-level virtualization.
With containers, an entire application can be started within milliseconds, compared to the minutes a virtual machine may need. Additionally, the image size is much smaller, resulting in easier and faster cloud deployments. The shared operating system results in a smaller footprint, and the saved resources can be used to run additional containers on the same host. It is quite normal to run hundreds of containers on a single laptop.
However, containerization also has its own shortcomings. First, you cannot run cross-platform containers; that is, containers must use the same kernel as the host. You cannot run Windows containers on a Linux host, and vice versa. Second, the isolation and security are not as strong as with hypervisor-based virtualization. Containers are largely divided into two categories: OS containers and application containers. As the name suggests, application containers are designed to host a single service or application. Docker is an application container; you can still run multiple processes in Docker, but it is designed to host a single process.
OS containers, on the other hand, can be compared to virtual machines. They provide user space isolation. You can install and run multiple applications and run multiple processes inside OS containers. LXC on Linux and Jails on BSD are examples of OS containers.
In this chapter, we will take a look at LXC, an OS container, and Docker, an application container. In the first part of the chapter, we will learn how to install LXD and use it to deploy containerized virtual systems. In subsequent recipes, we will work with Docker and related technologies, and learn to create and deploy containers with Docker.
Installing LXD, the Linux container daemon
LXC is a system built on the modern Linux kernel and enables the creation and management of virtual Linux systems or containers. As discussed earlier, LXC is not a full virtualization system and shares the kernel with the host operating system, providing lightweight containerization. LXC uses Linux namespaces to separate and isolate the processes running inside containers. This provides much better security than simple chroot-based filesystem isolation. These containers are portable and can easily be moved to another system with a similar processor architecture.
Ubuntu 15.04 unveiled a new tool named LXD, which is a wrapper around LXC. The official page calls it a container hypervisor and a new user experience for LXC. Ubuntu 16.04 comes preinstalled with its latest stable release, LXD 2.0. With LXD, you no longer need to work directly with lower-level LXC tools.
LXD adds some important features to LXC containers. First, it runs unprivileged containers by default, resulting in improved security and better isolation for containers. Second, LXD can manage multiple LXC hosts and can be used as an orchestration tool. It also supports the live migration of containers across hosts.
LXD provides a central daemon named lxd and a command-line client named lxc. Containers can be managed with the command-line client or the REST API provided by the LXD daemon. It also provides an OpenStack plugin, nova-compute-lxd, to deploy containers on the OpenStack cloud.
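You can query this REST API directly over LXD's local Unix socket, which is handy for scripting and for seeing what the lxc client does under the hood. A quick sketch, assuming curl 7.40 or later and the default socket path used by LXD 2.0 on Ubuntu 16.04:
$ curl --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0   # requires membership of the lxd group, or sudo
The response is a JSON document describing the server; these are the same endpoints the lxc client calls internally.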
In this recipe, we will learn to install and configure the LXD daemon. This will set up a base for the next few recipes in this chapter.
Getting ready
You will need access to the root account or an account with sudo privileges.
Make sure that you have enough free space available on disk.
How to do it…
Ubuntu 16.04 ships with the latest release of LXD preinstalled. We just need to initialize the LXD daemon to set the basic settings.
- First, update the apt cache and try to install LXD. This should install updates to the LXD package, if any:
$ sudo apt-get update
$ sudo apt-get install lxd
Tip
If you are using Ubuntu 14.04, you can install LXD using the following command:
$ sudo apt-get -t trusty-backports install lxd
- Along with LXD, we will need one more package for ZFS, one of the most important additions to Ubuntu 16.04. We will be using ZFS as the storage backend for LXD:
$ sudo apt-get install zfsutils-linux
- Once LXD has been installed, we need to configure the daemon before we start using it. Use lxd init to start the initialization process. This will ask some questions about the LXD configuration:
$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxdpool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 10
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
Warning: Stopping lxd.service, but it can still be activated by: lxd.socket
LXD has been successfully configured.
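Before moving on, you may want to verify what lxd init created. The following checks are a sketch; the exact output will differ per system:
$ sudo zpool list lxdpool   # the ZFS pool backing LXD's storage
$ lxc config show           # server configuration, including the storage.zfs_pool_name key
$ lxc info                  # daemon, API, and environment details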
Now, we have our LXD setup configured and ready to use. In the next recipe, we will start our first container with LXD.
How it works…
Ubuntu 16.04 comes preinstalled with LXD and makes it even easier to start with system containers or operating system virtualization. In addition to LXD, Ubuntu now ships with inbuilt support for ZFS (OpenZFS), a filesystem with support for various features that improve the containerization experience. With ZFS, you get faster clones and snapshots with copy-on-write, data compression, disk quotas, and automated filesystem repairs.
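You can exercise these ZFS features directly from the command line. The following sketch assumes, purely for illustration, a dataset named lxdpool/demo:
$ sudo zfs set compression=lz4 lxdpool                    # enable transparent compression on the pool
$ sudo zfs snapshot lxdpool/demo@before                   # take an instant copy-on-write snapshot
$ sudo zfs clone lxdpool/demo@before lxdpool/demo-clone   # create a writable clone from the snapshot
$ sudo zfs get compressratio lxdpool                      # see how much space compression is saving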
LXD is a wrapper around lower-level LXC or Linux containers. It provides the REST API for communicating and managing LXC components. LXD runs as a central daemon and adds some important features, such as dynamic resource restrictions and live migrations between multiple hosts. Containers started with LXD are unprivileged containers by default, resulting in improved security and isolation.
This recipe covers the installation and initial configuration of the LXD daemon. As mentioned previously, LXD comes preinstalled with Ubuntu 16.04. The installation commands should fetch updates to LXD, if any. We have also installed zfsutils-linux, a user space package to interact with ZFS. After the installation, we initialized the LXD daemon to set basic configuration parameters, such as the default storage backend and network bridge for our containers.
We selected ZFS as the default storage backend and created a new ZFS pool called lxdpool, backed by a simple loopback device. In a production environment, you should opt for a physical device or a separate partition. If you have already created a ZFS pool, you can use it directly by choosing no for Create a new ZFS pool and entering its name. To use a separate storage device or partition, choose yes when asked about using an existing block device.
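If you later outgrow the loop device, you can prepare a pool on a dedicated disk yourself and point lxd init at it by answering no to pool creation. A sketch, where /dev/sdb is a placeholder for your spare disk (this destroys any data on it):
$ sudo zpool create lxdpool /dev/sdb   # create the pool on a whole disk
$ sudo zpool status lxdpool            # confirm the pool is online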
Tip
Use the following commands to get ZFS on Ubuntu 14.04:
$ sudo apt-add-repository ppa:zfs-native/stable
$ sudo apt-get update && sudo apt-get install ubuntu-zfs
ZFS is the recommended storage backend, but LXD also works with various other options, such as Logical Volume Manager (LVM) and btrfs (pronounced "butter F S"), which offer nearly the same features as ZFS, as well as a simple directory-based storage backend.
Next, you can choose to make LXD available on the network. This is necessary if you are planning a multi-host setup with support for migration. The initialization also offers to configure the lxdbr0 bridge interface, which will be used by all containers. By default, this bridge is configured with IPv6 only. Containers created with the default configuration will have their veth0 virtual Ethernet adapter attached to lxdbr0 through a NAT network. This is the gateway for containers to communicate with the outside world. LXD also installs the dnsmasq package, which provides a local DHCP server to assign IP addresses to containers and acts as a local name-resolution service.
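To see what the bridge configuration produced, you can inspect the interface and its settings on the host. A sketch based on the LXD 2.0 packaging on Ubuntu 16.04, where the bridge settings are assumed to live in /etc/default/lxd-bridge:
$ ip addr show lxdbr0           # bridge state and addresses on the host
$ cat /etc/default/lxd-bridge   # bridge settings written by the configuration dialog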
If you skipped the network bridge configuration or need to update it, you can use the following command to get back to the network configuration screen:
$ sudo dpkg-reconfigure -p medium lxd
There's more…
LXD 2.0, which ships with Ubuntu 16.04, is an LTS release. If you want to get your hands on the latest release, you can install stable versions from the following repository:
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
For development releases, change the PPA to ppa:ubuntu-lxc/lxd-git-master.
For more information, visit the LXC download page at https://linuxcontainers.org/lxc/downloads/.
If you still want to install plain LXC, you can, using the following command:
$ sudo apt-get install lxc
This will install the required user space packages and all the commands necessary to work directly with LXC. Note that all LXC commands are prefixed with lxc-, for example, lxc-create and lxc-info. To get a list of all commands, type lxc- in your terminal and press Tab twice.
See also
- For more information, check the LXD page of the Ubuntu Server guide: https://help.ubuntu.com/lts/serverguide/lxd.html
- The LXC blog post series is at https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/
- The LXD 2.0 blog post series is at https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
- Ubuntu 16.04 switched to Systemd, which provides its own container framework, systemd-nspawn; read more about systemd containers on its Ubuntu man page at http://manpages.ubuntu.com/manpages/xenial/man1/systemd-nspawn.1.html
- See how to get started with systemd containers at https://community.flockport.com/topic/32/systemd-nspawn-containers
Deploying your first container with LXD
In this recipe, we will create our first container with LXD.
Getting ready
You will need access to the root account or an account with sudo privileges.
How to do it…
LXD works on the concept of remote servers and images served by those remotes. Starting a new container with LXD is as simple as downloading a container image and starting a container from it, all with a single command. Follow these steps:
- To start your first container, use the lxc launch command, as follows:
$ lxc launch ubuntu:14.04/amd64 c1
LXD will download the required image (14.04/amd64) and start the container, printing its progress as it goes.
- The lxc launch command downloads the required image, creates a new container, and then starts it as well. You can see your new container in the list of containers with the lxc list command, as follows:
$ lxc list
- Optionally, you can get more details about a container with the lxc info command:
$ lxc info c1
- Now that your container is running, you can start working with it. With the lxc exec command, you can execute commands inside a container. Use the following command to obtain the details of Ubuntu running inside the container:
$ lxc exec c1 -- lsb_release -a
- You can also open a bash shell inside a container, as follows:
$ lxc exec c1 -- bash
How it works…
Creating images is a time-consuming task. With LXD, the team has solved this problem by downloading prebuilt images from trusted remote servers. Unlike LXC, where images are built locally, LXD downloads them from remote servers and keeps a local cache of these images for later use. The default installation contains three remote servers:
- ubuntu: This contains all Ubuntu releases
- ubuntu-daily: This contains all Ubuntu daily builds
- images: This contains all other Linux distributions
You can get a list of available remote servers with this command:
$ lxc remote list
Similarly, to get a list of available images on a specific remote server, use the following command:
$ lxc image list ubuntu:
In the previous example, we used 64-bit Ubuntu 14.04 from one of the preconfigured remote servers (ubuntu:). When we start a specific container, LXD checks the local cache for the availability of the respective image; if it's not available locally, the required image gets fetched from the remote server and cached locally for later use. These images are kept in sync with remote updates. They also expire if not used for a specific period, and expired images are automatically removed by LXD. By default, the expiration period is set to 10 days.
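The expiration period is configurable. As a sketch, the following server-level setting shortens it to 5 days and then lists what is currently cached:
$ lxc config set images.remote_cache_expiry 5   # expire unused cached images after 5 days
$ lxc image list                                # images currently cached on this host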
Note
You can find a list of various configuration parameters for LXC and LXD documented on GitHub at https://github.com/lxc/lxd/blob/master/doc/configuration.md.
The lxc launch command creates a new container and then starts it as well. If you want to just create a container without starting it, you can do that with the lxc init command, as follows:
$ lxc init ubuntu:xenial c2
All containers (or their rootfs) are stored under the /var/lib/lxd/containers directory, and images are stored under the /var/lib/lxd/images directory.
Note
All LXD containers are unprivileged containers by default. You do not need any special privileges to create and manage containers. That said, LXD does support privileged containers as well.
While starting a container, you can specify a set of configuration parameters using the --config flag. LXD also supports configuration profiles. Profiles are sets of configuration parameters that can be applied to a group of containers, and a container can have multiple profiles. LXD ships with two preconfigured profiles: default and docker.
To get a list of profiles, use the lxc profile list command, and to get the contents of a profile, use the lxc profile show <profile_name> command.
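As a quick sketch of the profile workflow (the profile name web and the memory limit are illustrative), you can copy the default profile, tune it, and launch a container with it:
$ lxc profile copy default web              # start a new profile from the default one
$ lxc profile set web limits.memory 256MB   # cap memory for containers using this profile
$ lxc launch ubuntu:xenial c3 -p web        # launch a container with the new profile applied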
Sometimes, you may need to start a container just to experiment with something: execute a few random commands and then undo all the changes. By default, all LXD containers are permanent, but LXD also allows you to create such throwaway or ephemeral containers using the --ephemeral or -e flag. When stopped, an ephemeral container is deleted automatically.
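A short sketch of the ephemeral workflow (the container name tmp1 is illustrative):
$ lxc launch ubuntu:xenial tmp1 -e   # start a throwaway container
$ lxc stop tmp1                      # stopping it also deletes it
$ lxc list tmp1                      # tmp1 no longer appears in the list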
With LXD, you can start and manage containers on remote servers as well. For this, the LXD daemon needs to be exposed to the network. This can be done at the time of initializing LXD or with the following commands:
$ lxc config set core.https_address "[::]"
$ lxc config set core.trust_password some-password
Next, make sure that you can access the remote server and add it as a remote for LXD with the lxc remote add command:
$ lxc remote add remote01 192.168.0.11   # lxc remote add name server_ip
Now, you can launch containers on the remote server, as follows:
$ lxc launch ubuntu:xenial remote01:c1
There's more…
Unlike LXC, LXD container images do not support password-based SSH logins. The container still has the SSH daemon running, but login is restricted to a public key. You need to add a key to the container before you can log in with SSH. LXD supports file management with the lxc file command; use it as follows to set your public key inside an Ubuntu container:
$ lxc file push ~/.ssh/id_rsa.pub \
    c1/home/ubuntu/.ssh/authorized_keys \
    --mode=0600 --uid=1000
Once the public key is set, you can use SSH to connect to the container, as follows:
$ ssh ubuntu@container_IP
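To find the address to use for container_IP, you can ask lxc list for just the name and IPv4 columns; the address shown below is a placeholder:
$ lxc list c1 -c n4          # show only the name and IPv4 address columns
$ ssh ubuntu@10.147.98.25    # substitute the address reported for c1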
Alternatively, you can directly open a root session inside a container and get a bash shell with lxc exec, as follows:
$ lxc exec c1 -- bash
See also
- The LXD getting started guide: https://linuxcontainers.org/lxd/getting-started-cli/
- The Ubuntu Server guide for LXC: https://help.ubuntu.com/lts/serverguide/lxd.html
- Container images are created using tools such as debootstrap, which you can read more about at https://wiki.debian.org/Debootstrap
- Creating LXC templates from scratch: http://wiki.pcprobleemloos.nl/using_lxc_linux_containers_on_debian_squeeze/creating_a_lxc_virtual_machine_template
Managing LXD containers
We have installed LXD and deployed our first container with it. In this recipe, we will learn various LXD commands that manage the container lifecycle.
Getting ready…
Make sure that you have followed the previous recipes and created your first container.
How to do it…
Follow these steps to manage LXD containers:
- Before we start with container management, we will need a running container. If you have been following the previous recipes, you should already have a brand new container running on your system. If your container is not already running, you can start it with the lxc start command:
$ lxc start c1
- To check the current state of a container, use lxc list, as follows:
$ lxc list c1
This command should list only containers that have c1 in their name.
- You can also set the container to start automatically. Set the boot.autostart configuration option to true and your container will start automatically on system boot. Additionally, you can specify a delay before autostart and a priority in the autostart list (see the examples after this list):
$ lxc config set c1 boot.autostart true
- Once your container is running, you can open a bash session inside it using the lxc exec command:
$ lxc exec c1 -- bash
root@c1:~# hostname
c1
This should give you a root shell inside the container. Note that to use bash, your container image should have the bash shell installed in it. With Alpine containers, you need to use sh as the shell, as Alpine does not include bash.
- LXD provides the option to pause a container when it's not being actively used. A paused container will still hold the memory and other resources assigned to it, but will not receive any CPU cycles:
$ lxc pause c1
- Containers that are paused can be started again with lxc start.
- You can also restart a container with the lxc restart command, with the option to perform a stateful or stateless restart:
$ lxc restart --stateless c1
- Once you are done working with the container, you can stop it with the lxc stop command. This will release all the resources attached to that container:
$ lxc stop c1
At this point, if your container is an ephemeral container, it will be deleted automatically.
- If the container is no longer required, you can explicitly delete it with the lxc delete command:
$ lxc delete c1
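Two related options are worth knowing, shown here as a sketch: the autostart delay and priority mentioned above have their own configuration keys, and lxc delete accepts a --force flag to stop and delete a running container in one step:
$ lxc config set c1 boot.autostart.delay 5      # wait 5 seconds after starting this container
$ lxc config set c1 boot.autostart.priority 10  # start higher-priority containers first
$ lxc delete c1 --force                         # stop and delete a running container in one step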
There's more…
If you prefer not to work with command-line tools, you can use a web-based management console known as LXD GUI. This package is still in beta but can be used on your local LXD deployments. It is available on GitHub at https://github.com/dobin/lxd-webgui.
See also
- Get more details about LXD at https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
- LXC web panel: https://lxc-webpanel.github.io/install.html
Managing LXD containers – advanced options
In this recipe, we will learn about some advanced options provided by LXD.
How to do it…
Follow these steps to work with the advanced options:
- Sometimes, you may need to clone a container and have it running as a separate system. LXD provides the copy command to create such clones:
$ lxc copy c1 c2    # lxc copy source destination
You can also create a temporary copy with the --ephemeral flag, and it will be deleted after one use.
- Similarly, you can create a container, configure it as per your requirements, store it as an image, and use that image to create more containers. The lxc publish command allows you to export an existing container as a new image. The resulting image will contain all modifications from the original container:
$ lxc publish c1 --alias nginx    # after installing nginx
The container to be published should be in the stopped state. Alternatively, you can use the --force flag to publish a running container, which will internally stop the container before exporting it.
- You can also move an entire container from one system to another; the move command helps with moving containers across hosts (see the sketch after this list). If you move a container on the same host, the original container will be renamed. Note that the container to be renamed must not be running:
$ lxc move c1 c2    # container c1 will be renamed to c2
- Finally, we have the snapshot and restore functionality. You can create snapshots of a container or, in simple terms, take a backup of its current state. A snapshot can be a stateful snapshot that also stores the container's memory state. Use the following command to create a snapshot of your container:
$ lxc snapshot c1 snap1    # lxc snapshot container snapshot
- The lxc list command will show you the number of snapshots for a given container. To get the details of every snapshot, check the container information with the lxc info command:
$ lxc info c1
...
Snapshots:
  c1/snap1 (taken at 2016/05/22 10:34 UTC) (stateless)
Tip
You can skip the snapshot name and LXD will name it for you. However, as of this writing, there is no option to attach a description to a snapshot. You can use the snapshot name to describe the purpose of each snapshot.
- Once you have snapshots created, you can restore one to go back to that point, or create new containers out of your snapshots and keep both states. To restore a snapshot, use lxc restore, as follows:
$ lxc restore c1 snap1    # lxc restore container snapshot
- To create a new container out of your snapshot, use lxc copy, as follows:
$ lxc copy c1/snap1 c4    # lxc copy container/snapshot new_container
- When you no longer need a snapshot, delete it with lxc delete, as follows:
$ lxc delete c1/snap1    # lxc delete container/snapshot
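Moving a container to a different host requires both LXD daemons to be reachable over the network. The following is a minimal sketch, assuming a second host at 192.168.1.50 with LXD installed; the remote name host2, the trust password, and the address are all placeholders, and the container should be stopped before the move:
$ # on the remote host: expose the LXD API and set a trust password
$ lxc config set core.https_address "[::]:8443"
$ lxc config set core.trust_password secret
$ # on the local host: register the remote, then move the container
$ lxc remote add host2 192.168.1.50   # prompts to confirm and authenticate
$ lxc move c1 host2:c1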
How it works…
Most of these commands work with the rootfs, or root filesystem, of a container. The rootfs is stored under the /var/lib/lxd/containers directory. Copying creates a copy of the rootfs, while deleting removes the rootfs for a given container. These commands benefit from the use of the ZFS filesystem. Features such as copy-on-write speed up the copy and snapshot operations while reducing the total disk space used.
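You can see this layout for yourself on the host. The paths below reflect a stock LXD 2.0 install using the directory backend; with ZFS, the same data lives in datasets instead. A quick look, assuming a container named c1 with a snapshot snap1:
$ sudo ls /var/lib/lxd/containers/            # one directory per container
$ sudo ls /var/lib/lxd/containers/c1/rootfs   # the container's root filesystem
$ sudo ls /var/lib/lxd/snapshots/c1           # snapshots, for example snap1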
Setting resource limits on LXD containers
In this recipe, we will learn to set resource limits on containers. LXD uses the cgroups feature in the Linux kernel to manage resource allocation and limits. Limits can be applied to a single container through configuration or set in a profile, applying limits to a group of containers at once. Limits can be dynamically updated even when the container is running.
How to do it…
We will create a new profile and configure various resource limits in it. Once the profile is ready, we can use it with any number of containers. Follow these steps:
- Create a new profile with the following command:
$ lxc profile create cookbook
Profile cookbook created
- Next, edit the profile with lxc profile edit. This will open a text editor with a default profile structure in YAML format:
$ lxc profile edit cookbook
Add the following details to the profile. Feel free to select any parameters and change their values as required:
name: cookbook
config:
  boot.autostart: "true"
  limits.cpu: "1"
  limits.cpu.priority: "10"
  limits.disk.priority: "10"
  limits.memory: 128MB
  limits.processes: "100"
description: A profile for Ubuntu Cookbook Containers
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
Save your changes to the profile and exit the text editor.
- Optionally, you can check the created profile, as follows:
$ lxc profile show cookbook
- Now our profile is ready and can be used with a container to set limits. Create a new container using the profile:
$ lxc launch ubuntu:xenial c4 -p cookbook
- This should create and start a new container with the cookbook profile applied to it. You can check the profiles in use with the lxc info command:
$ lxc info c4
- Check the memory limit applied to container c4 (more checks follow these steps):
$ lxc exec c4 -- free -m
- Profiles can be updated even while they are in use. All containers using the profile will receive the respective changes, or return a failure message. Update your profile as follows:
$ lxc profile set cookbook limits.memory 256MB
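To confirm that the other limits from the profile took effect, you can query them from inside the container. A quick sketch, assuming the cookbook profile above is applied to c4; nproc should report the single CPU allowed by limits.cpu:
$ lxc exec c4 -- nproc        # should print 1 with limits.cpu: "1"
$ lxc exec c4 -- free -m      # total memory should match limits.memory
$ lxc profile get cookbook limits.processes   # read a limit back from the profile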
How it works…
LXD provides multiple options for setting resource limits on containers. You can apply limits using profiles or configure each container separately with the lxc config command. The advantage of creating profiles is that you can define various parameters in one central place and apply all of them to multiple containers at once. A container can have multiple profiles applied and can also have configuration parameters set explicitly. Overlapping parameters take their value from the last applied profile, and parameters set explicitly using lxc config override any values set by profiles.
The LXD installation ships with two preconfigured profiles. One is default, which is applied to all containers that do not receive any other profile; it contains a network device for the container. The other profile, named docker, configures the required kernel modules to run Docker inside a container. You can view the parameters of any profile with the lxc profile show profile_name command.
In the previous example, we used the edit option to edit the profile and set multiple parameters at once. You can also set each parameter separately or update the profile with the set option:
$ lxc profile set cookbook limits.memory 256MB
Similarly, use the get option to read any single parameter from a profile:
$ lxc profile get cookbook limits.memory
Profiles can also be applied to a running container with lxc profile apply. The following command will apply two profiles, default and cookbook, to an existing container, c6:
$ lxc profile apply c6 default,cookbook
Tip
We could have skipped the network configuration in the cookbook profile and had our containers use the default profile along with cookbook to combine both configurations.
Updating a profile updates the configuration of all containers using that profile. To modify a single container, you can use lxc config set or pass parameters directly to a new container using the -c flag:
$ lxc launch ubuntu:xenial c7 -c limits.memory=64MB
Similar to lxc profile, you can use the edit option with lxc config to modify multiple parameters at once. The same command can also be used to configure or read server parameters; when used without a container name, the command applies to the LXD daemon.
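The following short sketch contrasts the per-container and server-level forms of lxc config; the limits.memory value is just an example, and core.https_address is the server key LXD uses for its network listener:
$ lxc config set c7 limits.memory 128MB    # per-container override
$ lxc config get c7 limits.memory          # read it back
$ lxc config show c7                       # full container configuration
$ lxc config get core.https_address        # no container name: queries the daemon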
There's more…
The lxc profile and lxc config commands can also be used to attach local devices to containers. Both commands provide options for working with various device types, including network interfaces, disks, and so on. The simplest example is to share a local directory with a container, as follows:
$ lxc config device add c1 share disk \
    source=/home/ubuntu path=/home/ubuntu/shared
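Devices added this way can be listed and detached again with the device subcommands; a brief sketch using the share device created above:
$ lxc config device list c1           # devices attached to c1
$ lxc config device remove c1 share   # detach the shared directory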
See also
- Read more about setting resource limits at https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412
- For more details about LXC configuration, check the help menu for the lxc profile and lxc config commands, as follows:
$ lxc config --help
Networking with LXD
In this recipe, we will look at the LXD network setup. By default, LXD creates an internal bridge network. Containers are set to access the Internet through Network Address Translation (NAT) but are not accessible from the Internet. We will learn to open a service on a container to the Internet, share a physical network with the host, and assign a static IP address to a container.
Getting ready
As always, you will need access to the root account or an account with sudo privileges.
Make sure that you have created at least one container.
How to do it…
By default, LXD sets up a NAT network for containers. This is a private network attached to the lxdbr0 bridge on the host system. With this setup, containers get access to the Internet, but the containers themselves, and the services running in them, are not accessible from an outside network. To open a container to an external network, you can either set up port forwarding or use a bridge to attach the container directly to the host's network:
- To set up port forwarding, use the iptables command, as follows (see the rule-management sketch after these steps):
$ sudo iptables -t nat -A PREROUTING -p tcp -i eth0 \
    --dport 80 -j DNAT --to 10.106.147.244:80
This will forward any traffic on the host's TCP port 80 to TCP port 80 of the container with the IP 10.106.147.244. Make sure that you change the port and IP address as required.
- You can also set up a bridge that connects all containers directly to your local network. The bridge will use an Ethernet port to connect to the local network. To set up a bridged network with the host, we first need to create a bridge on the host and then configure the containers to use that bridge adapter.
To set up a bridge on the host, open the /etc/network/interfaces file and add the following lines:
auto br0
iface br0 inet dhcp
    bridge_ports eth0
Make sure that you replace eth0 with the name of the interface connected to the external network.
- Enable IP forwarding under sysctl. Find the following line in /etc/sysctl.conf and uncomment it:
net.ipv4.ip_forward=1
- Start the new bridge interface with the ifup command:
$ sudo ifup br0
Note
Note that if you are connected to the server over SSH, your connection will break. Make sure you have a snapshot of the working state before changing your network configuration.
- If required, you can restart the networking service, as follows:
$ sudo service networking restart
- Next, we need to update the LXD configuration to use our new bridge interface. Reconfigure the LXD daemon and choose <No> when asked to create a new bridge:
$ sudo dpkg-reconfigure -p medium lxd
- Then, on the next page, choose <Yes> to use an existing bridge.
- Enter the name of the newly created bridge interface. This should configure LXD to use our own bridge network and skip the internal bridge. You can check the new configuration under the default profile:
$ lxc profile show default
- Now, start a new container. It should receive its IP address from the router on your local network. Make sure that your local network has DHCP configured.
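Port-forwarding rules added with iptables are easy to review and remove. This is a small sketch; the rule number shown by --line-numbers is whatever position your DNAT rule happens to occupy:
$ sudo iptables -t nat -L PREROUTING -n --line-numbers   # list NAT rules
$ sudo iptables -t nat -D PREROUTING 1                   # delete rule number 1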
How it works…
By default, LXD sets up a private network for all containers. A separate bridge, lxdbr0, is set up and configured in the default profile. This network is shared (NAT) with the host system, and containers can access the Internet through it. In the previous example, we used iptables port forwarding to make the container's port 80 available on the external network. This way, containers still use the same private network, and a single application is exposed to the external network through the host system. All incoming traffic on host port 80 will be directed to the container's port 80.
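Note that rules added with iptables do not survive a reboot on their own. One common approach on Ubuntu, sketched below, is the iptables-persistent package, which saves the current rules and restores them at boot:
$ sudo apt-get install iptables-persistent   # offers to save current rules
$ sudo netfilter-persistent save             # save rules added later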
You can also set up your own bridge connected to the physical network. With this bridge, all your containers connect directly to your local network and are accessible from it. Your local DHCP server will be used to assign IP addresses to containers. Once you create a bridge, you need to configure containers to use it, either through profiles or through individual container configuration. In the previous example, we reconfigured the LXD network to use the new bridge.
Tip
If you are using virtual machines for hosting containers and want to set up a bridge, then make sure that you have enabled promiscuous mode on the network adapter of the virtual machine. This can be enabled from the network settings of your hypervisor. Also, a bridge setup may not work if your physical machine is using a wireless network.
LXD supports more advanced network configurations, such as attaching a host Ethernet interface directly to a container. The following setting in the container configuration sets the network type to physical and uses the host's eth0 directly inside the container. The eth0 interface will be unavailable to the host system for as long as the container is running:
$ lxc config device add c1 eth0 nic nictype=physical parent=eth0
There's more…
LXD creates a default bridge with the name lxdbr0. The configuration file for this bridge is located at /etc/default/lxd-bridge. This file contains various configuration parameters, such as the address range for the bridge, the default domain, and the bridge name. An interesting parameter is the path to an additional configuration file for dnsmasq.
The LXD bridge internally uses dnsmasq for DHCP allocation. The additional configuration file can be used to set up various dnsmasq settings, such as address reservation and name resolution for containers.
Edit /etc/default/lxd-bridge to point to the dnsmasq configuration file:
# Path to an extra dnsmasq configuration file
LXD_CONFILE="/etc/default/dnsmasq.conf"
Then, create a new configuration file called /etc/default/dnsmasq.conf with the following contents:
dhcp-host=c5,10.71.225.100
server=/lxd/10.71.225.1
#interface=lxdbr0
This will reserve the IP 10.71.225.100 for the container called c5, and you can also ping containers by name under the lxd domain, as follows:
$ ping c5.lxd
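For the reservation to take effect, the bridge's dnsmasq instance has to re-read its configuration and the container has to renew its lease. A rough sketch; the lxd-bridge service name matches the Ubuntu 16.04 packaging and may differ on other versions:
$ sudo service lxd-bridge restart   # reload the bridge and dnsmasq settings
$ lxc restart c5                    # renew the container's DHCP lease
$ lxc list c5                       # should now show 10.71.225.100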
See also
- Read more about bridge configuration at https://wiki.debian.org/LXC/SimpleBridge
- Find out more about LXD bridge at the following links:
- Read more about dnsmasq at https://wiki.debian.org/HowTo/dnsmasq
- Sample dnsmasq configuration file: http://oss.segetech.com/intra/srv/dnsmasq.conf
- Check the dnsmasq manual pages with the man dnsmasq command
Installing Docker
In the last few recipes, we learned about LXD, an operating system container service. Now, we will look at a hot new technology called Docker. Docker is an application container designed to package and run a single service. It enables developers to enclose an app, with all its dependencies, in an isolated container environment. Docker helps developers create a reproducible environment with a simple configuration file called a Dockerfile. It also provides portability: by sharing the Dockerfile, developers can be sure that their setup will work the same on any system with the Docker runtime.
Docker is very similar to LXC. Its development started as a wrapper around the LXC API to help DevOps take advantage of containerization. It added some restrictions so that a container runs only a single process, unlike the whole operating system run by LXC. In subsequent versions, Docker moved away from LXC and started working on a new standard library for application containers, known as libcontainer.
It still uses the same base technologies, such as Linux namespaces and control groups, and shares the same kernel with the host operating system. Similarly, Docker makes use of operating system images to run containers. Docker images are a collection of multiple layers, with each layer adding something new to the base layer. This something new can include a service, such as a web server, application code, or even a new set of configurations. Each layer is independent of the layers above it and can be reused to create a new image.
Being an application container, Docker encourages the use of a microservice-based distributed architecture. Think of deploying a simple WordPress blog. With Docker, you will need to create at least two different containers, one for the MySQL server and the other for the WordPress code with PHP and the web server. You can separate PHP and web servers in their own containers. While this looks like extra effort, it makes your application much more flexible. It enables you to scale each component separately and improves application availability by separating failure points.
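As a preview of the recipes later in this chapter, such a two-container WordPress setup can be sketched with the official images; the names, password, and port mapping here are illustrative only:
$ # database container (the wordpress image looks for a linked "mysql")
$ docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
$ # wordpress container linked to the database, published on host port 8080
$ docker run -d --name blog --link db:mysql -p 8080:80 wordpress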
While both LXC and Docker use containerization technologies, their use cases are different. LXC enables you to run an entire lightweight virtual machine in a container, eliminating the inefficiencies of virtualization. Docker enables you to quickly create and share a self-dependent package with your application, which can be deployed on any system running Docker.
In this recipe, we will cover the installation of Docker on Ubuntu Server. The recipes after that will focus on various features provided by Docker.
Getting ready
You will need access to the root account or an account with sudo privileges.
How to do it…
Recently, Docker released version 1.11 of the Docker engine. We will follow the installation steps provided on the Docker site to install the latest available version:
- First, add a new GPG key:
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
    --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
- Next, add a new repository to the local installation sources. This repository is maintained by Docker and contains Docker packages for version 1.7.1 and higher:
$ echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | \
    sudo tee /etc/apt/sources.list.d/docker.list
Note
If you are using an Ubuntu version other than 16.04 (Xenial), make sure that you replace the repository path with the respective codename. For example, on Ubuntu 14.04 (Trusty), use the following repository:
deb https://apt.dockerproject.org/repo ubuntu-trusty main
- Next, update the apt package list and install Docker with the following commands:
$ sudo apt-get update
$ sudo apt-get install docker-engine
- Once the installation completes, you can check the status of the Docker service, as follows:
$ sudo service docker status
- Check the installed Docker version with docker version:
$ sudo docker version
Client:
 Version:      1.11.1
 API version:  1.23
...
Server:
 Version:      1.11.1
 API version:  1.23
...
- Download a test container to verify the installation. This container simply prints a welcome message and then exits:
$ sudo docker run hello-world
- At this point, you need to use sudo with every Docker command. To enable a non-root user to use Docker, or simply to avoid the repeated use of sudo, add the respective usernames to the docker group:
$ sudo gpasswd -a ubuntu docker
Note
The docker group has privileges equivalent to the root account. Check the official Docker installation documentation for more details.
Now, update your group membership, and you can use Docker without the sudo command:
$ newgrp docker
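Once the group change is active, a couple of read-only commands make a quick sanity check; docker info in particular reports the storage driver and kernel version the daemon is using:
$ docker info       # daemon details: storage driver, kernel version, and more
$ docker version    # client and server versions, without sudo this time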
How it works…
This recipe installs Docker from the official Docker repository. This way, we can be sure to get the latest version. The Ubuntu 16.04 repository also contains a Docker package, at version 1.10. If you prefer to install from the Ubuntu repository, it's an even easier task with a single command, as follows:
$ sudo apt-get install docker.io
As of this writing, Docker 1.11 is the latest stable release and the first release built on Open Container Initiative standards. This version is built on runc and containerd.
There's more…
Docker provides a quick installation script, which can be used to install Docker with a single command. This script reads the basic details of your operating system, such as the distribution and version, and then executes all the required steps to install Docker. You can use the bootstrap script as follows:
$ curl -sSL https://get.docker.com | sudo sh
Note that with this command, the script is executed with sudo privileges. Make sure you cross-check the script's contents before executing it. You can download the script without executing it, as follows:
$ curl -sSL https://get.docker.com -o docker_install.sh
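After downloading, you can review the script and then run it yourself, for example:
$ less docker_install.sh      # inspect what the script will do
$ sudo sh docker_install.sh   # run it once you are satisfied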
See also
- The Docker installation guide: http://docs.docker.com/installation/ubuntulinux/
- Operating system containers versus application containers: https://blog.risingstack.com/operating-system-containers-vs-application-containers/
- What Docker adds to lxc-tools: http://stackoverflow.com/questions/17989306/what-does-docker-add-to-lxc-tools-the-userspace-lxc-tools
- A curated list of Docker resources: https://github.com/veggiemonk/awesome-docker
Starting and managing Docker containers
So, we have installed the latest Docker binary. In this recipe, we will start a new container with Docker and look at the basic commands for starting and managing containers.
Getting ready
Make sure that you have installed Docker and set your user as a member of the docker group.
You may need sudo privileges for some commands.
How to do it…
Let's create a new Docker container and start it. With Docker, you can quickly start a container with the docker run command:
- Start a new Docker container with the following command:
$ docker run -it --name dc1 ubuntu /bin/bash
Unable to find image 'ubuntu:trusty' locally
trusty: Pulling from library/ubuntu
6599cadaf950: Pull complete
23eda618d451: Pull complete
...
Status: Downloaded newer image for ubuntu:trusty
root@bd8c99397e52:/#
Once the container has started, it drops you into a new shell running inside it. From here, you can execute a limited set of Ubuntu or general Linux commands, which will be executed inside the container.
- When you are done with the container, you can exit from the shell by typing exit or pressing Ctrl + D. This will terminate your shell and stop the container as well.
- Use the docker ps command to list all the containers and check the status of your last container:
$ docker ps -a
By default, docker ps lists only the running containers. As our container is no longer running, we need to use the -a flag to list all available containers.
- To start the container again, you can use the docker start command. You can use the container name or ID to specify the container to be started:
$ docker start -ia dc1
The -i flag will start the container in interactive mode and the -a flag will attach to a terminal inside the container. To start a container in detached mode, use the start command without any flags. This will start the container in the background and return to the host shell:
$ docker start dc1
- You can open a terminal inside a detached container with docker attach:
$ docker attach dc1
- Now, to detach from the terminal and keep the container running, press Ctrl + P followed by Ctrl + Q. Alternatively, you can type exit or press Ctrl + C to exit the terminal and stop the container.
- To get all the details of a container, use the docker inspect command with the name or ID of the container:
$ docker inspect dc1 | less
This command will list all the details of the container, including container status, network status and address, and container configuration files.
Tip
Use grep to filter the container information. For example, to get the IP address from the docker inspect output, use this:
$ docker inspect dc1 | grep -i ipaddr
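As an alternative to grep, docker inspect also accepts Go templates through its --format flag. A minimal sketch; the template path assumes the container is on the default bridge network:
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' dc1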
- To execute a command inside a container, use docker exec. For example, the following command gets the environment variables from the dc1 container:
$ docker exec dc1 env
This one gets the IP address of a container:
$ docker exec dc1 ifconfig
- To get the processes running inside a container, use the docker top command:
$ docker top dc1
- Finally, to stop the container, use docker stop, which will gracefully stop the container after stopping the processes running inside it:
$ docker stop dc1
- When you no longer need the container, you can use docker rm to remove/delete it:
$ docker rm dc1
Tip
Want to remove all stopped containers with a single command? Use this:
$ docker rm $(docker ps -aq)
How it works…
We started our first Docker container with the docker run command. With this command, we instructed the Docker daemon to start a new container from the image named ubuntu, start an interactive session (-i), and allocate a terminal (-t). We also elected to name our container with the --name flag and to execute the /bin/bash command inside the container once it started.
The Docker daemon will search for Ubuntu images in the local cache or download the image from Docker Hub if the specified image is not available in the local cache. Docker Hub is a central Docker image repository. It will take some time to download and extract all the layers of the images. Docker maintains container images in the form of multiple layers. These layers can be shared across multiple container images. For example, if you have Ubuntu running on a server and you need to download the Apache container based on Ubuntu, Docker will only download the additional layer for Apache as it already has Ubuntu in the local cache, which can be reused.
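You can see these layers for yourself with the docker history command, which lists the layers of a local image along with the instruction that created each one; a quick sketch (the output shown here is illustrative):
$ docker history ubuntu
IMAGE          CREATED       CREATED BY                               SIZE
6599cadaf950   2 weeks ago   /bin/sh -c #(nop) CMD ["/bin/bash"]      0 B
...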
Docker provides various other commands to manage containers and images. We have already used a few of them in the previous example. You can get the full list of all available commands from the command prompt itself, by typing docker followed by the Enter key. All commands are listed with their basic descriptions. To get more details on any specific subcommand, use its --help option, as follows:
$ docker rmi --help
There's more…
Docker images can be used to quickly create runc containers, as follows:
$ sudo apt-get install runc
$ mkdir -p runc/rootfs && cd runc
$ docker run --name alpine alpine sh
$ docker export alpine > alpine.tar
$ tar -xf alpine.tar -C rootfs
$ runc spec
$ sudo runc start alpine
See also
- Docker run documentation: http://docs.docker.com/engine/reference/commandline/run/
- Check manual entries for any Docker command:
$ man docker create
Creating images with a Dockerfile
This recipe explores image creation with Dockerfiles. Docker images can be created in multiple ways, including using Dockerfiles, using docker commit to save the container state as a new image, or using docker import, which imports a chroot directory structure as a Docker image.
In this recipe, we will focus on Dockerfiles and related details. Dockerfiles help in automating identical and repeatable image creation. They contain multiple commands, in the form of instructions, to build a new image. These instructions are then passed to the Docker daemon through the docker build command. The Docker daemon executes these instructions one by one, committing the results as and when necessary, so it is possible that multiple intermediate images are created. The build process will reuse existing images from the image cache to speed up the build.
Getting ready
Make sure that your Docker daemon is installed and working properly.
How to do it…
- First, create a new empty directory and enter it. This directory will hold our Dockerfile:
$ mkdir myimage
$ cd myimage
- Create a new file called Dockerfile:
$ touch Dockerfile
- Now, add the following lines to the newly created file. These lines are the instructions to create an image with the Apache web server. We will look at more details later in this recipe:
FROM ubuntu:trusty
MAINTAINER ubuntu server cookbook
# Install base packages
RUN apt-get update && apt-get -yq install apache2 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_LOCK_DIR /var/lock/apache2
VOLUME ["/var/www/html"]
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
- Save the changes and start the docker build process with the following command:
$ docker build .
This will build a new image with the Apache server installed on it. The build process will take a little while to complete and will output the final image ID.
- Once the image is ready, you can start a new container with it:
$ docker run -p 80:80 -d image_id
Replace image_id with the image ID from the result of the build process.
- Now, you can list the running containers with the docker ps command. Notice the ports column of the output:
$ docker ps
Apache server's default page should be accessible at your host domain name or IP address.
How it works…
A Dockerfile is a document that contains several commands to create a new image. Each command in a Dockerfile creates a new container, executes that command on the new container, and then commits the changes to create a new image. This image is then used as a base for executing the next command. Once the final command is executed, Docker returns the ID of the final image as an output of the docker build command.
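Instead of copying image IDs from the build output, you can tag the image at build time with the -t flag; a minimal sketch, where the tag name is arbitrary:
$ docker build -t myimage/apache .
$ docker run -p 80:80 -d myimage/apache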
This recipe demonstrates the use of a Dockerfile to create images with the Apache web server. The Dockerfile uses a few of the available instructions. As a convention, the instructions file is generally called Dockerfile. Alternatively, you can use the -f flag to pass the instruction file to the Docker daemon. A Dockerfile uses the following format for instructions:
# comment
INSTRUCTION argument
All instructions are executed one by one, in the given order. A Dockerfile must start with the FROM instruction, which specifies the base image to be used. We have started our Dockerfile with ubuntu:trusty as the base image. The next line specifies the maintainer, or author, of the Dockerfile with the MAINTAINER instruction.
After the author definition, we have used the RUN instruction to install Apache on our base image. The RUN instruction will execute a given command on the top read-write layer and then commit the results. The committed image will be used as a starting point for the next instruction. If you look at the RUN instruction and the arguments passed to it, you can see that we have passed multiple commands in a chained format. This executes all the commands against a single image and avoids any cache-related problems. The apt-get clean and rm commands are used to remove any unused files and minimize the resulting image size.
After the RUN command, we have set some environment variables with the ENV instruction. When we start a new container from this image, all the environment variables are exported to the container environment and will be accessible to processes running inside the container. In this case, the process that will use such variables is the Apache server.
Next, we have used the VOLUME instruction with the path set to /var/www/html. This instruction creates a directory on the host system, generally under the Docker root, and mounts it inside the container on the specified path. Docker uses volumes to decouple containers from the data they create. So even if the container using this volume is removed, the data will persist on the host system. You can specify volumes in a Dockerfile or on the command line while running the container, as follows:
$ docker run -v /var/www/html image_id
You can use docker inspect to get the host path of the volumes attached to a container.
Finally, we have used the EXPOSE instruction, which will expose the specified container port to the host. In this case, it's port 80, where the Apache server will be listening for web requests. To use an exposed port on the host system, we need to use either the -p flag to explicitly specify the port mapping or the -P flag, which will dynamically map the container port to an available host port. We have used the -p flag with the argument 80:80, which will map container port 80 to host port 80 and make Apache accessible through the host.
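When the -P flag is used, the host port is picked dynamically; you can look up the resulting mapping with the docker port command. A quick sketch (the container name and the mapped port shown are illustrative):
$ docker run -P -d --name web1 image_id
$ docker port web1
80/tcp -> 0.0.0.0:32768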
The last instruction, CMD, sets the command to be executed when running the image. We are using the executable format of the CMD instruction, which specifies the executable to be run along with its command-line arguments. In this case, our executable is the Apache binary with -D FOREGROUND as an argument. By default, the Apache parent process will start, create a child process, and then exit. If the Apache process exits, our container will be turned off, as it no longer has a running process. With the -D FOREGROUND argument, we instruct Apache to run in the foreground and keep the parent process active. We can have only one CMD instruction in a Dockerfile.
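Note that CMD only provides a default; any command passed after the image name on docker run replaces it. A minimal sketch (this starts a shell instead of Apache, so the web server will not run in that container):
$ docker run -it image_id /bin/bash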
The instruction set includes some more instructions, such as ADD, COPY, and ENTRYPOINT. I cannot cover them all here, as that would run into far too many pages. You can always refer to the official Docker site for more details; check out the reference URLs in the See also section.
There's more…
Once the image has been created, you can share it on Docker Hub, a central repository of public and private Docker images. You need an account on Docker Hub, which can be created for free. Once you get your Docker Hub credentials, you can use docker login to connect your Docker daemon with Docker Hub and then use docker push to push local images to the Docker Hub repository. You can use the respective help commands or manual pages to get more details about docker login and docker push.
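A short sketch of that workflow; the repository name username/apache-demo is a placeholder for your own Docker Hub namespace:
$ docker login
$ docker tag image_id username/apache-demo:latest
$ docker push username/apache-demo:latest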
Alternatively, you can also set up your own local image repository. Check out the Docker documents for deploying your own registry at https://docs.docker.com/registry/deploying/.
Note
GitLab, an open source Git hosting server, now supports container repositories. This feature was added in GitLab version 8.8. Refer to Chapter 11, Git Hosting, for more details and installation instructions for GitLab.
We need a base image or any other image as a starting point for the Dockerfile. But how do we create our own base image?
Base images can be created with tools such as debootstrap and supermin. We need to create a distribution-specific directory structure and put all the necessary files inside it. Later, we can create a tarball of this directory structure and import the tarball as a Docker image using the docker import command.
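As a rough sketch of that process with debootstrap (the release name trusty and the image name myubuntu are placeholders; see the base image guide in the See also section for details):
$ sudo debootstrap trusty ./rootfs
$ sudo tar -C rootfs -c . | docker import - myubuntu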
See also
- Dockerfile reference: https://docs.docker.com/reference/builder/
- Dockerfile best practices: https://docs.docker.com/articles/dockerfile_best-practices
- More Dockerfile best practices: http://crosbymichael.com/dockerfile-best-practices.html
- Create a base image: http://docs.docker.com/engine/articles/baseimages/
Understanding Docker volumes
One of the most common questions seen on Docker forums is how to separate data from containers. This is because any data created inside a container is lost when the container gets deleted. Using docker commit to store data inside Docker images is not a good idea. To solve this problem, Docker provides an option called data volumes. Data volumes are special shared directories that can be used by one or more Docker containers. These volumes persist even when the container is deleted. These directories are created on the host filesystem, usually under the /var/lib/docker/ directory.
In this recipe, we will learn to use Docker volumes, share host directories with Docker containers, and learn basic backup and restore tricks that can be used with containers.
Getting ready
Make sure that you have the Docker daemon installed and running. We will need two or more containers.
You may need sudo privileges to access the /var/lib/docker directory.
How to do it…
Follow these steps to understand Docker volumes:
- To add a data volume to a container, use the -v flag with the docker run command, like so:
$ docker run -dP -v /var/lib/mysql --name mysql \
  -e MYSQL_ROOT_PASSWORD=passwd mysql:latest
This will create a new MySQL container with a volume created at /var/lib/mysql inside the container. If the directory already exists on the volume path, the volume will overlay the directory contents.
- Once the container has been started, you can get the host-specific path of the volume with the docker inspect command. Look for the Mounts section in the output of docker inspect:
$ docker inspect mysql
- To mount a specific directory from the host system as a data volume, use the following syntax:
$ mkdir ~/mysql
$ docker run -dP -v ~/mysql:/var/lib/mysql \
  --name mysql mysql:latest
This will create a new directory named mysql in your home directory and mount it as a volume inside the container at /var/lib/mysql.
- To share a volume between multiple containers, you can use named volume containers.
First, create a container with a volume attached to it. The following command will create a container with its name set to mysql:
$ docker run -dP -v /var/lib/mysql --name mysql \
  -e MYSQL_ROOT_PASSWORD=passwd mysql:latest
- Now, create a new container using the volume exposed by the mysql container and list all the files available in the volume:
$ docker run --rm --volumes-from mysql ubuntu ls -l /var/lib/mysql
- To back up data from the mysql container, use the following command:
$ docker run --rm --volumes-from mysql -v ~/backup:/backup \
  ubuntu tar cvf /backup/mysql.tar /var/lib/mysql
- Docker volumes are not deleted when containers are removed. To delete volumes along with a container, you need to use the -v flag with the docker rm command:
$ docker rm -v mysql
A combined end-to-end session is sketched below.
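The following sketch combines the steps above on a clean Docker host; the names mysql and ~/backup follow the examples in this recipe:
$ docker run -dP -v /var/lib/mysql --name mysql \
  -e MYSQL_ROOT_PASSWORD=passwd mysql:latest
# extract the host-side path of the volume using a Go template
$ docker inspect --format '{{ json .Mounts }}' mysql
# read the same volume from a throwaway container
$ docker run --rm --volumes-from mysql ubuntu ls -l /var/lib/mysql
# back up the volume into ~/backup on the host
$ mkdir -p ~/backup
$ docker run --rm --volumes-from mysql -v ~/backup:/backup \
  ubuntu tar cvf /backup/mysql.tar /var/lib/mysql
# finally, remove the container together with its volume
$ docker rm -f -v mysql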
How it works…
Docker volumes are designed to provide persistent storage, separate from the containers' life cycles. Even if the container gets deleted, the volume still persists unless you explicitly ask Docker to delete it along with the container. Volumes can be attached while creating a container using the docker create or docker run commands. Both commands support the -v flag, which accepts volume arguments. You can add multiple volumes by repeating the flag. Volumes can also be created in a Dockerfile using the VOLUME instruction.
When the -v flag is followed by a simple directory path, Docker creates a new directory inside the container as a data volume. This data volume will be mapped to a directory on the host filesystem under /var/lib/docker. Docker volumes are read-write by default, but you can mark a volume read-only by appending :ro to the volume argument:
$ docker run -dP -v /var/lib/mysql:ro --name mysql mysql:latest
Once a container has been created, you can get the details of all the volumes used by it, as well as their host-specific paths, with the docker inspect command. The Mounts section in the output of docker inspect lists each volume with its name, its path on the host system, and its path inside the container.
Rather than using a random location as a data volume, you can also specify a particular directory on the host to be used as a data volume. Add a host directory along with the volume argument, and Docker will map the volume to that directory:
$ docker run -dP -v ~/mysql:/var/lib/mysql \
  --name mysql mysql:latest
In this case, /var/lib/mysql from the container will be mapped to the mysql directory in the user's home directory.
Need to share a single file from the host system with a container? Docker supports that too. Use docker run -v and specify the file's source path on the host and its destination inside the container. Check out the following example command:
$ docker run --rm -v ~/.bash_history:/.bash_history ubuntu
The other option is to create a named data volume container, or data-only container. You can create a named container with attached volumes and then use those volumes inside other containers with the docker run --volumes-from command. The data volume container need not be running for its volumes to be accessible. These volumes can be shared by multiple containers, and separating persistent data storage lets you treat application containers as temporary and throwaway. Even if you delete a temporary container that uses a named volume, your data is still safe in the volume container.
From Docker version 1.9 onwards, a separate command, docker volume, is available to manage volumes. With this update, you can create and manage volumes separately from containers. Docker volumes support various backend drivers, including AUFS, OverlayFS, Btrfs, and ZFS. Creating and using a new volume is as simple as this:
$ docker volume create --name=myvolume
$ docker run -v myvolume:/opt alpine sh
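A few companion subcommands round out the docker volume workflow; these are standard subcommands of the docker volume group introduced in version 1.9:
$ docker volume ls                 # list all volumes on the host
$ docker volume inspect myvolume   # show the driver and host mount point
$ docker volume rm myvolume        # delete a volume that is no longer needed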
See also
- The Docker volumes guide: http://docs.docker.com/engine/userguide/dockervolumes/
- Clean up orphaned volumes with this script: https://github.com/chadoe/docker-cleanup-volumes
Deploying WordPress using a Docker network
In this recipe, we will learn to use a Docker network to set up a WordPress server. We will create two containers, one for MySQL and the other for WordPress. Additionally, we will set up a private network for both MySQL and WordPress.
How to do it…
Let's start by creating a separate network for WordPress and the MySQL containers:
- A new network can be created with the following command:
$ docker network create wpnet
- Check whether the network has been created successfully with docker network ls:
$ docker network ls
- You can get details of the new network with the docker network inspect command:
$ docker network inspect wpnet
- Next, start a new MySQL container and set it to use wpnet:
$ docker run --name mysql -d \
  -e MYSQL_ROOT_PASSWORD=password \
  --net wpnet mysql
- Now, create a container for WordPress. Make sure the WORDPRESS_DB_HOST argument matches the name given to the MySQL container:
$ docker run --name wordpress -d -p 80:80 \
  --net wpnet \
  -e WORDPRESS_DB_HOST=mysql \
  -e WORDPRESS_DB_PASSWORD=password wordpress
- Inspect wpnet again. This time, it should list two containers:
$ docker network inspect wpnet
Now, you can access the WordPress installation at your host's domain name or IP address.
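As an optional check, the list of attached containers can be extracted directly from the inspect output with a Go template; the -f flag is standard docker network inspect syntax, and the network name assumes this recipe:
$ docker network inspect -f '{{ range .Containers }}{{ .Name }} {{ end }}' wpnet
# should print the names mysql and wordpress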
How it works…
Docker introduced the container networking model (CNM) with Docker version 1.9. CNM enables users to create small, private networks for groups of containers. You can now set up a new software-defined network with a simple docker network create command. Docker networking supports bridge and overlay drivers out of the box; plugins can add other network drivers. The bridge driver is the default for new Docker networks. It provides a network similar to the default Docker bridge, whereas an overlay network enables multihost networking for Docker clusters.
This recipe covers the use of a bridge network for the wordpress containers. We created a simple, isolated bridge network using the docker network create command. Once the network has been created, you can set containers to use it by passing the --net flag to the docker run command. If your containers are already running, you can attach them to the network with the docker network connect command, as follows:
$ # docker network connect network_name container_name
$ docker network connect wpnet mysql
Similarly, you can use docker network disconnect to remove a container from a specific network. Additionally, user-defined networks provide an inbuilt discovery feature: containers on the same network can reach each other by name. We used this feature when connecting the MySQL container to the wordpress container; for the WORDPRESS_DB_HOST parameter, we used the container name rather than an IP address or FQDN.
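A quick way to confirm this name-based discovery is to resolve the container name from a throwaway container on the same network; getent is part of the standard ubuntu image, and the names below assume this recipe:
$ docker run --rm --net wpnet ubuntu getent hosts mysql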
You may have noticed that we did not set up any port mapping for the mysql container. With the wpnet network, no port mapping is needed on the MySQL container: the default MySQL port is exposed by the mysql container, and the service is accessible only to containers running on the wpnet network. The only port open to the outside world is port 80 on the wordpress container. We could easily hide the WordPress service behind a load balancer and run multiple wordpress containers with just the load balancer exposed to the outside world, as sketched below.
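As a sketch of that layout, a second WordPress container can join wpnet without publishing any host ports; the name wordpress2 is an illustrative assumption, and only a load balancer in front of the two containers would expose port 80:
$ docker run --name wordpress2 -d \
  --net wpnet \
  -e WORDPRESS_DB_HOST=mysql \
  -e WORDPRESS_DB_PASSWORD=password wordpress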
There's more…
Docker also supports container links, which create secure communication channels between two or more containers. You can set up a WordPress site using linked containers as follows:
- First, create a mysql container:
$ docker run --name mysql -d \
  -e MYSQL_ROOT_PASSWORD=password mysql
- Now, create a wordpress container and link it with the mysql container:
$ docker run --name wordpress -d -p 80:80 --link mysql:mysql wordpress
And you are done. All arguments for wordpress, such as DB_HOST and ROOT_PASSWORD, will be taken from the linked mysql container.
The other option for setting up WordPress is to run both WordPress and MySQL in a single container. This requires a process management tool such as supervisord to run two or more processes, since Docker starts only a single process per container by default. A sketch of such a configuration follows.
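The following is a minimal, assumed supervisord.conf for illustration only; the exact program commands depend on how MySQL and Apache are installed inside your image:
[supervisord]
nodaemon=true

[program:mysqld]
command=/usr/bin/mysqld_safe

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

In the corresponding Dockerfile, supervisord becomes the single process that Docker starts, for example with CMD ["/usr/bin/supervisord"].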
See also
You can find the respective Dockerfiles for MySQL and WordPress containers at the following addresses:
- Docker Hub WordPress: https://hub.docker.com/_/wordpress/
- Docker Hub MySQL: https://hub.docker.com/_/mysql/
- Docker networking: https://blog.docker.com/2015/11/docker-multi-host-networking-ga/
- Networking for containers using libnetwork: https://github.com/docker/libnetwork
Monitoring Docker containers
In this recipe, we will learn to monitor Docker containers.
How to do it…
Docker provides inbuilt monitoring with the docker stats command, which can be used to get a live stream of the resource utilization of Docker containers.
- To monitor multiple containers at once using their respective IDs or names, use this command:
$ docker stats mysql f9617f4b716c
Tip
If you need to monitor all running containers, use the following command:
$ docker stats $(docker ps -q)
- With docker logs, you can fetch the logs of your application running inside a container. This can be used similarly to the tail -f command:
$ docker logs -f ubuntu
- Docker also records state change events from containers. These events include start, stop, create, kill, and so on. You can get real-time events with docker events:
$ docker events
To get past events, use the --since flag with docker events:
$ docker events --since '2015-11-01'
- You can also check the changes in the container filesystem with the docker diff command. This will list newly added (A), changed (C), or deleted (D) files:
$ docker diff ubuntu
- Another useful command is docker top, which helps you look inside a container. This command displays the processes running inside a container:
$ docker top ubuntu
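These commands also compose well in ad hoc monitoring scripts. The sketch below takes a one-shot snapshot of all running containers and filters past events by type; the --no-stream and --filter flags are assumed to be available in reasonably recent Docker versions:
$ docker stats --no-stream $(docker ps -q)           # single snapshot instead of a live stream
$ docker events --since '2015-11-01' --filter 'event=stop'   # only stop events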
How it works…
Docker provides various inbuilt commands to monitor containers and the processes running inside them. It uses native system constructs such as namespaces and cgroups. Most of these statistics are collected from the native system. Logs are directly collected from running processes.
Need something more, possibly a tool with graphical output? There are various such tools available. One well-known tool is cAdvisor by Google. You can run the tool itself as a Docker container, as follows:
$ docker run -d -p 8080:8080 --name cadvisor \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest
Once the container has been started, you can access the UI at your server's domain name or IP address on port 8080, or whichever host port you mapped. cAdvisor is able to monitor both LXC and Docker containers. In addition, it can report host system resources.
There's more…
Various external tools are available that provide monitoring and troubleshooting services. Sysdig is one such command-line tool that can be used to monitor Linux systems and containers. Read some examples of using sysdig at https://github.com/draios/sysdig/wiki/Sysdig%20Examples.
Also, check out Sysdig Falco, an open source behavioral monitor with container support.
See also
- Docker runtime metrics at http://docs.docker.com/v1.8/articles/runmetrics/
- cAdvisor at GitHub: https://github.com/google/cadvisor
Securing Docker containers
In this recipe, we will cover Docker configuration options that improve the security of your containers. Docker uses some advanced features of the latest Linux kernel, including kernel namespaces to provide process isolation, control groups to control resource allocation, and kernel capabilities and user namespaces to run unprivileged containers. As stated on the Docker documentation page, Docker containers are, by default, quite secure.
This recipe covers some basic steps to improve Docker security and reduce the attack surface on the Ubuntu host as well as the Docker daemon.
How to do it…
The first and most common thing is to use the latest versions of your software. Make sure that you are using the latest Ubuntu version with all security updates applied and that your Docker version is the latest stable version:
- Upgrade your Ubuntu host with the following commands:
$ sudo apt-get update $ sudo apt-get upgrade
- If you used a Docker-maintained repository when installing Docker, you need not care about Docker updates, as the previous commands will update your Docker installation as well.
- Set a proper firewall on your host system. Ubuntu comes preinstalled with UFW; you simply need to add the necessary rules and enable the firewall. Refer to Chapter 2, Networking, for more details on UFW configuration.
- On Ubuntu systems, Docker ships with an AppArmor profile, which is installed and enforced with the Docker installation. Make sure you have AppArmor installed and working properly; it provides better protection against unknown vulnerabilities:
$ sudo apparmor_status
- Next, we will move on to configuring the Docker daemon. You can get a list of all available options with the docker daemon --help command:
$ docker daemon --help
- You can configure these settings in the Docker configuration file at /etc/default/docker, or start the Docker daemon with all required settings from the command line.
- Edit the Docker configuration and add the following settings to the DOCKER_OPTS variable (a combined example is shown after this list):
$ sudo nano /etc/default/docker
- Turn off inter-container communication:
--icc=false
- Set default ulimit restrictions:
--default-ulimit nproc=512:1024 --default-ulimit nofile=50:100
- Set the default storage driver to overlayfs:
--storage-driver=overlay
- Once you have configured all these settings, restart the Docker daemon:
$ sudo service docker restart
- Now, you can use the security bench script provided by Docker. This script checks for common security best practices and gives you a list of all the things that need to be improved.
- Clone the script from the Docker GitHub repository:
$ git clone https://github.com/docker/docker-bench-security.git
- Execute the script:
$ cd docker-bench-security $ sh docker-bench-security.sh
Try to fix the issues reported by this script.
- Now, we will look at Docker container configurations.
The most important part of a Docker container is its image. Make sure that you download or pull the images from a trusted repository. You can get most of the images from the official Docker repository, Docker Hub.
- Alternatively, you can build the images on your own server. Dockerfiles for the most popular images are readily available, and you can build images yourself after verifying their contents and making any changes if required.
When building your own images, add an unprivileged user and avoid running as root:
RUN groupadd -r user && useradd -r -g user user
USER user
- When creating a new container, make sure that you configure CPU and memory limits as per the container's requirements. You can also pass container-specific ulimit settings when creating containers:
$ docker run --cpu-shares 1024 --memory 512m --cpuset-cpus 1
- Whenever possible, set your containers to read-only:
$ docker run --read-only
- Use read-only volumes:
$ docker run -v /shared/path:/container/path:ro ubuntu
- Try not to publish application ports. Use a private Docker network or Docker links when possible. For example, when setting up WordPress in the previous recipe, we used a Docker network and connected WordPress and MySQL without exposing MySQL ports.
- You can also publish ports to a specific container with its IP address. This may create problems when using multiple containers, but is good for a base setup:
$ docker run -p 127.0.0.1:3306:3306 mysql
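Pulling the daemon settings from this list together, the DOCKER_OPTS line in /etc/default/docker could look like the following sketch; treat the exact ulimit values as example assumptions to be tuned for your workload:
# /etc/default/docker -- combined example of the hardening flags above
DOCKER_OPTS="--icc=false \
  --default-ulimit nproc=512:1024 \
  --default-ulimit nofile=50:100 \
  --storage-driver=overlay"
Remember to restart the Docker daemon afterwards, as shown in the steps above.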
See also
- Most of these recommendations are taken from the Docker security cheat sheet at https://github.com/konstruktoid/Docker/blob/master/Security/
- The Docker bench security script: https://github.com/docker/docker-bench-security
- The Docker security documentation: http://docs.docker.com/engine/articles/security/