
Learning Ansible 2: Learn everything you need to manage and handle your systems with ease with Ansible 2 using this comprehensive guide, Second Edition


Chapter 1. Getting Started with Ansible

ICT is often described as a fast-growing industry. I think the best quality of the ICT industry is not related to its ability to grow at a super high speed, but to its ability to revolutionize itself and the rest of the world at an astonishing speed.

Every 10 to 15 years there are major shifts in how this industry works and every shift solves problems that were very hard to manage up to that point, creating new challenges. Also, at every major shift, many best practices of the previous iteration are classified as anti-patterns and new best practices are created. Although it might appear that those changes are impossible to predict, this is not always true. Obviously, it is not possible to know exactly what changes will occur and when they will take place, but looking at companies with a large number of servers and many lines of code usually reveals what the next steps will be.

The current shift has already happened in big companies like Amazon Web Services, Facebook, and Google. It is the implementation of IT automation systems to create and manage servers.

In this chapter we will cover:

  • IT automation
  • What is Ansible?
  • The secure shell
  • Installing Ansible
  • Creating a test environment with QEMU and KVM
  • Version control system
  • Using Ansible with Git

IT automation

IT automation, in its broadest sense, refers to the processes and software that help with the management of the IT infrastructure (servers, networking, and storage). In the current shift, we are witnessing a large-scale adoption of such processes and software.

The history of IT automation

At the beginning of IT history, there were very few servers and a lot of people were needed to make them work properly, usually more than one person for each machine. Over the years, servers became more reliable and easier to manage so it was possible to have multiple servers managed by a single system administrator. In that period, the administrators manually installed the software, upgraded the software manually, and changed the configuration files manually. This was obviously a very labor-intensive and error-prone process, so many administrators started to implement scripts and other means to make their life easier. Those scripts were (usually) pretty complex and they did not scale very well.

In the early years of this century, data centers started to grow considerably to meet companies' needs. Virtualization helped keep prices low, and the fact that many of these services were web services meant that many servers were very similar to one another. At this point, new tools were needed to replace the scripts that had been used before: configuration management tools.

CFEngine was one of the first tools to demonstrate configuration management capabilities, way back in the 1990s; more recently, Puppet, Chef, and Salt have appeared, besides Ansible.

Advantages of IT automation

People often wonder if IT automation really brings enough advantages considering that implementing it has some direct and indirect costs. The main advantages of IT automation are:

  • Ability to provision machines quickly
  • Ability to recreate a machine from scratch in minutes
  • Ability to track any change performed on the infrastructure

For these reasons, it's possible to reduce the cost of managing the IT infrastructure by reducing the repetitive operations often performed by system administrators.

Disadvantages of IT automation

As with any other technology, IT automation does come with some disadvantages. From my point of view these are the biggest disadvantages:

  • Automation of all the small tasks that were once used to train new system administrators
  • If an error is made, it will be propagated everywhere

The consequence of the first is that new ways to train junior system administrators will need to be implemented.

Limiting the possible damage of error propagation

The second one is trickier. There are a lot of ways to limit this kind of damage, but none of those will prevent it completely. The following mitigation options are available:

  • Always have backups: Backups will not prevent you from nuking your machine; they will only make the restore process possible.
  • Always test your infrastructure code (playbooks/roles) in a non-production environment: Companies have developed different pipelines to deploy code and those usually include environments such as dev, test, staging, and production. Use the same pipeline to test your infrastructure code. If a buggy application reaches the production environment it could be a problem. If a buggy playbook reaches the production environment, it could be catastrophic.
  • Always peer-review your infrastructure code: Some companies have already introduced peer-reviews for the application code, but very few have introduced it for the infrastructure code. As I was saying in the previous point, I think infrastructure code is way more critical than application code, so you should always peer-review your infrastructure code, whether you do it for your application code or not.
  • Enable SELinux: SELinux is a security kernel module that is available on all Linux distributions (it is installed by default on Fedora, Red Hat Enterprise Linux, CentOS, Scientific Linux, and Unbreakable Linux). It allows you to limit users and process powers in a very granular way. I suggest using SELinux instead of other similar modules (such as AppArmor) because it is able to handle more situations and permissions. SELinux will prevent a huge amount of damage because, if correctly configured, it will prevent many dangerous commands from being executed.
  • Run the playbooks from a limited account: Even though user and privilege escalation schemes have been in UNIX code for more than 40 years, it seems as if not many companies use them. Using a limited user for all your playbooks, and escalating privileges only for the commands that need higher privileges, will help prevent you from nuking a machine while trying to clean an application's temporary folder.
  • Use horizontal privilege escalation: sudo is a well-known command, but it is often used in its most dangerous form. The sudo command supports the -u parameter, which allows you to specify the user you want to impersonate. If you have to change a file that is owned by another user, please do not escalate to root to do so; just escalate to that user. In Ansible, you can use the become_user parameter to achieve this.
  • When possible, don't run a playbook on all your machines at the same time: Staged deployments can help you detect a problem before it's too late. There are many problems that are not detectable in a dev, test, staging, and qa environment. The majority of them are related to load that is hard to emulate properly in those non-production environments. A new configuration you have just added to your Apache HTTPd or MySQL servers could be perfectly OK from a syntax point of view, but disastrous for your specific application under your production load. A staged deployment will allow you to test your new configuration on your actual load without risking downtime if something was wrong.
  • Avoid guessing commands and modifiers: A lot of system administrators will try to remember the right parameter and try to guess if they don't remember it exactly. I've done it too, a lot of times, but this is very risky. Checking the man page or the online documentation will usually take you less than two minutes and often, by reading the manual, you'll find interesting notes you did not know. Guessing modifiers is dangerous because you could be fooled by a non-standard modifier (that is, -v is not the verbose mode for grep and -h is not the help command for the MySQL CLI).
  • Avoid error-prone commands: Not all commands have been created equal. Some commands are (way) more dangerous than others. If you can assume a cat command is safe, you have to assume that a dd command is dangerous, since dd performs copies and conversions of files and volumes. I've seen people using dd in scripts to transform DOS files to UNIX format (instead of dos2unix), and many other, very dangerous, examples. Please avoid such commands, because they could result in a huge disaster if something goes wrong.
  • Avoid unnecessary modifiers: If you need to delete a simple file, use rm ${file}, not rm -rf ${file}. The latter is often used by people who have learned that, "to be sure, always use rm -rf", because at some point in their past they had to delete a folder. Using plain rm will prevent you from deleting an entire folder if the ${file} variable is set wrongly.
  • Always check what could happen if a variable is not set: If you want to delete the contents of a folder and you use the rm -rf ${folder}/* command, you are looking for trouble. If the ${folder} variable is not set for some reason, the shell will read a rm -rf /* command, which is deadly (considering the fact that the rm -rf / command will not work on the majority of current OSes because it requires a --no-preserve-root option, while rm -rf /* will work as expected). I'm using this specific command as an example because I have seen such situations: the variable was pulled from a database which, due to some maintenance work, was down and an empty string was assigned to that variable. What happened next is probably easy to guess. In case you cannot prevent using variables in dangerous places, at least check them to see if they are not empty before using them. This will not save you from every problem but may catch some of the most common ones.
  • Double-check your redirections: Redirections (along with pipes) are the most powerful elements of Linux shells. They can also be very dangerous: a cat /dev/random > /dev/sda command can destroy a disk, even though the cat command is usually overlooked because it is not normally dangerous. Always double-check all commands that include a redirection.
  • Use specific modules wherever possible: In this list I've used shell commands because many people will try to use Ansible as if it's just a way to distribute them: it's not. Ansible provides a lot of modules, and we'll see them throughout this book. They will help you create more readable, portable, and safe playbooks (see the sketch right after this list).
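
To make some of these points concrete, here is a minimal playbook sketch. It is not taken from this book: the webservers group, the appuser account, and the app_tmp_dir variable are purely illustrative assumptions. It shows a staged roll-out with serial, horizontal privilege escalation with become_user, a guard that checks a variable before using it, and the file module instead of a raw rm command:

- hosts: webservers
  serial: 2                       # staged roll-out: only two machines at a time
  tasks:
    - name: Clean the application temporary folder
      file:
        path: "{{ app_tmp_dir }}/cache"   # remove only the cache directory
        state: absent
      become: true
      become_user: appuser        # escalate to the owning user, not to root
      when: app_tmp_dir is defined and app_tmp_dir != ''   # skip if the variable is missing or empty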

Types of IT automation

There are a lot of ways to classify IT automation systems, but by far the most important is related to how the configurations are propagated. Based on this, we can distinguish between agent-based systems and agent-less systems.

Agent-based systems

Agent-based systems have two different components: a server, and a client called the agent.

There is only one server, and it contains all of the configuration for your whole environment, while there are as many agents as there are machines in the environment.

Note

In some cases, more than one server could be present to ensure high availability, but treat it as if it's a single server, since they will all be configured in the same way.

Periodically, the client will contact the server to see if a new configuration for its machine is present. If a new configuration is present, the client will download and apply it.

Agent-less systems

In agent-less systems, no specific agent is present. Agent-less systems do not always respect the server/client paradigm, since it's possible to have multiple servers and even the same number of servers and clients. Communications are initiated by the server, which will contact the client(s) using standard protocols (usually SSH and PowerShell).

Agent-based versus Agent-less systems

Aside from the differences outlined above, there are other contrasting factors which arise because of those differences.

From a security standpoint, an agent-based system can be less secure. Since all machines have to be able to initiate a connection to the server machine, this machine could be attacked more easily than in an agent-less case where the machine is usually behind a firewall that will not accept any incoming connections.

From a performance point of view, agent-based systems run the risk of the server becoming saturated, and therefore the roll-out could be slower. It also needs to be considered that, in a pure agent-based system, it is not possible to force-push an update immediately to a set of machines; you have to wait until those machines check in. For this reason, multiple agent-based systems have implemented out-of-band ways to provide such a feature. Tools such as Chef and Puppet are agent-based, but can also run without a centralized server to scale to a large number of machines, commonly called Serverless Chef and Masterless Puppet, respectively.

An agent-less system is easier to integrate into an existing infrastructure, since it will be seen by the clients as a normal SSH connection, and therefore no additional configuration is needed.

What is Ansible?

Ansible is an agent-less IT automation tool developed in 2012 by Michael DeHaan, a former Red Hat associate. The Ansible design goals are for it to be: minimal, consistent, secure, highly reliable, and easy to learn. The Ansible company has recently been bought out by Red Hat and now operates as part of Red Hat, Inc.

Ansible primarily runs in push mode using SSH, but you can also run Ansible using ansible-pull, where you can install Ansible on each agent, download the playbooks locally, and run them on individual machines. If there is a large number of machines (large is a relative term; in our view, greater than 500 and requiring parallel updates), and you plan to deploy updates to the machines in parallel, this might be the right way to go about it.
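
A playbook intended for ansible-pull is usually written to target the machine it runs on. The following is a minimal sketch under that assumption (the file name, package, and task are illustrative, not taken from this book): each node periodically runs ansible-pull against a Git repository containing a playbook like this, and Ansible applies it locally.

# local.yml - applied locally by ansible-pull after it clones the repository
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure the NTP daemon is installed
      yum:
        name: ntp
        state: present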

Secure Shell (SSH)

Secure Shell (also known as SSH) is a network service that allows you to log in and access a shell remotely over a fully encrypted connection. Today, the SSH daemon is the standard for UNIX system administration, having replaced the unencrypted telnet. The most frequently used implementation of the SSH protocol is OpenSSH.

In the last few months (at the time of writing), Microsoft has shown an implementation of OpenSSH for Windows.

Since Ansible performs SSH connections and commands in the same way any other SSH client would, no specific configuration is required on the OpenSSH server.

To speed up the default SSH connections, you can enable ControlPersist and pipelining, which make Ansible noticeably faster.
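
As a sketch of how these options can be enabled, the fragment below shows the [ssh_connection] section of an ansible.cfg file; the exact ControlPersist timeout is an illustrative value, not a recommendation from this book:

# ansible.cfg (fragment) - example connection tuning
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=10m
# Pipelining requires requiretty to be disabled in /etc/sudoers on the managed hosts
pipelining = True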

Why Ansible?

We will try and compare Ansible with Puppet and Chef during the course of this book since many people have good experience with those tools. We will also point out specifically how Ansible would solve a problem compared to Chef or Puppet.

Ansible, like Puppet and Chef, is declarative in nature and is expected to move a machine to the desired state specified in the configuration. For example, in each of these tools, in order to start a service at a given point in time and have it start automatically on reboot, you would need to write a declarative block or module; every time the tool runs on the machine, it will aspire to obtain the state defined in your playbook (Ansible), cookbook (Chef), or manifest (Puppet).
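
In Ansible's case, such a declarative block could look like the following minimal sketch (the host group and the httpd service name are illustrative assumptions); every run converges the machine towards this state rather than blindly re-executing a start command:

- hosts: webservers
  become: true
  tasks:
    - name: Ensure Apache HTTPd is running and starts on boot
      service:
        name: httpd
        state: started
        enabled: yes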

The difference in the toolset is minimal at a simple level but as more situations arise and the complexity increases, you will start finding differences between the different toolsets. In Puppet, you need to take care of the order, and the Puppet server will create the sequence of instructions to execute every time you run it on a different box. To exploit the power of Chef, you will need a good Ruby team. Your team needs to be good at the Ruby language to customize both Puppet and Chef, and there will be a bigger learning curve with both of the tools.

With Ansible, the case is different. It uses the simplicity of Chef when it comes to the order of execution, the top-to-bottom approach, and allows you to define the end state in YAML format, which makes the code extremely readable and easy for everyone, from development teams to operations teams, to pick up and make changes. In many cases, even without Ansible, operations teams are given playbook manuals to execute instructions from, whenever they face issues. Ansible mimics that behavior. Do not be surprised if you end up having your project manager change the code in Ansible and check it into Git because of its simplicity!

Installing Ansible

Installing Ansible is rather quick and simple. You can use the source code directly, by cloning it from the GitHub project (https://github.com/ansible/ansible), install it using your system's package manager, or use Python's package management tool (pip). You can use Ansible on any Windows, Mac, or UNIX-like system. Ansible doesn't require any databases and doesn't need any daemons running. This makes it easier to maintain Ansible versions and upgrade without any breaks.

We'd like to call the machine where we will install Ansible our Ansible workstation. Some people also refer to it as the command center.

Installing Ansible using the system's package manager

It is possible to install Ansible using the system's package manager and in my opinion this is the preferred option if your system's package manager ships at least Ansible 2.0. We will look into installing Ansible via Yum, Apt, Homebrew, and pip.

Installing via Yum

If you are running a Fedora system you can install Ansible directly, since from Fedora 22, Ansible 2.0+ is available in the official repositories. You can install it as follows:

$ sudo dnf install ansible

For RHEL and RHEL-based (CentOS, Scientific Linux, Unbreakable Linux) systems, versions 6 and 7 have Ansible 2.0+ available in the EPEL repository, so you should ensure that you have the EPEL repository enabled before installing Ansible as follows:

$ sudo yum install ansible

Note

On CentOS 6 or RHEL 6, you can enable EPEL by running rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm before installing Ansible.

Installing via Apt

Ansible is available for Ubuntu and Debian. To install Ansible on those operating systems, use the following command:

$ sudo apt-get install ansible

Installing via Homebrew

You can install Ansible on Mac OS X using Homebrew, as follows:

$ brew update
$ brew install ansible

Installing via pip

You can install Ansible via pip. If you don't have pip installed on your system, install it first using the following command line (pip can also be used to install Ansible on Windows):

$ sudo easy_install pip

You can now install Ansible using pip, as follows:

$ sudo pip install ansible

Once you're done installing Ansible, run ansible --version to verify that it has been installed:

$ ansible --version

You will get the following output from the preceding command line:

ansible 2.0.2

Installing Ansible from source

In case the previous methods do not fit your use case, you can install Ansible directly from the source. Installing from source does not require any root permissions. Let's clone the repository and activate virtualenv, which is an isolated environment in Python where you can install packages without interfering with the system's Python packages. The commands and the resulting output for cloning the repository are as follows:

$ git clone git://github.com/ansible/ansible.git
Cloning into 'ansible'...
remote: Counting objects: 116403, done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 116403 (delta 3), reused 0 (delta 0), pack-reused 116384
Receiving objects: 100% (116403/116403), 40.80 MiB | 844.00 KiB/s, done.
Resolving deltas: 100% (69450/69450), done.
Checking connectivity... done.
$ cd ansible/
$ source ./hacking/env-setup
Setting up Ansible to run out of checkout...
PATH=/home/vagrant/ansible/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/vagrant/bin
PYTHONPATH=/home/vagrant/ansible/lib:
MANPATH=/home/vagrant/ansible/docs/man:
Remember, you may wish to specify your host file with -i
Done!

Ansible needs a couple of Python packages, which you can install using pip. If you don't have pip installed on your system, install it using the following command. If you don't have easy_install installed, you can install it using Python's setuptools package on Red Hat systems, or by using Brew on the Mac:

$ sudo easy_install pip
<A long output follows>

Once you have installed pip, install the paramiko, PyYAML, jinja2, and httplib2 packages using the following command lines:

$ sudo pip install paramiko PyYAML jinja2 httplib2
Requirement already satisfied (use --upgrade to upgrade): paramiko in /usr/lib/python2.6/site-packages
Requirement already satisfied (use --upgrade to upgrade): PyYAML in /usr/lib64/python2.6/site-packages
Requirement already satisfied (use --upgrade to upgrade): jinja2 in /usr/lib/python2.6/site-packages
Requirement already satisfied (use --upgrade to upgrade): httplib2 in /usr/lib/python2.6/site-packages
Downloading/unpacking markupsafe (from jinja2)
  Downloading MarkupSafe-0.23.tar.gz
  Running setup.py (path:/tmp/pip_build_root/markupsafe/setup.py) egg_info for package markupsafe
Installing collected packages: markupsafe
  Running setup.py install for markupsafe
    building 'markupsafe._speedups' extension
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.6 -c markupsafe/_speedups.c -o build/temp.linux-x86_64-2.6/markupsafe/_speedups.o
    gcc -pthread -shared build/temp.linux-x86_64-2.6/markupsafe/_speedups.o -L/usr/lib64 -lpython2.6 -o build/lib.linux-x86_64-2.6/markupsafe/_speedups.so
Successfully installed markupsafe
Cleaning up...

Note

By default, Ansible will be running against the development branch. You might want to check out the latest stable branch. Check what the latest stable version is using the following command line:

$ git branch -a

Copy the latest version you want to use. Version 2.0.2 was the latest version available at the time of writing. Check out the version you need using the following command lines:

[node ansible]$ git checkout v2.0.2
Note: checking out 'v2.0.2'.
[node ansible]$ ansible --version
ansible 2.0.2 (v2.0.2 268e72318f) last updated 2014/09/28 21:27:25 (GMT +000)

You now have a working setup of Ansible ready. One of the benefits of running Ansible from source is that you can enjoy the new features immediately, without waiting for your package manager to make them available for you.

Creating a test environment with QEMU and KVM

To be able to learn Ansible, we will need to make quite a few playbooks and run them.

Tip

Doing it directly on your computer will be very risky. For this reason, I would suggest using virtual machines.

It's possible to create a test environment with cloud providers in a few seconds, but often it is more useful to have those machines locally. To do so, we will use Kernel-based Virtual Machine (KVM) with Quick Emulator (QEMU).

The first thing will be installing qemu-kvm and virt-install. On Fedora it will be enough to run:

$ sudo dnf install -y @virtualization

On Red Hat/CentOS/Scientific Linux/Unbreakable Linux it will be enough to run:

$ sudo yum install -y qemu-kvm virt-install virt-manager

If you use Ubuntu, you can install it using:

$ sudo apt install virt-manager

On Debian, you'll need to execute:

$ sudo apt install qemu-kvm libvirt-bin

For our examples, I'll be using CentOS 7. This is for multiple reasons; the main ones are:

  • CentOS is free and 100% compatible with Red Hat, Scientific Linux, and Unbreakable Linux
  • Many companies use Red Hat/CentOS/Scientific Linux/Unbreakable Linux for their servers
  • Those distributions are the only ones with SELinux support built in, and as we have seen earlier, SELinux can help you make your environment much more secure

At the time of writing this book, the most recent CentOS cloud image is http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1603.qcow2, so let's download this image with the help of the following command:

$ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1603.qcow2

Since we will probably need to create many machines, it's better if we create a copy of it so the original one will not be modified:

$ cp CentOS-7-x86_64-GenericCloud-1603.qcow2 centos_1.qcow2

Since the qcow2 images will run cloud-init to set up the networking, users, and so on, we will need to provide a couple of files. Let's start by creating a metadata file for networking:

instance-id: centos_1 
local-hostname: centos_1.local 
network-interfaces: | 
  iface eth0 inet static 
  address (An IP in your virtual bridge class) 
  network (The first IP of the virtual bridge class) 
  netmask (Your virtual bridge class netmask) 
  broadcast (Your virtual bridge class broadcast) 
  gateway (Your virtual bridge class gateway) 

To find your virtual bridge data, you have to look for a device with a name like virbrX or something similar; in my case it is virbr0, so I can find all of its information using the following command:

$ ip addr show virbr0

The previous command will give this as an output:

5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:38:1a:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever

So, for me the meta-data file looks like the following:

instance-id: centos_1 
local-hostname: centos_1.local 
network-interfaces: | 
  iface eth0 inet static 
  address 192.168.124.10 
  network 192.168.124.1 
  netmask 255.255.255.0 
  broadcast 192.168.124.255 
  gateway 192.168.124.1 

This file will set up the eth0 interface of the virtual machine at boot time. We also need another file (user-data) to set up the users properly:

users: 
- name: (yourname) 
  shell: /bin/bash 
  sudo: ['ALL=(ALL) NOPASSWD:ALL'] 
  ssh-authorized-keys: 
  - (insert ssh public key here) 

For me, the file looks like the following:

users: 
- name: fale 
  shell: /bin/bash 
  sudo: ['ALL=(ALL) NOPASSWD:ALL'] 
  ssh-authorized-keys: 
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDRoZzfNif+wXFqzsmvHg4jJt8+ZO/dQxm5k7pXYAwdWVbiFrZYGhMQl5FPfzC7rkDaC31fod3Y85QkQVgNKCVYUy5QR5LfxUjSQDv+y2Nfao4be/BKla0ffc7JVSzFFAELGGDLn1lMN0e0D9syqQbKgSRdOdvweq/0Et3KNIF9e7XgEdSuAHls17NDtMkWUfyi5yvEtdtMcp9gO4OlG6Vh0iCXOdx+f0QA2hh1JnvePvzJ4a8CeckN5JwL7Q027nlsHPBYq9K1jvv+diUs48FflPJI4fgMq3Zo7zyCpf8qE7Dlx+u7OvR5kxNdrpnOsDgHeAGNkrzfcmxU7kbU29NX4VFgWd0sdlzu1nOWFEH7Cnd547tx5VFxBzJwEAUCh7QSiU2Ne/hCnjFkZuDZ5pN4pNw+yu+Feoz79gV/utoLHuCodYyAvSQlQ7VSfC+djLD/9wHC2yGksvc9ICnSUv3JyQEEEG4K26z6szF9+a3vU0qIq7YYa8QHgWIHtzSxztYRIWJOzTZlwyuNmhbRNYDaMC5BMzvQ8JREv0obMLmrlvolJPWT4gn1N9sDNNXIC6RDRE5yGsIEf0CliYW1X/8XG40U+g9LG+lrYOGWD4OymZ2P/VDIzZbVT6NG/rdSSGnf4D1AwlOGR7eNTv30AK9o0LVjqGaJWKWYUF9zY6I3+Q== 

To provide those files at boot time, we will need to create an ISO file containing them:

$ genisoimage -output centos_1.iso -volid cidata -joliet -rock user-data meta-data

After the ISO file is ready, we can instruct virt-install to actually create the virtual machine:

virt-install --name CentOS_1 \ 
--ram 2048 \ 
--disk centos_1.qcow2 \ 
--vcpus 2 \ 
--os-variant fedora21 \ 
--connect qemu:///system \ 
--network bridge:br0,model=virtio \ 
--cdrom centos_1.iso \ 
--boot hd 

Since our network configuration is in the ISO file, we will need it at every boot. Sadly, by default the ISO is not attached at every boot, so we will need to perform a few more steps. Firstly, run virsh:

$ virsh

At this point, a virsh shell should appear with an output like the following:

Welcome to virsh, the virtualization interactive terminal.
Type:  'help' for help with commands
       'quit' to quit
virsh #

This means that we switched from bash (or your shell, if you are not using bash) to the virtualization shell. Issue the following command:

virsh # edit CentOS_1

By doing this we will be able to tweak the configuration of the CentOS_1 machine. In the disk section, you'll need to find the cdrom device that should look like this:

    <disk type='block' device='cdrom'> 
      <driver name='qemu' type='raw'/> 
      <target dev='hda' bus='ide'/> 
      <readonly/> 
      <address type='drive' controller='0' bus='0' target='0'
      unit='0'/> 
    </disk> 

You'll need to change it to the following (the changes are the type='file' attribute and the added source element):

    <disk type='file' device='cdrom'> 
      <driver name='qemu' type='raw'/> 
        <source file='(Put here your ISO path)/centos_1.iso'/> 
      <target dev='hda' bus='ide'/> 
      <readonly/> 
      <address type='drive' controller='0' bus='0' target='0'
      unit='0'/> 
    </disk> 

At this point, our virtual machine will always start with the ISO file mounted as a cdrom and therefore cloud-init will be able to correctly initiate the networking.

Version control system

In this chapter, we have already encountered the expression infrastructure code to describe the Ansible code that will create and maintain your infrastructure. We use the expression infrastructure code to distinguish it from the application code, which is the code that composes your applications, websites, and so on. This distinction is needed for clarity, but in the end, both types are a bunch of text files that the software will be able to read and interpret.

For this reason, a version control system will help you a lot. Its main advantages are:

  • Ability to have multiple people working simultaneously on the same project.
  • Ability to perform code reviews in a simple way.
  • Ability to have multiple branches for multiple environments (that is, dev, test, qa, staging, and production).
  • Ability to track a change so we know when it was introduced, and who introduced it. This makes it easier to understand why that piece of code is there, years (or months) later.

Those advantages are provided to you by the majority of version control systems out there.

Version control systems can be divided into three major groups based on the three different models that they can implement:

  • Local data model
  • Client-server model
  • Distributed model

The first category, the local data model, is the oldest (circa 1972) approach and is used for very specific use cases. This model requires all users to share the same filesystem. Famous examples of it are the Revision Control System (RCS) and Source Code Control System (SCCS).

The second category, the client-server model, arrived later (circa 1990) and tried to solve the limitations of the local data model, creating a server that respected the local data model and a set of clients that dealt with the server instead of with the repository itself. This additional layer allowed multiple developers to use local files and synchronize them with a centralized server. Famous examples of this approach are Apache Subversion (SVN), and Concurrent Versions System (CVS).

The third category, the distributed model, arrived at the beginning of the twenty-first century and tried to solve the limitations of the client-server model. In fact, in the client-server model, you could work on the code offline, but you needed to be online to commit the changes. The distributed model allows you to handle everything on your local repository (like the local data model), and to merge different repositories on different machines in an easy way. In this new model, it's possible to perform all the actions of the client-server model, with the added benefits of being able to work completely offline as well as the ability to merge changes between peers without going through the centralized server. Examples of this model are BitKeeper (proprietary software), Git, GNU Bazaar, and Mercurial.

There are some additional advantages that will be provided by only the distributed model, such as:

  • Possibility of making commits, browsing history, and performing any other action even if the server is not available
  • Easier management of multiple branches for different environments

When it comes to infrastructure code, we have to consider that, frequently, the infrastructure that hosts and manages your infrastructure code is itself managed by that same infrastructure code. This is a recursive situation that can create problems. In fact, until you have your code server in place, you cannot deploy your Ansible code, and until you have your Ansible code in place, you cannot deploy your code server. A distributed version control system will prevent this problem.

As for the simplicity of managing multiple branches, even if this is not a hard rule, often distributed version control systems have much better merge handling than the other kinds of version control systems.

Using Ansible with Git

For the reasons that we have just seen and because of its huge popularity, I suggest always using Git for your Ansible repositories.

There are a few suggestions that I always provide to the people I talk to, so Ansible gets the best out of Git:

  • Create environment branches: Creating environment branches such as dev, prod, test, and stg will allow you to easily keep track of the different environments and their respective update statuses. I often suggest keeping the master branch for the development environment, since I find many people are used to pushing new changes directly to master. If you use master for the production environment, people can inadvertently push changes to production when they intended to push them to a development environment.
  • Always keep environment branches stable: One of the big advantages of having environment branches is the possibility of destroying and recreating any environment from scratch at any given moment. This is only possible if your environment branches are in a stable (not broken) state.
  • Use feature branches: Using different branches for specific long-development features (such as a refactor or some other big changes) will allow you to keep your day-to-day operations while your new feature is in the Git repository (so you'll not lose track of who did what and when they did it).
  • Push often: I always suggest that people push commits as often as possible. This will make Git work as both a version control system and a backup system. I have seen laptops broken, lost, or stolen with days or weeks of unpushed work on them far too often. Don't waste your time, push often. Also, by pushing often, you'll detect merge conflicts sooner, and conflicts are always easier to handle when they are detected early, instead of waiting for multiple changes.
  • Always deploy after you have made a change: I have seen times when a developer has created a change in the infrastructure code, tested in the dev and test environments, pushed to the production branch, and then went to have lunch before deploying the changes in production. His lunch did not end well. One of his colleagues deployed the code to production inadvertently (he was trying to deploy a small change he had made in the meantime) and was not prepared to handle the other developer's deployment. The production infrastructure broke and they lost a lot of time figuring out how it was possible that such a small change (the one the person who made the deployment was aware of) created such a big mess.
  • Choose multiple small changes rather than a few huge changes: Making small changes, whenever possible, will make debugging easier. Debugging an infrastructure is not very easy. There is no compiler that will allow you to see obvious problems (even though Ansible performs a syntax check of your code, no other test is performed), and the tools for finding something that is broken are not always as good as you would imagine. The infrastructure as code paradigm is new, and its tools are not yet as good as the ones for application code.
  • Avoid binary files as much as possible: I always suggest keeping your binaries outside your Git repository, whether it is an application code repository or an infrastructure code repository. In the application code case, I think it is important to keep your repository light (Git, as well as the majority of version control systems, does not perform very well with binary blobs), while in the infrastructure code case, it is vital, because you'll be tempted to put a huge number of binary blobs in it, since very often it is easier to put a binary blob in the repository than to find a cleaner (and better) solution.

Summary

In this chapter, we have seen what IT automation is, its advantages and disadvantages, what kinds of tools you can find, and how Ansible fits into this big picture. We have also seen how to install Ansible and how to create a KVM-based virtual machine. Finally, we analyzed version control systems and spoke about the advantages Git brings to Ansible when used properly.

In the next chapter, we will start looking at the infrastructure code that we have mentioned in this chapter, seeing exactly what it is and how to write it. We will also see how to automate simple operations that you probably perform every single day, such as managing users, files, and file content.


Key benefits

  • Simplify the automation of applications and systems using the newest version of Ansible
  • Get acquainted with fundamentals of Ansible such as playbooks, modules, and various testing strategies
  • A comprehensive learning guide that provides you with the skills to automate your organization's infrastructure using Ansible 2

Description

Ansible is an open source automation platform that assists organizations with tasks such as configuration management, application deployment, orchestration, and task automation. With Ansible, even complex tasks can be handled more easily than before. In this book, you will learn about the fundamentals and practical aspects of Ansible 2 by diving deeply into topics such as installation (Linux, BSD, and Windows support), playbooks, modules, various testing strategies, provisioning, deployment, and orchestration. You will get accustomed to the new features of Ansible 2, such as the cleaner architecture, task blocks, playbook parsing, new execution strategy plugins, and modules. You will also learn how to integrate Ansible with cloud platforms such as AWS. The book ends with the enterprise versions of Ansible, Ansible Tower and Ansible Galaxy, where you will learn to use Ansible with different OSes to speed up your work to previously unseen levels. By the end of the book, you'll be able to leverage Ansible's parameters to create expeditious tasks for your organization by implementing Ansible 2 techniques and paradigms.

Who is this book for?

The book is for sys admins who want to automate their organization’s infrastructure using Ansible 2. No prior knowledge of Ansible is required.

What you will learn

  • Set up Ansible 2 and an Ansible 2 project in a future-proof way
  • Perform basic operations with Ansible 2 such as creating, copying, moving, changing, and deleting files, and creating and deleting users
  • Deploy complete cloud environments using Ansible 2 on AWS and DigitalOcean
  • Explore complex operations with Ansible 2 (Ansible vault, e-mails, and Nagios)
  • Develop and test Ansible playbooks
  • Write a custom module and test it
Product Details

Publication date : Nov 21, 2016
Length: 266 pages
Edition : 2nd
Language : English
ISBN-13 : 9781786464231
Vendor : Red Hat


Table of Contents

10 Chapters
1. Getting Started with Ansible
2. Automating Simple Tasks
3. Scaling to Multiple Hosts
4. Handling Complex Deployment
5. Going Cloud
6. Getting Notifications from Ansible
7. Creating a Custom Module
8. Debugging and Error Handling
9. Complex Environments
10. Introducing Ansible for Enterprises
