Understanding your Ansible installation

By this stage in this chapter, regardless of your operating system choice for your Ansible control machine, you should have a working installation of Ansible with which to begin exploring the world of automation. In this section, we will carry out a practical exploration of the fundamentals of Ansible to help you to understand how to work with it. Once you have mastered these basic skills, you will then have the knowledge required to get the most out of the remainder of this book. Let's get started with an overview of how Ansible connects to non-Windows hosts.

Understanding how Ansible connects to hosts

With the exception of Windows hosts (as discussed at the end of the previous section), Ansible uses the SSH protocol to communicate with hosts. The reasons for this choice in the Ansible design are many, not least that just about every Linux/FreeBSD/macOS host has it built in, as do many network devices such as switches and routers. This SSH service is normally integrated with the operating system authentication stack, enabling you to take advantage of things such as Kerberos to improve authentication security. In addition, OpenSSH features such as ControlPersist are used to increase the performance of automation tasks, and SSH jump hosts can be used for network isolation and security.

ControlPersist is enabled by default on most modern Linux distributions as part of the OpenSSH server installation. However, on some older operating systems such as Red Hat Enterprise Linux 6 (and CentOS 6), it is not supported, and so you will not be able to use it. Ansible automation is still perfectly possible, but longer playbooks might run slower.
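If you want to see or tune the SSH options that Ansible passes to OpenSSH, they live in the [ssh_connection] section of ansible.cfg. The following is a minimal sketch; the values shown mirror Ansible's usual defaults and are illustrative rather than a recommendation:

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s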

Ansible makes use of the same authentication methods that you will already be familiar with, and SSH keys are normally the easiest way to proceed as they remove the need for users to input the authentication password every time a playbook is run. However, this is by no means mandatory, and Ansible supports password authentication through the use of the --ask-pass switch. If you are connecting to an unprivileged account on the hosts, and need to perform the Ansible equivalent of running commands under sudo, you can also add --ask-become-pass when you run your playbooks to allow this to be specified at runtime as well.
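For example, to run a playbook over password-based SSH authentication and be prompted for the sudo password at the same time, you might invoke something like the following (site.yml is simply a hypothetical playbook name, and hosts is an inventory file like the ones used later in this chapter):

$ ansible-playbook -i hosts site.yml --ask-pass --ask-become-pass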

The goal of automation is to be able to run tasks securely but with the minimum of user intervention. As a result, it is highly recommended that you use SSH keys for authentication, and if you have several keys to manage, then be sure to make use of ssh-agent.

Every Ansible task, whether it is run singly or as part of a complex playbook, is run against an inventory. An inventory is, quite simply, a list of the hosts that you wish to run the automation commands against. Ansible supports a wide range of inventory formats, including the use of dynamic inventories, which can populate themselves automatically from an orchestration provider (for example, you can generate an Ansible inventory dynamically from your Amazon EC2 instances, meaning you don't have to keep up with all of the changes in your cloud infrastructure).

Dynamic inventory plugins have been written for most major cloud providers (for example, Amazon EC2, Google Cloud Platform, and Microsoft Azure), as well as on-premises systems such as OpenShift and OpenStack. There are even plugins for Docker. The beauty of open source software is that, for most of the major use cases you can dream of, someone has already contributed the code and so you don't need to figure it out or write it for yourself.
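To give you an idea of how little configuration a dynamic inventory needs, the following is a minimal sketch of an inventory file for the Amazon EC2 plugin. It assumes that the boto3 Python library is installed and that AWS credentials are available in your environment; the region shown is, of course, just an example:

# Save this as a file whose name ends in aws_ec2.yml, for example inventory_aws_ec2.yml
plugin: aws_ec2
regions:
  - us-east-1

You could then test it with a command such as ansible-inventory -i inventory_aws_ec2.yml --graph to see the hosts and groups it discovers.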

Ansible's agentless architecture and the fact that it doesn't rely on SSL mean that you don't need to worry about DNS not being set up, or even time skew problems as a result of NTP not working; these can, in fact, be tasks performed by an Ansible playbook! Ansible really was designed to get your infrastructure running from a virtually bare operating system image.

For now, let's focus on the INI formatted inventory. An example is shown here with four servers, each split into two groups. Ansible commands and playbooks can be run against an entire inventory (that is, all four servers), one or more groups (for example, webservers), or even down to a single server:

[webservers]
web1.example.com
web2.example.com

[apservers]
ap1.example.com
ap2.example.com

Let's use this inventory file along with the Ansible ping module, which is used to test whether Ansible can successfully perform automation tasks on the inventory host in question. The following example assumes you have installed the inventory in the default location, which is normally /etc/ansible/hosts. When you run the following ansible command, you should see output similar to this:

$ ansible webservers -m ping
web1.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web2.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
$

Notice that the ping module was only run on the two hosts in the webservers group and not the entire inventory; this was by virtue of us specifying the group name in the command-line parameters.
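If you instead wanted to target the entire inventory, or just a single server, only the host pattern needs to change; for example (again assuming the inventory is in the default location):

$ ansible all -m ping
$ ansible web1.example.com -m ping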

The ping module is one of many thousands of modules for Ansible, all of which perform a given set of tasks (from copying files between hosts, to text substitution, to complex network device configuration). Again, as Ansible is open source software, there is a veritable army of coders out there who are writing and contributing modules, which means if you can dream of a task, there's probably already an Ansible module for it. Even in the instance that no module exists, Ansible supports sending raw shell commands (or PowerShell commands for Windows hosts) and so even in this instance, you can complete your desired tasks without having to move away from Ansible.
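For example, the shell module (or win_shell for Windows hosts) can be used to run an arbitrary command as an ad hoc task; the following illustrative command simply checks the uptime of every host in the webservers group:

$ ansible webservers -m shell -a "uptime"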

As long as the Ansible control host can communicate with the hosts in your inventory, you can automate your tasks. However, it is worth giving some consideration to where you place your control host. For example, if you are working exclusively with a set of Amazon EC2 machines, it arguably would make more sense for your Ansible control machine to be an EC2 instance—in this way, you are not sending all of your automation commands over the internet. It also means that you don't need to expose the SSH port of your EC2 hosts to the internet, hence keeping them more secure.

We have so far covered a brief explanation of how Ansible communicates with its target hosts, including what inventories are and the importance of SSH communication to all except Windows hosts. In the next section, we will build on this by looking in greater detail at how to verify your Ansible installation.

Verifying the Ansible installation

In this section, you will learn how you can verify your Ansible installation with simple ad hoc commands.

As discussed previously, Ansible can authenticate with your target hosts in several ways. In this section, we will assume you want to make use of SSH keys, and that you have already generated your public and private key pair and copied your public key to all of the target hosts on which you will be automating tasks.

The ssh-copy-id utility is incredibly useful for distributing your public SSH key to your target hosts before you proceed any further. An example command might be ssh-copy-id -i ~/.ssh/id_rsa user@frt01.example.com (substituting your own account name and target hostname).
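If you have not yet generated a key pair, the ssh-keygen utility will create one for you; a minimal example (accepting the default file location when prompted) might look like this:

$ ssh-keygen -t rsa -b 4096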

To ensure Ansible can authenticate with your private key, you could make use of ssh-agent; the following commands show a simple example of how to start ssh-agent and add your private key to it. Naturally, you should replace the path with that of your own private key:

$ ssh-agent bash 
$ ssh-add ~/.ssh/id_rsa

As we discussed in the previous section, we must also define an inventory for Ansible to run against. Another simple example is shown here:

[frontends]
frt01.example.com
frt02.example.com

The ansible command that we used in the previous section has two important switches that you will almost always use: -m <MODULE_NAME> to specify the module to run on the hosts from your inventory and, optionally, -a <MODULE_ARGS> to pass arguments to that module. Commands run using the ansible binary are known as ad hoc commands.

The following are three simple examples that demonstrate ad hoc commands; they are also valuable for verifying both the installation of Ansible on your control machine and the configuration of your target hosts, and they will return an error if there is an issue with any part of the configuration:

  • Ping hosts: You can perform an Ansible "ping" on your inventory hosts using the following command:
$ ansible frontends -i hosts -m ping
  • Display gathered facts: You can display gathered facts about your inventory hosts using the following command:
$ ansible frontends -i hosts -m setup | less
  • Filter gathered facts: You can filter gathered facts using the following command:
$ ansible frontends -i hosts -m setup -a "filter=ansible_distribution*"

For every ad hoc command you run, you will get a response in JSON format—the following example output results from running the ping module successfully:

$ ansible frontends -m ping 
frontend01.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
frontend02.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Ansible can also gather and return "facts" about your target hosts. Facts are all manner of useful information about your hosts, from CPU and memory configuration to network parameters, to disk geometry. These facts are intended to enable you to write intelligent playbooks that perform conditional actions; for example, you might only want to install a given software package on hosts with more than 4 GB of RAM, or perhaps perform a specific configuration only on macOS hosts. The following is an example of the filtered facts from a macOS-based host:

$ ansible frontend01.example.com -m setup -a "filter=ansible_distribution*"
frontend01.example.com | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "macOS",
        "ansible_distribution_major_version": "10",
        "ansible_distribution_release": "18.5.0",
        "ansible_distribution_version": "10.14.4"
    },
    "changed": false
}

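To show how these facts feed into conditional logic, the following playbook task is a minimal sketch of the 4 GB RAM example mentioned above; ansible_memtotal_mb is a standard fact reported in megabytes, and the package name is purely illustrative:

- name: Install httpd only on hosts with more than 4 GB of RAM
  yum:
    name: httpd
    state: present
  when: ansible_memtotal_mb > 4096
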
Ad hoc commands are incredibly powerful, both for verifying your Ansible installation and for learning how to work with modules, as you don't need to write a whole playbook; you can just run a module with an ad hoc command and learn how it responds. Here are some more ad hoc examples for you to consider:

  • Copy a file from the Ansible control host to all hosts in the frontends group with the following command:
$ ansible frontends -m copy -a "src=/etc/yum.conf dest=/tmp/yum.conf"
  • Create a new directory with specific ownership and permissions on all hosts in the frontends inventory group:
$ ansible frontends -m file -a "dest=/path/user1/new mode=777 owner=user1 group=user1 state=directory" 
  • Delete a specific directory from all hosts in the frontends group with the following command:
$ ansible frontends -m file -a "dest=/path/user1/new state=absent"
  • Install the httpd package with yum if it is not already present; if it is present, do not update it. Again, this applies to all hosts in the frontends inventory group (see the note on privilege escalation after this list):
$ ansible frontends -m yum -a "name=httpd state=present"
  • The following command is similar to the previous one, except that changing state=present to state=latest causes Ansible to install the (latest version of the) package if it is not present, and update it to the latest version if it is present:
$ ansible frontends -m yum -a "name=demo-tomcat-1 state=latest" 
  • Display all facts about all the hosts in your inventory (warning: this will produce a lot of JSON!):
$ ansible all -m setup 
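Note that package management and other system-level changes normally require root privileges on the managed hosts. You can ask Ansible to escalate privileges with the --become switch (adding --ask-become-pass if a sudo password is required), as in this illustrative variant of the earlier yum example:

$ ansible frontends -m yum -a "name=httpd state=present" --become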

Now that you have learned more about verifying your Ansible installation and about how to run ad hoc commands, let's proceed to look in a bit more detail at the requirements of the nodes that are to be managed by Ansible.

Managed node requirements

So far, we have focused almost exclusively on the requirements for the Ansible control host and have assumed that (except for the distribution of the SSH keys) the target hosts will just work. This, of course, is not always the case; for example, while a modern installation of Linux from an ISO will usually just work, cloud operating system images are often stripped down to keep them small, and so might lack important packages such as Python, without which Ansible cannot operate.

If your target hosts are lacking Python, it is usually easy to install it through your operating system's package management system. Ansible requires you to install either Python version 2.7 or 3.5 (and above) on both the Ansible control machine (as we covered earlier in this chapter) and on every managed node. Again, the exception here is Windows, which relies on PowerShell instead.

If you are working with operating system images that lack Python, the following commands provide a quick guide to getting Python installed:

  • To install Python using yum (on older releases of Fedora and CentOS/RHEL 7 and below), use the following:
$ sudo yum -y install python
  • On RHEL and CentOS version 8 and newer versions of Fedora, you would use the dnf package manager instead:
$ sudo dnf install python
You might also elect to install a specific version to suit your needs, as in this example:
$ sudo dnf install python37
  • On Debian and Ubuntu systems, you would use the apt package manager to install Python, again specifying a version if required (the example given here is to install Python 3.6 and would work on Ubuntu 18.04):
$ sudo apt-get update
$ sudo apt-get install python3.6
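Whichever package manager you use, if the interpreter ends up somewhere other than where Ansible expects it (or you want to force the use of Python 3), you can point Ansible at the right binary on a per-host or per-group basis using the ansible_python_interpreter inventory variable; a minimal sketch for the frontends group used earlier might look like this:

[frontends:vars]
ansible_python_interpreter=/usr/bin/python3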

The Ansible ping module we discussed earlier in this chapter not only checks connectivity and authentication with your managed hosts, but also uses the managed hosts' Python environment to perform some basic host checks. As a result, it is a fantastic end-to-end test to give you confidence that your managed hosts are configured correctly: where connectivity and authentication are set up perfectly but Python is missing, it will return a failed result.

Of course, a perfect question at this stage would be: how can Ansible help if you roll out 100 cloud servers using a stripped-down base image without Python? Does that mean you have to manually go through all 100 nodes and install Python by hand before you can start automating?

Thankfully, Ansible has you covered even in this case, thanks to the raw module. This module is used to send raw shell commands to the managed nodes, and it works with both SSH-managed hosts and Windows PowerShell-managed hosts. As a result, you can use Ansible to install Python on a whole set of systems from which it is missing, or even run an entire shell script to bootstrap a managed node. Most importantly, the raw module is one of the very few modules that does not require Python to be installed on the managed node, so it is perfect for our use case, where we must roll out Python to enable further automation.

The following are some examples of tasks in an Ansible playbook that you might use to bootstrap a managed node and prepare it for Ansible management:

- name: Bootstrap a host without python2 installed
  raw: dnf install -y python2 python2-dnf libselinux-python

- name: Run a command that uses non-posix shell-isms (in this example /bin/sh doesn't handle redirection and wildcards together but bash does)
  raw: cat < /tmp/*txt
  args:
    executable: /bin/bash

- name: safely use templated variables. Always use quote filter to avoid injection issues.
  raw: "{{package_mgr|quote}} {{pkg_flags|quote}} install {{python|quote}}"

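The raw module works just as well from an ad hoc command, which is often the quickest way to bootstrap a batch of freshly provisioned hosts; a hedged example (assuming dnf-based hosts and password-based SSH authentication) might look like this:

$ ansible frontends -m raw -a "dnf install -y python3" --become --ask-pass
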
We have now covered the basics of setting up Ansible both on the control host and on the managed nodes, and we have given you a brief primer on configuring your first connections. Before we wrap up this chapter, we will look in more detail at how you might run the latest development version of Ansible, direct from GitHub.
