Conquer by dividing
Depending on the size of your deployment and the way you connect to all your nodes, a masterless solution may be a good fit. In a masterless configuration, you don't run the Puppet agent; rather, you push Puppet code to a node and then run the puppet apply command. There are a few benefits to this method and a few drawbacks, as shown in the following table:
| Benefits | Drawbacks |
| --- | --- |
| No single point of failure | Can't use built-in reporting tools, such as dashboard |
| Simpler configuration | Exported resources require nodes to have write access to the database |
| Finer-grained control on where the code is deployed | Each node has access to all the code |
| Multiple simultaneous runs do not affect each other (reduces contention) | More difficult to know when a node is failing to apply a catalog correctly |
| Connection to Puppet master not required (offline possible) | |
| No certificate management | |
The idea with a masterless configuration is that you distribute Puppet code to each node individually and then kick off a Puppet run to apply that code. One of the benefits of Puppet is that it keeps your system in a good state; so when choosing masterless, it is important to build your solution with this in mind. A cron job configured by your deployment mechanism that can apply Puppet to the node on a routine schedule will suffice.
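At its core, each scheduled run is simply an invocation of puppet apply against locally staged code. As a minimal sketch, assuming the code lives under /var/local/puppet (the path used by the rpm we build later in this chapter):

```
# apply the locally staged code; /var/local/puppet is where our rpm installs it
puppet apply --logdest syslog \
  --modulepath=/var/local/puppet/modules \
  /var/local/puppet/manifests/site.pp
```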
The key parts of a masterless configuration are distributing the code, pushing updates to the code, and ensuring that the code is applied routinely to the nodes. Pushing a bunch of files to a machine is best done with some sort of package management.
For Linux systems, the big players are rpm and dpkg, whereas for Mac OS, installer package files can be used. It is also possible to configure the nodes to download the code themselves from a web location. Many masterless configurations, including some large installations, instead use Git and have the clients pull the files; this has the advantage that clients pull only the changes.
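To illustrate the Git-based variant, the following is a minimal sketch of a pull-and-apply script that a cron job could run on each node; the repository URL and checkout path are hypothetical placeholders, and this script is not part of the rpm-based solution built in the rest of this chapter.

```
#!/bin/bash
# puppet-pull.sh - hypothetical helper for a Git-based masterless node.
# The repository URL and /var/local/puppet path are placeholders.
set -e
REPO_DIR=/var/local/puppet

# clone on the first run, pull changes thereafter
if [ ! -d "$REPO_DIR/.git" ]; then
  git clone --quiet https://git.example.com/puppet-code.git "$REPO_DIR"
fi
cd "$REPO_DIR"
git pull --quiet

# apply the freshly pulled code
puppet apply --logdest syslog \
  --modulepath="$REPO_DIR/modules" \
  "$REPO_DIR/manifests/site.pp"
```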
The solution I will outline is that of using an rpm deployed through yum to install and run Puppet on a node. Once deployed, we can have the nodes pull updated code from a central repository rather than rebuild the rpm for every change.
Creating an rpm
To start our rpm, we will make an rpm spec file. We can make this anywhere since we don't have a master in this example. Start by installing rpm-build, which will allow us to build the rpm:
```
# yum install rpm-build
Installing rpm-build-4.8.0-37.el6.x86_64
```
Later, it is important to have a user to manage the repository, so create a user called builder at this point. We'll do this on the Puppet master machine we built earlier. Create an rpmbuild directory with the appropriate subdirectories and then create our example code in this location:
```
# sudo -iu builder
$ mkdir -p rpmbuild/{SPECS,SOURCES}
$ cd rpmbuild/SOURCES
$ mkdir -p modules/example/manifests
$ cat <<EOF> modules/example/manifests/init.pp
class example {
  notify {"This is an example.": }
  file {'/tmp/example':
    mode    => '0644',
    owner   => '0',
    group   => '0',
    content => 'This is also an example.',
  }
}
EOF
$ mkdir manifests
$ cat <<EOF> manifests/site.pp
include example
EOF
$ tar cjf example.com-puppet-1.0.tar.bz2 modules manifests
```
Next, create a spec file for our rpm in rpmbuild/SPECS, as shown here:
```
Name:       example.com-puppet
Version:    1.0
Release:    1%{?dist}
Summary:    Puppet Apply for example.com
Group:      System/Utilities
License:    GNU
Source0:    example.com-puppet-%{version}.tar.bz2
BuildRoot:  %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
Requires:   puppet
BuildArch:  noarch

%description
This package installs example.com's puppet configuration
and applies that configuration on the machine.

%prep
%setup -q -c

%install
mkdir -p $RPM_BUILD_ROOT/%{_localstatedir}/local/puppet
cp -a . $RPM_BUILD_ROOT/%{_localstatedir}/local/puppet

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
%{_localstatedir}/local/puppet

%post
# run puppet apply
/bin/env puppet apply --logdest syslog --modulepath=%{_localstatedir}/local/puppet/modules %{_localstatedir}/local/puppet/manifests/site.pp

%changelog
* Fri Dec 6 2013 Thomas Uphill <[email protected]> - 1.0-1
- initial build
```
Then use the rpmbuild command to build the rpm based on this spec, as shown here:

```
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba example.com-puppet.spec
…
Wrote: /home/builder/rpmbuild/SRPMS/example.com-puppet-1.0-1.el6.src.rpm
Wrote: /home/builder/rpmbuild/RPMS/noarch/example.com-puppet-1.0-1.el6.noarch.rpm
```
Now, deploy a node and copy the rpm onto that node. Verify that installing the package pulls in Puppet as a dependency and then performs a puppet apply run:
```
# yum install example.com-puppet-1.0-1.el6.noarch.rpm
Loaded plugins: downloadonly
…
Installed:
  example.com-puppet.noarch 0:1.0-1.el6
Dependency Installed:
  augeas-libs.x86_64 0:1.0.0-5.el6
  ...
  puppet-3.3.2-1.el6.noarch
…
Complete!
```
Verify that the file we specified in our package has been created using the following command:
```
# cat /tmp/example
This is also an example.
```
Now, if we are going to rely on this system of pushing Puppet to nodes, we have to make sure that we can update the rpm on the clients and we have to ensure that the nodes still run Puppet regularly, so as to avoid configuration drift (the whole point of Puppet).
Using Puppet resource to configure cron
There are many ways to accomplish these two tasks. We can put the cron definition into the post section of our rpm, as follows:
```
%post
# install cron job
/bin/env puppet resource cron 'example.com-puppet' command='/bin/env puppet apply --logdest syslog --modulepath=%{_localstatedir}/local/puppet/modules %{_localstatedir}/local/puppet/manifests/site.pp' minute='*/30' ensure='present'
```
Alternatively, we can make the cron job part of our site.pp, as shown here:
```
cron { 'example.com-puppet':
  ensure  => 'present',
  command => '/bin/env puppet apply --logdest syslog --modulepath=/var/local/puppet/modules /var/local/puppet/manifests/site.pp',
  minute  => ['*/30'],
  target  => 'root',
  user    => 'root',
}
```
To ensure that the nodes have the latest version of the code, we can define our package in site.pp:
```
package { 'example.com-puppet':
  ensure => 'latest',
}
```
In order for that to work as expected, we need to have a yum repository for the package and have the nodes looking at that repository for packages.
Creating the yum repository
Creating a yum repository is a very straightforward task. Install the createrepo rpm and then run createrepo on each directory you wish to make into a repository:
```
# mkdir /var/www/html/puppet
# yum install createrepo
…
Installed:
  createrepo.noarch 0:0.9.9-18.el6
# chown builder /var/www/html/puppet
# sudo -iu builder
$ mkdir /var/www/html/puppet/{noarch,SRPMS}
$ cp /home/builder/rpmbuild/RPMS/noarch/example.com-puppet-1.0-1.el6.noarch.rpm /var/www/html/puppet/noarch
$ cp rpmbuild/SRPMS/example.com-puppet-1.0-1.el6.src.rpm /var/www/html/puppet/SRPMS
$ cd /var/www/html/puppet
$ createrepo noarch
$ createrepo SRPMS
```
Our repository is ready, but we need to serve it with our web server to make it available to our nodes. This rpm contains all our Puppet code, so we need to ensure that only the clients we choose have access to the files (we'll add an access restriction right after the basic configuration). We'll create a simple listener on port 80 for our Puppet repository:
```
Listen 80
<VirtualHost *:80>
  DocumentRoot /var/www/html/puppet
</VirtualHost>
```
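Since only the clients we choose should be able to fetch the code, one option is to add a host-based restriction inside the VirtualHost block above. The following is a sketch using Apache 2.2 syntax (the version shipped with EL6); the 10.0.0.0/24 network is a placeholder for your client range:

```
<Directory /var/www/html/puppet>
  # Apache 2.2 host-based access control; replace 10.0.0.0/24 with your client network
  Order deny,allow
  Deny from all
  Allow from 10.0.0.0/24
</Directory>
```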
Now, the nodes need the repository defined on them so that they can download updates as they become available. The idea is that we push the rpm to each node once and have it installed; after that, the yum repository pointing at our updates is defined and the nodes keep themselves up to date:
```
yumrepo { 'example.com-puppet':
  baseurl  => 'http://puppet.example.com/noarch',
  descr    => 'example.com Puppet Code Repository',
  enabled  => '1',
  gpgcheck => '0',
}
```
So, to ensure that our nodes operate properly, we have to make sure of the following things (a combined site.pp sketch follows this list):
- Install the code.
- Define the repository.
- Define a cron job to run puppet apply routinely.
- Define the package with ensure => 'latest' so that it is kept up to date.
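Putting these pieces together, a default site.pp for our masterless nodes might look like the following sketch; it simply combines the yumrepo, cron, and package resources shown above and includes the example class from our module:

```
# site.pp - a minimal sketch combining the resources shown above
node default {
  yumrepo { 'example.com-puppet':
    baseurl  => 'http://puppet.example.com/noarch',
    descr    => 'example.com Puppet Code Repository',
    enabled  => '1',
    gpgcheck => '0',
  }

  cron { 'example.com-puppet':
    ensure  => 'present',
    command => '/bin/env puppet apply --logdest syslog --modulepath=/var/local/puppet/modules /var/local/puppet/manifests/site.pp',
    minute  => ['*/30'],
    user    => 'root',
  }

  package { 'example.com-puppet':
    ensure  => 'latest',
    require => Yumrepo['example.com-puppet'],
  }

  include example
}
```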
A default node in our masterless configuration requires that the cron task and the repository be defined. If you wish to segregate your nodes into different production zones (such as development, production, and sandbox), I would use a repository management system, such as Pulp. Pulp allows you to define repositories based on other repositories and keeps all your repositories consistent.
Note
You should also set up a gpg key on the builder account that can sign the packages it creates. You will then distribute the gpg public key to all your nodes and enable gpgcheck on the repository definition.
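As a rough outline of that signing workflow (the "Example Builder" key name is a placeholder), the steps on the builder account might look like this; on the nodes, the yumrepo resource would then set gpgcheck => '1' and point gpgkey at the exported public key:

```
# generate a signing key for the builder account (interactive)
$ gpg --gen-key

# tell rpm which key to sign with; "Example Builder" is a placeholder name
$ cat <<EOF >> ~/.rpmmacros
%_signature gpg
%_gpg_name Example Builder
EOF

# sign the package and publish the public key alongside the repository
$ rpm --addsign rpmbuild/RPMS/noarch/example.com-puppet-1.0-1.el6.noarch.rpm
$ gpg --export -a 'Example Builder' > /var/www/html/puppet/RPM-GPG-KEY-example.com
```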