Configuration Management Series Part 3 of 3: Puppet

At the start of 2013, the MSP that I was working for at the time landed its largest client in the history of the company. The catch: this meant that I needed to set up approximately 40 servers in a relatively short period of time. The prospect of manually installing and configuring 40 physical nodes did not appeal to me at all. Network installs with Kickstart were fine, sure, but I needed more this time.

I had been entertaining the thought of implementing some sort of configuration management for our systems for a while. I had always hit a wall as to exactly what to use it for. Implementing it for the simple management of just our SSH keys or monitoring system configs seemed like a bit of a waste. I had lightly touched Chef before, but as mentioned in the last article, did not have much success in getting it set up fully, and did not have much time to pursue it further at that point.

I now had a great opportunity on my hands. I decided to take another look at Puppet, which I had looked at only briefly, and found that it seemed to have a robust community version, a simple quickstart workflow, and quite functional Windows support. Call it a matter of circumstance, but it left a much better impression on me than Chef did.

I set up a Puppet master, set up our Kickstart server to perform the installs and bootstrap Puppet in the post-install, and never looked back.

For this client, I set up Puppet to manage several things, such as SSH keys, monitoring system agents (Nagios and Munin), and even network settings, with excellent results. I went on to implement it for the rest of our network, and administration became orders of magnitude easier.

Since then I have used Puppet to manage web servers, set up databases, manage routing tables, push out new monitoring system agents and configure them, and even implement custom backup servers using rsnapshot. One of my crowning achievements is using Puppet to fully implement the setup of private VPN software appliances, including OpenVPN instances complete with custom Windows installers.

Ladies and gentlemen, I present to you: Puppet.


I am used to this process by now, but I'd still say that standing up a Puppet master is an extremely easy process. There are plenty of options available – if it's not possible to use the official repository (although it is recommended), most modern distros carry relatively up-to-date versions of the Puppet master and agent (3.4.3 in Ubuntu 14.04 and 3.6.2 in CentOS 7). Installation of both the WEBrick and Passenger versions is very straightforward, and few (if any) configuration settings need to be changed to get started.

There is also the emerging Puppet Server, a new Puppet master implementation that is intended to replace the old ones. It is a Clojure-based server (meaning it runs in a JVM), but don't let that necessarily dissuade you. If the official apt repositories are being used, installing puppetserver will install everything else needed to run it, including the Java runtime.

Funnily enough, in contrast to the beefy 1GB requirements of Chef, I was able to get Puppet Server up and running with a little over 300MB of RAM used. Even though the installation instructions recommend a minimum of 512MB, I was able to run the server with JAVA_ARGS="-Xms256m -Xmx256m" (essentially a 256MB heap) and perform the testing I needed to do for this article, without any crashes.

After installing the server, things were as easy as:

  • apt-get installing the puppet agent,
  • Configuring the agent to contact the master,
  • Starting the agent (i.e. service puppet start),
  • And finally, using puppet cert list and puppet cert sign to locate and sign the agent’s certificate request.
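For reference, the agent-side configuration from the second step is minimal. A sketch of what it might look like – the master's hostname here is from my lab and is purely illustrative:

```ini
# /etc/puppet/puppet.conf on the agent node
[agent]
# point the agent at the Puppet master (defaults to the hostname "puppet")
server = srv1.lab.vcts.local
```

Once the agent is started with that in place, its certificate request shows up on the master, ready to be signed.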

The full installation instructions can be found in the official Puppet documentation.

Windows Support

As mentioned in the last article, one of the reasons that I actually did choose Puppet over Chef was that its Windows support seemed to be further along.

One of the things that I particularly like about Puppet's Windows support is the way they made the package resource work for Windows: it can be used to manage Windows Installer packages. This made deployment of Nagios agents and other software utilities to Windows infrastructure extremely easy.
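As a sketch of what that looks like – the package name and installer path here are hypothetical, not from a real deployment:

```puppet
# Install an MSI on a Windows node; on Windows the resource title must
# match the package's DisplayName, and source points at the installer
package { 'NSClient++ (x64)':
  ensure          => installed,
  source          => 'C:\installers\nscp-0.4.1-x64.msi',
  install_options => [ '/quiet' ],
}
```

On a Windows node, Puppet selects its Windows package provider automatically, so the MSI is installed silently on the next agent run.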

Other than that, several of the basic Puppet resources (such as file, user, group, and service) all work on Windows.

Head on over to the Windows page for more information.


In the most basic model, Puppet agent nodes are managed through the main site manifest, located at /etc/puppet/manifests/site.pp. This could look like:

node "srv2.lab.vcts.local" {
  include sample_module
}
Where sample_module is a module, a unit of actions to run against a server (similar to cookbooks in Chef).
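A minimal module is just a class in the right place on the master. A sketch, with made-up content:

```puppet
# /etc/puppet/modules/sample_module/manifests/init.pp
class sample_module {
  # anything declared here applies to every node that includes the class
  package { 'htop':
    ensure => installed,
  }
}
```

Puppet's autoloader maps the class name sample_module to that file path, which is why the site manifest only needs the include line.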

Agents connect to the Puppet master, generally over port 8140 and over an SSL connection. Nodes are authenticated via certificates which Puppet keeps in a central CA:

root@srv1:~# puppet cert list --all
+ "srv1.lab.vcts.local" (SHA256) 00:11:22... (alt names: "DNS:puppet", "DNS:srv1.lab.vcts.local")
+ "srv2.lab.vcts.local" (SHA256) 00:11:22...

This makes for extremely easy node management. All that needs to be done to authorize an agent on the master is to puppet cert sign the pending request. puppet cert revoke adds an agent's certificate to the CRL, revoking its access to the master.

As far as data storage goes, I have already written an article on Hiera, Puppet’s data storage mechanism. Check it out here. Again, Hiera is a versatile data storage backend that respects module scope as well, making it an extremely straightforward way to store node parameter data. Better yet, encryption is supported, as are additional storage backends other than the basic JSON and YAML setups.
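As a quick sketch of the shape of it – the hierarchy and the lookup key below are illustrative, not from a real deployment:

```yaml
# /etc/puppet/hiera.yaml (Hiera 1 format, as shipped with Puppet 3)
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::fqdn}"   # per-node overrides win
  - common              # shared defaults
:yaml:
  :datadir: /etc/puppet/hieradata

# /etc/puppet/hieradata/common.yaml
# picked up automatically by a class parameter of the same name
ntp::servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
```

With data binding in Puppet 3, a class parameter like ntp::servers is looked up in Hiera automatically, keeping node-specific values out of the manifests entirely.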

Execution Model

Right now, I would say that Puppet only natively supports a pull model. This is because its push mechanism, puppet kick, seems to be in limbo, as illustrated by the relevant Redmine and JIRA issues. The alternative is apparently to use MCollective, which I have never touched.

By default, Puppet runs every 30 minutes on nodes, and this can be tuned by playing with settings in /etc/puppet/puppet.conf.
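The relevant setting is runinterval. A sketch of tightening the run cadence to 10 minutes:

```ini
# /etc/puppet/puppet.conf on the agent node
[agent]
# default is 30 minutes; accepts plain seconds or units like 10m, 2h
runinterval = 10m
```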

Further to that, one-off runs of the Puppet agent are easy enough: just run puppet agent -t from the command line, which performs a single run with some other options (i.e. slightly higher verbosity) enabled. This can easily be set up to run from something like Ansible (and Ansible's SSH keys can even be managed through Puppet!)

Puppet also supports direct, masterless runs of manifests through the puppet apply command. Incidentally, this is used by some pretty well-known tools, notably Packstack.
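A masterless run is as simple as pointing puppet apply at a local manifest. A trivial sketch:

```puppet
# /tmp/motd.pp -- apply locally with: puppet apply /tmp/motd.pp
# No master or signed certificate involved; the catalog is compiled
# and applied on the node itself.
file { '/etc/motd':
  ensure  => file,
  content => "Managed by puppet apply\n",
}
```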


This was neat to come back to during my evaluation yesterday. The following manifest took me maybe five minutes to write, and illustrates well how the declarative nature of Puppet works. Here, srv2.lab.vcts.local is set up with a MySQL server, a database, and backups, via Puppet's upstream-supported MySQL module.

node "srv2.lab.vcts.local" {
  class { '::mysql::server':
    root_password => 'serverpass',
  }

  mysql::db { 'vcts':
    user     => 'vcts',
    password => 'dbpass',
  }

  file { '/data':
    ensure => directory,
  }

  class { '::mysql::server::backup':
    backupuser        => 'sqlbackup',
    backuppassword    => 'backuppass',
    backupdir         => '/data/mysqlbackup',
    backupcompress    => true,
    backuprotate      => 7,
    file_per_database => true,
    time              => [ '22', '00' ],
  }
}

The DSL is Ruby-based, kind of like Chef's, but unlike Chef's, it's not really Ruby anymore. The DSL is declarative, which means there is no top-down execution order – dependencies need to be strung together with require directives that point to the services or packages that need to be in place first. This is both a strength and a weakness, as it is possible to get caught in a complicated chain of dependencies and even end up with some circular ones. But when kept simple, it's a nice way to ensure that things get installed when you want them to be.
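The classic example of stringing dependencies together is the package/config/service pattern. A sketch (the package and service names assume a Debian-family system):

```puppet
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],   # install the package before managing its config
  notify  => Service['ntp'],   # restart the daemon whenever the config changes
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```

However Puppet chooses to order the catalog, these three resources will always converge in the right sequence.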

Templates are handled by ERB, like Chef. The templates documentation can help out here.
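A sketch of how that fits together – an ERB template iterating over a variable set in the manifest (the module name and variable are illustrative):

```erb
<%# templates/ntp.conf.erb in a hypothetical 'ntp' module -%>
<%# @servers is an array variable from the calling class's scope -%>
<% @servers.each do |s| -%>
server <%= s %>
<% end -%>
```

On the manifest side, `content => template('ntp/ntp.conf.erb')` in a file resource renders the template with the class's variables in scope.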

The coolest part about developing for Puppet, though, has to be its high-quality module community at Puppet Forge. I have used several modules from here over the years, and all of them have been of excellent quality – two that come to mind are razorsedge/vmwaretools and pdxcat/nrpe. Not just this, but Puppet has an official process for module approval and support with Puppet Enterprise. And to ice the cake, Puppet Labs themselves have 91 modules on the Forge, several of excellent quality and documentation, as can be seen in the MySQL module above. It's this kind of commitment to professionalism that really makes me feel good about Puppet's extensibility.


A good middle ground that has stood the test of time

Puppet was probably the first of a new wave of configuration management tools that followed the likes of CFEngine. I really wish I had known about it when it first came out – it definitely would have helped me solve some configuration management challenges much earlier than 2013. And it's just as usable today, if not more so. With the backing of companies like Google, Cisco, and VMware, Puppet is not going away any time soon. If you are looking for a configuration management system that balances simplicity and utility well, then Puppet is for you. I also can't close this off without mentioning my love for Puppet Forge, which I personally think is the best module community of the three products I have reviewed.


In the circles that I'm a part of, and out of the three tools that I have reviewed over this last month, Puppet doesn't get nearly the love that I think it should. Some people love the Ansible push model. Other developers love Chef because that's what they are used to and what they have chosen to embrace. Puppet – through no fault of its own, to be honest – seems to be the black sheep here.

Maybe I just need to get out more. A little over two years ago, RedMonk published an article comparing usage of the major configuration management tools. Even going by that two-year-old data, there would be no reason to say that Puppet should be put out to pasture yet.

The End

I hope you enjoyed this last month of articles about configuration management.

I thought I’d put forward some thoughts I have had while working with and reviewing these tools for the last month. I came into this looking to do a “shootout” of sorts – basically, standing up each tool and comparing their strengths and weaknesses based off a very simple set of tests and a checklist of features.

I soon abandoned that approach. Why? First off – time. I found that I had very limited time, often just a Sunday afternoon of an otherwise busy week, to set up, review, and write about each tool (and even at that, the articles sometimes came out a few days late).

But more importantly, I didn't want to hold on to some stubborn opinion about one tool being better than another. So I decided to throw all of that out, approach it with an open mind, and actually immerse myself in the experience of using each tool.

I think that I have come out of it better, with a genuinely objective and educated opinion about the environment that suits each tool best. Ultimately, this has made me a better engineer.

If you have read this far, and have read all three major articles, first off, thank you. Second off, if you are having trouble choosing between one of the three reviewed in this series, or another altogether, you might want to ask yourself the following few questions:

  • What does my team have knowledge on already?
  • Will I be able to ramp up on the solution in a timely manner with the least impact?
  • Will the solution be equally (or better) supported by future infrastructure platforms?

Most of all, keep looking forward and keep an open mind. Don’t let comfort in using one tool keep you closed off to using others.

Thanks again!