AWS World Detour – Packer and Terraform

NOTE: Click here to get to the sample project quickly.

So I know it’s been a while – quite a while – since I’ve posted anything. I’ve been far from inactive though, and I will touch on that in a later post.

I will be getting back on track here with my AWS World Tour series soon, but first, I want to take a bit of a detour and discuss some things that have been highly relevant to what I have been working on as of the last little while – infrastructure provisioning with Terraform.

My Journey the Last 6 Months

I have had a very eventful last 6 months since I started to use AWS “in anger”. I’m not the biggest fan of that term, but I can’t really think of another way to put it right now. There have indeed been some frustrating moments, but I can’t really say I was angry.

In any case, my journey to try and find a good AWS provisioning platform took me to a few places, namely:

  • Trying to use Chef – Which prompted me to write this pull request. However, I quickly realized that having to write AWS resources for everything that we needed to do at PayByPhone might not be the best use of my time.
  • I also was going to write my own tooling around managing CloudFormation, but stopped short when, one weekend, I asked myself if I was reinventing the wheel, and I kind of was.

Enter Terraform.


Terraform is a configuration management tool created by HashiCorp with an emphasis on virtual infrastructure, versus something like Chef or Puppet, which emphasize OS-level configuration management. In fact, the two can work in tandem – Terraform has a Chef provisioner.

What makes Terraform special is its ability to support many kinds of infrastructure platforms – and in fact, this is its stated goal. Things written in Terraform should ultimately be portable from one platform to another – say, if one wanted to move from AWS to Azure, or to some kind of OpenStack-hosted solution somewhere. Admittedly, we had looked at Terraform earlier last year, and its AWS support had not been fully fleshed out yet, but that has changed drastically; by the end of last year its feature set was on par with, and possibly even better than, CloudFormation itself.

So what if one just wanted to stick with AWS? What’s the point in using it over, say, simply CloudFormation? The answer to that is really dependent on the use case, and like many things in life, boils down to the little things. Some examples:

  • Support for ZIP file uploads to AWS Lambda (versus being restricted to S3 on CloudFormation).
  • Support for the AWS NAT Gateway. As of this writing, about a month since the NAT Gateway’s release, it still does not appear to be supported by CloudFormation, according to my very light research (judging by the traffic on this thread).
  • A non-JSON DSL that has support for several kinds of programmatic interpolation operations, including some basic forms of loop operations allowing one to create a certain amount of resources with a single chunk of code. The DSL also has support for modules, allowing re-use and distribution of common templates.
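As a quick illustration of that last point, a hypothetical resource block (not from the sample project – the variable names here are placeholders) that creates three instances from a single chunk of code:

```hcl
# Hypothetical illustration of count and interpolation.
variable "instance_count" {
  default = 3
}

resource "aws_instance" "web" {
  count         = "${var.instance_count}"
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"

  # count.index interpolates the per-copy index (0, 1, 2).
  tags {
    Name = "web-${count.index}"
  }
}
```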

This is all made possible by the fact that, by using Terraform, one is not beholden to the CloudFormation way of doing things. Rather, Terraform uses the AWS API (through the Go SDK), which in some instances is much more versatile than CloudFormation is, or could be, for that matter (as a hosted configuration management platform, it has to restrict some of its data sources to reliable services – this is more than likely why S3 is the only way to get a ZIP file read in for Lambda).

Terraform is a fast-moving target. I have submitted several pull requests myself, for bug fixes and new functionality alike, and don’t see an end in sight for that, at least not for a few months. For example, one of my more recent feature PRs allows one to get details on an AMI for use in a template later. In order to do this in CloudFormation, one would have to undertake the tedious process of writing a custom resource, which ultimately leads to more out-of-band resources, and unnecessary technical debt, in my opinion.


Packer

I’ve been using Packer for quite some time now to build base images, namely for our VMware infrastructure at PayByPhone.

Packer is basically an image builder. Think of it this way – one might build a Ruby gem with gem to deploy on other systems, or a static binary with go build. Packer is like this, but for system images. In fact, one might not have a custom application at all, and may simply need to build an image with some typical software installed and configured a certain way – in this instance, Packer and its provisioning code could live in the same repository and serve as the complete “application”.

Incidentally, the process for building AMIs is actually much simpler than building VMware images. There is a lot less code, and it’s basically AMI-in, AMI-out.
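To make “AMI-in, AMI-out” concrete, a minimal builder section might look something like this (the region, source AMI, and names here are placeholder assumptions, not the sample project’s values):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-west-2",
    "source_ami": "ami-123456abcd",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "vancluever_hello {{timestamp}}"
  }]
}
```

Packer launches an instance from source_ami, runs any provisioners, and registers the result as a new AMI named ami_name.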

Packer Provisioners

Just a note on provisioners – Packer supports several of these. I use the Chef provisioner; there are also Ansible and shell provisioners, among others. I would recommend using one of the configuration management options – it allows for better re-use of code, and also allows for testing before moving on to the Packer build. Namely, with Chef, I am able to use Berkshelf to manage dependencies, and Test Kitchen to sort out most, if not all, errors that would happen with a cookbook before moving on to creating the Packer JSON file.

This also makes it more portable for use with something like Vagrant – generally all that’s needed is to copy the provisioner configuration from Vagrant to Packer, or vice versa, with possibly some minor modifications.

Putting it Together

With the above tools and a competent build runner, I can actually write an entire pipeline that takes an application and deploys it to AWS with relative ease. Better than that, this all can live together! This enables someone to go to a single repository to:

  • Get their hands on the code, make changes, and run unit tests on it
  • Deploy using the provisioner code to an EC2 instance, or locally on their machine with Vagrant, to run integration tests and experiment with the code
  • Fully test the infrastructure by building the AMI and deploying the infrastructure in a uniform fashion to multiple environments (ie: sandbox, staging, or production).

This allows for a pipeline that should ensure a near perfect deploy once the application is ready for production.

The Pattern in Action

As an example, I have taken the code from my previous article (see here, here, here, and the code here), and applied the same idea, but with some changes. Again, I am deploying a VPC with an ELB and 3 instances, but this time, I have skipped some of the details irrelevant to a VPC of this kind, especially since I will never be logging into it – namely the private network and NAT part of things.

The code can be found here. Let’s go over it together:

The application

The application is a simple Ruby “hello world” style application running with Sinatra. The application is bundled up as a Ruby gem – this is actually a pretty easy way to create this kind of application, as it produces a single artifact that can be deployed wherever it’s needed, especially on a “bare metal”, single-purpose system. This application could even be deployed using the system’s base Ruby, if it is current enough (I don’t do that though, as I will show in later sections).

The layout of the application part of the sample repo is such:

bin/
  vancluever_hello       <-- Executable binary part of gem package
lib/
  vancluever_hello.rb    <-- Application "entry point" and Sinatra code
  vancluever_hello/
    version.rb           <-- Gem version file
pkg/                     <-- Output directory (gem gets built to here)
Gemfile                  <-- Bundler dependency file (chained to gemspec)
Rakefile                 <-- Rake build runner configuration file
vancluever_hello.gemspec <-- RubyGems package spec file

The bulk of the test code is in the lib/vancluever_hello.rb file, a simple file whose contents are shown below:

require 'sinatra/base'
require 'socket'

module VanclueverHello
  # Run the test server.
  class Server < Sinatra::Base
    def self.run_server
      # Listen on all interfaces so the service is reachable externally.
      set :bind, '0.0.0.0'

      get '/' do
        "Hello from #{Socket.gethostname}!!!"
      end

      run!
    end
  end
end

This is the Sinatra self-hosted version of what we were doing with the index.html files, Apache, and user data in the previous version of this stack. Rather than use a static file this time, we are using Sinatra and Ruby to demonstrate how this small app can be bundled onto an image and deployed from there, without any post-creation package installation and content writing.

The Rakefile contains the bundler/gem_tasks helper that allows us to easily build this gem from the details in the vancluever_hello.gemspec file. By running rake build, the .gem is dropped into the pkg/ directory, ready for the next step.

The packer_payload Chef cookbook

The next piece of the puzzle is the packer_payload Chef cookbook, self-contained in the cookbooks directory. This cookbook is not like other Chef cookbooks one might see – the metadata is stripped down (no version info, description, or even version locking). This is because this cookbook is not intended to be used anywhere else other than the Packer build that I will be discussing shortly.

Why Chef then? Why not shell scripts if this is all the cookbook is going to be used for? A couple of quick reasons that come to mind:

  • I’m not the biggest fan of shell scripts – I will use them when necessary, but I’m more a fan these days of writing things in a way that they can be easily re-used, and in a way that makes it easy to pull in things that make my job easier. Using Chef allows me to do that. For example, rather than having to write code to manage a non-distro Ruby, I use poise-ruby and poise-ruby-build to manage the Ruby version and gem package. Taking it further, rather than having to write scripts to template out upstart or systemd, I use poise-service, which supports both.
  • Even if this cookbook is not suitable for the Supermarket, or to sit on a Chef server, its re-usability is not completely diminished. Test Kitchen can still be used with it, and in fact there is a Kitchen configuration file in the directory. Kitchen was used to test this cookbook before putting it into Packer, ensuring that most, if not all, of the code worked before starting the process to build the AMI. This cookbook can also be used in Vagrant with minimal effort, should the need arise.

Getting the data to Chef

One thing that deserves mention is how I actually get the data to the Chef cookbook. The artifact does need to be delivered in some fashion to the cookbook itself. This is okay, mainly because I have Packer and Test Kitchen to help out with that.

In attributes/default.rb I control the location of the artifact:

default['packer_payload']['source_path'] = '/tmp/gem_pkg'

This is where the artifact is copied to with Packer (more on that soon). However, with Kitchen, things are a little different, because of how the data directory stuff works:

provisioner:
  name: chef_zero
  data_path: ../../pkg/

data_path controls the directory that contains any non-cookbook data that I want to send to the server. After this is done, I need to change the source_path node attribute:

suites:
  - name: default
    run_list:
      - recipe[packer_payload::default]
    attributes:
      packer_payload:
        source_path: /tmp/kitchen/data
        app_version: <%= ENV['KITCHEN_APP_VERSION'] %>

Also note the ERB in app_version – this is an environment variable passed in from Rake, which gets the data from the VanclueverHello::VERSION constant in the gem code. More on this below.
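A minimal sketch of that hand-off (the version constant is hard-coded here for illustration; in the real repo it lives in the gem’s version file):

```ruby
require 'erb'

# Stand-in for the gem's version file.
module VanclueverHello
  VERSION = '0.1.0'
end

# Rake exports the constant into the environment...
ENV['KITCHEN_APP_VERSION'] = VanclueverHello::VERSION

# ...and Test Kitchen's ERB pre-processing picks it up from there.
template = "app_version: <%= ENV['KITCHEN_APP_VERSION'] %>"
puts ERB.new(template).result
```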


Testing with Test Kitchen

As mentioned, the cookbook’s Kitchen configuration is fully functional, and the cookbook can be tested using the following command:

 AWS_KITCHEN_USER=ubuntu \
 AWS_KITCHEN_AMI_ID=ami-123456abcd \
 KITCHEN_APP_VERSION=0.1.0 \
 ../../bin/kitchen verify

Note the environment variables. Depending on the target being tested, kitchen-ec2 may have trouble finding an AMI. The one I seem to have the most luck with right now is Ubuntu Trusty (14.04) – but I wanted to try this against some of the more recent versions like Wily, which necessitated supplying the login user and the AMI ID. I do this through the environment so that it can be parameterized. I also pass the app version, which helps show how I can control the version of the gem that gets installed. Incidentally, it is a popular pattern to give EC2 or other cloud Kitchen config files their own name and call them by passing the KITCHEN_YAML environment variable.

All of this is also in the Rakefile, under the kitchen task. By doing this, I don’t have to worry about running this from the command line all the time. Further to that it takes the work out of determining the AMI to use (more on that later).

The Packer template

The Packer template lives in the packer/ami.json file, sitting at a nice 43 lines, fully beautified. It has variables that are used to tell Packer what AMI to get, what region to set it up in, and also some things to add to the description, such as the distribution and application version.

The Packer template is probably the simplest part of this setup. All that is needed to kick off the AMI creation is packer build packer/ami.json. Of course, the parameters need to be passed via environment variables, but the Rakefile handles that.

Provisioners – artifact delivery and Chef

One thing that I will note about the Packer template is how it does its configuration work on the AMI – this is done via what are known in Packer as provisioners:

"provisioners": [
  {
    "type": "shell",
    "inline": ["mkdir /tmp/gem_pkg"]
  },
  {
    "type": "file",
    "source": "pkg/vancluever_hello-{{user `app_version`}}.gem",
    "destination": "/tmp/gem_pkg/vancluever_hello-{{user `app_version`}}.gem"
  },
  {
    "type": "chef-solo",
    "cookbook_paths": ["berks-cookbooks"],
    "run_list": ["packer_payload"],
    "json": {
      "packer_payload": {
        "app_version": "{{user `app_version`}}"
      }
    }
  }
]
Note the first two – the shell and file provisioners, which deliver the artifact. Creating the directory first is necessary here, and Packer won’t fail if the destination directory does not exist – something that created about an extra 2 hours of troubleshooting work for me as I was making this example. The last one, the chef-solo provisioner, runs the packer_payload cookbook to configure things.

Note the cookbooks directory: it’s not cookbooks, but berks-cookbooks. This is because I’m staging the full, dependency-evaluated cookbook collection in the berks-cookbooks directory via Berkshelf. This is handled by the Rakefile ahead of the execution of Packer. I haven’t dived into the Rakefile yet, and I won’t just yet – first, I want to introduce the star of the show.


Tagging the AMI

One last thing before I move on: tagging the AMI is important! This allows a search on the AMI afterwards. It also keeps me from having to parse the Packer logs for the AMI ID – even though Packer makes that easier with a machine-readable output flag, I still find tagging to be less work (no capturing output or having to save a log file). It also gets one into the habit of tagging resources, which should be done anyway.

Note that the tag doesn’t have to be the application ID or the “artifact”. In addition to this, one could also tag the build ID – which can provide even further granularity when searching for AMIs.
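In the Packer template, this is just a tags map on the builder – something along these lines (the tag keys here are illustrative, not necessarily the sample project’s exact names):

```json
"builders": [{
  "type": "amazon-ebs",
  "tags": {
    "application": "vancluever_hello",
    "app_version": "{{user `app_version`}}"
  }
}]
```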


Terraform

Finally, I get to the headliner.

The Terraform file sits in the terraform/ directory. Here it is only a single file, but it can be broken up within this directory into as many files as makes sense. For example, a lot of my projects now have a main.tf, variables.tf, outputs.tf, and more, depending on how things need to look. This allows for code chunks that are easy to read. As long as they are all in the same directory, Terraform will treat them all as the same plan.

Looking at the file, one may notice several analogs to the CloudFormation template from my last article. The key differences – aside from removing infrastructure that was not necessary for this article – are that it’s not JSON, and that there is only one aws_instance resource, with a count of 3. count is a special Terraform DSL attribute that tells Terraform to make more than one of the specified resource. This is referenced in the aws_elb block too, with a splat operator:

resource "aws_elb" "elb" {
  # ...

  instances = ["${aws_instance.web_servers.*.id}"]

  # ...
}

Basically allowing me to reference all the instance IDs at once.

And of course, the Terraform file is parameterized. There are 3 variables – region, ami_id, and vpc_subnet. region and ami_id are both required, as they don’t have defaults, but vpc_subnet does not need to be supplied if the default network is okay.

Manually, if the variables were supplied as TF_VAR_ environment variables (ie: TF_VAR_region or TF_VAR_ami_id), one could just run terraform apply and watch this thing go. In reality though, I want this going through Rake, and that’s what I do.
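Concretely, a manual run might look like this (the AMI ID is a placeholder):

```shell
TF_VAR_region=us-west-2 \
TF_VAR_ami_id=ami-123456abcd \
terraform apply terraform/
```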

The Rakefile

Maybe not the star of the show, but definitely the thing keeping the lights on, is the Rakefile.

In addition to having a full DSL of its own to run builds with, Rake can be extended with standard Ruby. This can come in the form of simple methods within the Rakefile, or full-on helper libraries that can provide a suite of common tasks for a project. As mentioned, I used the bundler/gem_tasks helper to provide the basic RubyGems build tasks (in the Rakefile I have even disabled a few of them, to ensure the gem doesn’t get accidentally pushed).

Incidentally, the Rake tasks make up only a small portion of this Rakefile. There are 4 user-defined tasks, berks-cookbooks, ami, infrastructure, and kitchen.

ami has two prerequisites: build and berks-cookbooks. The former is a gem task, which builds the gem and puts it into the pkg/ directory. The latter runs Berkshelf on the cookbooks/packer_payload/Berksfile file, vendoring the cookbooks into the berks-cookbooks directory so that Packer has all cookbooks available to it during its chef-solo run. After these are done, packer can run – after getting some variables, of course. The same goes for the infrastructure task, which has no prerequisites, but sends some variables to the terraform command. Finally, the kitchen task allows me to easily run tests on the packer_payload cookbook.

The Rakefile helper methods

This is where things really come together. There are a couple of kinds of helpers that I have here. The first kind are very simple and handle a few variables from the environment. Honestly, if Rake had something better to handle this, I would use that instead, but from what I’ve seen, it doesn’t (a bit more RTFM may be necessary). Also, I would rather use the environment instead of the parameter system that Rake uses by default – it’s more in line with what the rest of our toolchain uses.

The second kind is where Rake really shines though. These are the functions ubuntu_ami_id, app_ami_id, and rfc3339_to_unix:

  • ubuntu_ami_id uses the ubuntu_ami gem to find the latest Ubuntu AMI for the distribution (default trusty) and root store type (ebs-ssd). This is fed to Packer.
  • After Packer is done and has tagged the AMI with our application tag, app_ami_id can go and get the latest built image for our system. Sorting is helped by the rfc3339_to_unix, which helps convert the timestamp. This AMI ID is then fed to Terraform.
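The timestamp helper is the simplest of the three – a sketch of what it boils down to (the real implementation may differ, and the AMI IDs and dates here are made up for illustration):

```ruby
require 'time'

# Convert an AMI's RFC 3339 creation date to a Unix timestamp, so that
# images can be compared chronologically and the newest one picked.
def rfc3339_to_unix(timestamp)
  Time.parse(timestamp).to_i
end

# Hypothetical tagged images mapped to their creation dates.
amis = {
  'ami-aaaa1111' => '2016-01-10T08:30:00.000Z',
  'ami-bbbb2222' => '2016-01-15T20:00:00.000Z'
}

# Pick the most recently created image.
latest = amis.max_by { |_id, ts| rfc3339_to_unix(ts) }.first
puts latest # => ami-bbbb2222
```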

The tasks to deploy

Finally, after all of this write up, what is needed to make this run? Three very easy commands:

  • After cloning the repo, bundle install --binstubs --path vendor/bundle ensures all the dependencies are gathered up within the working directory tree.
  • Then, bundle exec rake ami will build the AMI.
  • Finally, bundle exec rake infrastructure will deploy the infrastructure with Terraform.

And presto! A 3-node ELB cluster in AWS. The Terraform output will have the DNS name of the ELB – after a few minutes when everything is available, connect over HTTP to port 4567 to see what has been created – refresh to see the page cycle through the IP addresses.

Destroying the infrastructure

After completing the exercise, I want to shut down these resources to ensure that they are not going to rack up a nice big bill for me. This is easily done with the setup I have:

TF_CMD=destroy bundle exec rake infrastructure

This will destroy all created resources. Afterwards, I delete the AMI and the snapshot that Packer made using the AWS CLI or console. And it’s like it never existed!
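The AMI cleanup itself comes down to two AWS CLI calls (the IDs here are placeholders):

```shell
aws ec2 deregister-image --image-id ami-123456abcd
aws ec2 delete-snapshot --snapshot-id snap-123456abcd
```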

Final Word – the terraform.tfstate File

One important note about this file: terraform.tfstate contains the working state of the infrastructure and must be treated with respect. This is the part that’s hidden from you when CloudFormation is used, as AWS handles it.

The terraform.tfstate file can be managed in one of two ways:

  • By checking it into source control (if using the example repo as a starting point, note that it is, by default, in the .gitignore file).
  • By using remote state to store the config. There are several options, such as S3, Consul, or Atlas, and a few others not mentioned here. Remote state also has the advantage of easy retrieval for use with other projects (other Terraform files, for example, can access its outputs).
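For S3, for instance, the remote config command (in the Terraform versions current as of this writing) looks roughly like this – the bucket and key are placeholders:

```shell
terraform remote config \
  -backend=s3 \
  -backend-config="bucket=my-tfstate-bucket" \
  -backend-config="key=vancluever_hello/terraform.tfstate" \
  -backend-config="region=us-west-2"
```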

Final Final Word – Modules

One thing that didn’t get mentioned here at all has been Terraform’s ability to use modules.

I suggest checking this out if you plan to really dive into Terraform. So much of your repeatable infrastructure code can be put in a module. In fact, the template in this example could serve as a module if it lived in its own repository – then, all the referencing template would need is the 3 variables defined at the top, which would ultimately turn the file in the referencing project into approximately a half-dozen lines of code.
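Consuming it as a module would look something like this (the source repository here is hypothetical):

```hcl
module "hello_stack" {
  source     = "github.com/vancluever/terraform-hello-stack"
  region     = "us-west-2"
  ami_id     = "ami-123456abcd"
  vpc_subnet = "10.0.0.0/24"
}
```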

Well, that’s all for this article! As usual, I hope you found the material informative! I hope to get back on track with my initial intention of evaluating AWS services soon, but don’t hold it against me if things are slow. I have been far from inactive though – I will mention some of the things I have been up to in my next post. Until then, take care!

AWS Basics Using CloudFormation (Part 3) – ELB and EC2

This is the third part of a 3-part article covering the basics of AWS through using CloudFormation. For the first part of this article, click here, and for the second, click here.

This is the third and final part in my AWS basics article. So far, I’ve covered CloudFormation and Amazon VPC. This time, I will cover Elastic Load Balancing (ELB), and Amazon EC2, the actual operational pieces that end up getting deployed and serve the web content. The final product is a basic virtual datacenter that load balances across two web servers, deployable with a single command through CloudFormation.

And once more, if you would like to follow along at home, remember to check out the template on the GitHub project.

Elastic Load Balancing (ELB)

Elastic Load Balancing is AWS’s layer 7 load balancing component of EC2, facilitating the basic application redundancy features that most modern applications need today.

ELB has a feature set that is pretty much what could be expected from a traditional layer 7 load balancer, such as SSL offloading, health checks, sticky sessions, and so on. However, the real fun in using ELB is in what it does to make the job of infrastructure management easier.

As a completely integrated platform service, ELBs are automatically redundant, and can span multiple availability zones without much extra configuration. Metrics and logging are also built in, and can be sent to S3 or CloudWatch.

Other than that, there is not much to really hype up about ELB. Not to say that is a bad thing! So on with the CloudFormation entries.

ELBs in CloudFormation

After the gauntlet I ran with explaining the VPC entries in the sample CloudFormation stack, the ELB entry will be a breeze. Below is the ELB section.

"VCTSLabELB1": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
     "HealthCheck": {
       "HealthyThreshold": "2",
       "Interval": "5",
       "Target": "HTTP:80/",
       "Timeout": "3",
       "UnhealthyThreshold": "2"
     },
     "Listeners": [{
         "InstancePort": "80",
         "InstanceProtocol": "HTTP",
         "LoadBalancerPort": "80",
         "Protocol": "HTTP"
     }],
     "Scheme": "internet-facing",
     "Subnets": [ { "Ref": "VCTSLabSubnet1" } ],
     "SecurityGroups": [ { "Ref": "VCTSElbSecurityGroup" } ],
     "Instances": [
       { "Ref": "VCTSLabSrv1" },
       { "Ref": "VCTSLabSrv2" }
     ],
     "Tags": [ { "Key": "resclass", "Value": "vcts-lab-elb" } ]
  }
}

The resource is of the AWS::ElasticLoadBalancing::LoadBalancer type. It is an internet-facing load balancer (as defined by Scheme), as opposed to an internal load balancer that would only be visible within the VPC. It’s also associated with the VCTSLabSubnet1 subnet so that it has public access; this does not affect which instances it can connect to. The instances are defined in the Instances property, which contains references to the two named instances in the EC2 section of the template.

Health checking

The HealthCheck property marks an individual service as healthy after 2 successful checks (defined by HealthyThreshold), which brings it back into the cluster; the health check will likewise mark a service as unhealthy after 2 failures (defined by UnhealthyThreshold). Note that although this is okay for the purpose I am using it for, intermittent service failures may cause undesirable flapping when thresholds are set this low. In that event, set HealthyThreshold to a value that ensures there have been enough successful checks to reasonably determine that the service is available.

Timeout controls how long to wait before marking an individual service as down if a response has not been received. Interval is the time to wait between checks. Both of these values are in seconds. In the example above, the health check waits 3 seconds before marking a service as failed, and the health check itself runs every 5 seconds.

The health check Target takes the syntax of SERVICE:PORT/urlpath. SERVICE can be one of TCP, SSL, HTTP, and HTTPS. /urlpath is only available for the last two (the first two being simple connect open checks and lacking any protocol awareness other than SSL). Also, the response to /urlpath needs to be a 200 OK response – anything else (even a 300 Redirect class code) is considered a failure. In the example above, a check against / over HTTP will be done on any EC2 instances to be sure that the service is up.
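A few illustrative Target values:

```
"Target": "TCP:443"          <-- simple connection-open check on port 443
"Target": "HTTP:80/"         <-- GET / on port 80, expecting a 200 OK
"Target": "HTTPS:443/health" <-- GET /health over SSL, expecting a 200 OK
```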


Listeners

The listener describes how clients connect to the load balancer and how those connections are routed to instances.

Here, connections come in to port 80 (defined by LoadBalancerPort) and are handled as HTTP connections (defined by Protocol). There are implications from this; namely the X-Forwarded-For HTTP header will be passed, and the connection is statefully passed across as a proxy. Use of HTTP on the front end also means that HTTP or HTTPS needs to be used on the back end. This is indeed the case; the listener is configured to send traffic to instances via HTTP on port 80 (defined by InstanceProtocol and InstancePort).

There are topics that are not covered in this article; namely having to do with SSL offloading (ie: using HTTPS as the front end or instance protocols), persistence, and back-end authentication. It would be wise to check out the Listeners for Your Load Balancer section of the ELB manual to get an idea of all available configurations for listeners.


Amazon EC2

There was a time, albeit a long time ago, that AWS was simply EC2 and not much else. (Although it should be noted that SQS was the first AWS service; Jeff Barr’s article on his first 12 years at Amazon is a good read on the launch dates of SQS, EC2, and S3.)

Even in the face of today’s AWS massive platform service portfolio, I personally think it’s safe to say that EC2 still has a very major place at Amazon. It serves as the building block for services like ECS (AWS’s Docker service); the EC2 instances that make an ECS pool are, as of this writing, still visible to the end user and require some degree of management. Custom workloads may not fit the bill for use on zero-administration platforms like Lambda. Managed service providers that run their customers off AWS will have a need for the service for quite a long time to come.

EC2 is Amazon’s most basic building block, and the product that gave “Cloud Computing” its name (its acronym itself standing for Elastic Compute Cloud). It is a Xen-based virtualization platform, with features that in today’s world we now take for granted, such as host redundancy and per-use billing, to just name a couple. It set the standard for how a cloud platform handles instances – virtual machines are first rolled into base units called images (which under AWS is called an AMI, standing for Amazon Machine Image), from which instances are created with their own storage laid on top of it.

This small overview does not do the service justice, and there is no way that I would be able to cover all of EC2’s features in this document without losing sight of the goal of setting up a basic VPC with CloudFormation. I would recommend the EC2 documentation for coverage on these topics, in addition to watching this space, where I will more than likely cover these topics as need be.

EC2 in CloudFormation

And now, finally, I come to the last section in this part of the series – the EC2 section of the sample CloudFormation template.

Below is the definition of one of the two EC2 instances that are set up in the template, not counting the NAT instance.

"VCTSLabSrv1": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro",
    "KeyName": { "Ref": "KeyPair" },
    "SubnetId": { "Ref": "VCTSLabSubnet2" },
    "SecurityGroupIds": [ { "Ref": "VCTSPrivateSecurityGroup" } ],
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-srv" } ],
    "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
      "#!/bin/bash -xe\n",
      "/usr/bin/yum -y update\n",
      "/usr/bin/yum -y install httpd\n",
      "/sbin/chkconfig httpd on\n",
      "echo '<html><head></head><body>vcts-lab-srv1</body></html>' > /var/www/html/index.html\n",
      "echo \"/opt/aws/bin/cfn-signal -e $? ",
      "  --stack ", { "Ref": "AWS::StackName" }, " ",
      "  --resource VCTSLabSrv1 ",
      "  --region ", { "Ref": "AWS::Region" }, " ",
      "  && sed -i 's#^/opt/aws/bin/cfn-signal .*\\$##g' ",
      "  /etc/rc.local\" >> /etc/rc.local\n"
    ] ] } }
  },
  "CreationPolicy" : { "ResourceSignal" : { "Count" : 1, "Timeout" : "PT10M" } },
  "DependsOn": "VCTSLabNatGw"
}

EC2 instances are defined with the AWS::EC2::Instance resource type. The instance type is t2.micro, the smallest of the newer generation T2 instance types. Also, remember from the Mappings part of the CloudFormation section that the actual AMI to use is selected from the RegionMap map, based on the region that the instance is launched in.

The KeyName is chosen from the supplied key name when the CloudFormation template was launched (it was either supplied on the command line or through the CloudFormation web interface).

The subnet (specified by SubnetId) is VCTSLabSubnet2, the private subnet. The instance’s SecurityGroupIds is in this case the VCTSPrivateSecurityGroup private subnet security group (which is simply an allow-all, as this group will have no internet exposure and will only be interfacing with the NAT instance and the ELB).

Using userdata for post-creation work

The section after all the other aforementioned properties is where some of the real magic happens. The UserData property is used to create a post-installation shell script that updates the system (/usr/bin/yum -y update), installs Apache (/usr/bin/yum -y install httpd), enables the service (/sbin/chkconfig httpd on), creates an index.html page with the server ID, and then finally injects a self-destructing cfn-signal command that gets run when the server reboots. This is a very simple way to get a fully deployed server in our example.
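Decoded, the user data amounts to a shell script roughly like this (the stack name and region shown here are placeholders for the values the Ref intrinsics supply at deploy time):

```shell
#!/bin/bash -xe
/usr/bin/yum -y update
/usr/bin/yum -y install httpd
/sbin/chkconfig httpd on
echo '<html><head></head><body>vcts-lab-srv1</body></html>' > /var/www/html/index.html
# Append the self-destructing cfn-signal to rc.local: on the next boot it
# signals the stack, then sed removes it so it only ever runs once.
echo "/opt/aws/bin/cfn-signal -e $? \
  --stack mystack --resource VCTSLabSrv1 --region us-west-2 \
  && sed -i 's#^/opt/aws/bin/cfn-signal .*$##g' /etc/rc.local" >> /etc/rc.local
```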

Note that there is a more complex configuration management system built right into CloudFormation, for cases where using something more full-featured like Chef, Puppet, Ansible, or Salt is not possible. Check out AWS::CloudFormation::Init. Incidentally, this requires the cfn-init command to be launched, which is not necessarily installed on all Linux AMIs (it is usually available through packages, however, and is already on the system with Amazon Linux). Incidentally as well, cfn-init is generally launched through user data.

Finally, also note that user data needs to be base64 encoded – this is done by the Fn::Base64 section in the example.
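
For the curious, the encoding itself is plain base64, so it is easy to see outside of CloudFormation what Fn::Base64 produces. A minimal Python sketch (the script text here is a trimmed-down placeholder, not the full user data from the template):

```python
import base64

# A trimmed-down stand-in for the user data script in the template
user_data = "#!/bin/bash -xe\n/usr/bin/yum -y update\n"

# Fn::Base64 performs the equivalent of this encoding before the
# user data is handed to EC2
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
print(encoded)

# Decoding returns the original script unchanged
assert base64.b64decode(encoded).decode("utf-8") == user_data
```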

Creation policies and dependencies

The last little bit that needs to be mentioned in regards to the EC2 instances is the creation policies and dependencies attached to them. These are not unique to EC2 instances (and hence they are not properties of that specific resource type, as can be seen from their scope in the template).

Consider the following scenario: the NAT instance has generally the same UserData as the EC2 instances – it updates and reboots as well. During the period that the NAT instance is rebooting, internet access will be unavailable to the 2 web instances in the private subnet. If all 3 instances were set to install at the same time without the web instances waiting for the NAT instance, it is plausible that there would be a time where the web instances would be attempting updates while the NAT instance was rebooting. This would, of course, break updates, and possibly the creation of the CloudFormation stack.

This is what creation policies and dependencies are for. Generally, when using user data, one does not want to count a resource as created until everything is done. In this case, that means the instance has had all of its software updated, has installed any other software it needs (as is the case with the web instances), and has been fully rebooted.

The CreationPolicy defined above waits until that is all done. By what is defined there, it waits for one cfn-signal command to be run for the resource (defined by ResourceSignal), with a 10-minute Timeout (if the format looks weird, it's because it is an ISO 8601 duration). This gives the node enough time to fully update and restart.
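
To sanity-check a Timeout value, here is a small Python sketch that handles only the PT<n>M minutes-only form used in the template; it is an illustration, not a full ISO 8601 duration parser:

```python
import re

def duration_minutes(iso8601: str) -> int:
    """Parse a simple ISO 8601 duration of the form PT<n>M into minutes.

    Only the minutes-only case used in the template is handled.
    """
    match = re.fullmatch(r"PT(\d+)M", iso8601)
    if match is None:
        raise ValueError(f"unsupported duration: {iso8601}")
    return int(match.group(1))

print(duration_minutes("PT10M"))  # 10 (the template's timeout)
```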

And finally, the DependsOn attribute ties the web instances to the NAT instance. This ensures CloudFormation waits until the NAT instance (referred to by its resource name, VCTSLabNatGw) has completed creation and received its own cfn-signal before even attempting to create them, giving us an error-free template!


This concludes the intro article. I hope that you found the material informative!

Watch this space for much more coverage of AWS services as I continue my “world tour”. I’m not going to say 100% what is next, but more than likely Route 53 will be on the radar shortly, possibly along with an introduction to Identity and Access Management and Security Token Service – both are pretty important when organizing security on an AWS account these days, and there is a lot to digest.

See you then!

AWS Basics Using CloudFormation (Part 2) – Amazon VPC

This is the second part of a 3-part article covering the basics of AWS through using CloudFormation. For the first part of this article, click here.

Last week I covered the basics of CloudFormation – giving an idea of how a template is generally structured and all of the specific elements. This time, I am going into detail on how to set up Amazon VPC. Again, Amazon VPC is the first part of almost any AWS deployment – think of it as the datacenter and network layer. Not much else can happen unless these two items are present, can they? There are a few exceptions to this rule, such as Route 53, Amazon’s DNS hosting service, but even Route 53 can be provisioned in a VPC to provide private hosted zones that are not externally available.

And again, if you would like to follow along at home, remember to check out the template on the GitHub project.

Regions and Availability Zones

Two key concepts that should be explained while discussing VPCs are the concepts of regions and availability zones.

As a modern computing infrastructure that serves a wide variety of clients, AWS datacenters are spread throughout the world. These are known as regions. Examples of the regions would be the AWS datacenters in Virginia (us-east-1), Northern California (us-west-1), and Oregon (us-west-2).

Within each of these regions are availability zones, which according to the EC2 FAQ, are so separated from each other that even physical failures such as power outages or even fire at one availability zone would not affect others.

VPCs can span availability zones but not regions. In order to interconnect regions, a peer needs to be set up, or the VPCs need to be connected via other means, such as using a VPN.

Private and Public Networks

It takes a little getting used to how private and public networks work in a VPC if one is used to engineering their own networks.

First off, the difference between a private and a public subnet is very subtle – public subnets have an internet gateway attached to them, and instances require a public IP address to be able to get internet access. Instances that do not have public IPs cannot access the internet on their own, regardless of whether they are in a public subnet.

To solve the problem, one can use a NAT instance. The concept is explained below. What I do not cover are some of the other considerations that arise from this “manual” process, such as redundancy and security of the network. Hopefully, Amazon will consider automating this piece of the infrastructure soon, as it is the single manual setup element in the VPC platform and hence probably the one most prone to failure.

Advanced VPC topics

I do not cover more in-depth VPC topics here; however, due to the scale of some of Amazon’s customers, it is only natural that they would have a wide range of options for connecting enterprise networks to a VPC.

Consult the Network Administrator Guide for help on these integration topics (such as using VPNs, or routing protocols like BGP).

VPCs in CloudFormation

As there are a lot of VPC components in the CloudFormation template, I have broken some of the items up into respective sub-sections.

The root VPC resource is defined as an AWS::EC2::VPC resource:

"VCTSLabVPC1": {
  "Type": "AWS::EC2::VPC",
  "Properties": {
    "CidrBlock": "",
    "EnableDnsHostnames": true,
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-vpc" } ]

A couple of other things to note here: CidrBlock cannot be any bigger than a /16, so if you get errors mentioning something about the network address being invalid, try reducing the network size. EnableDnsHostnames allows DNS hostnames to be assigned to instances as they start up in this VPC – having this off may be useful if DNS will be managed outside of AWS, but otherwise it’s generally a good idea to enable it.
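
As a quick sanity check outside of CloudFormation, a candidate CIDR can be validated with Python’s ipaddress module. Note that the /28 lower bound below is AWS’s documented limit for a VPC, an assumption beyond what is stated above:

```python
import ipaddress

def vpc_cidr_ok(cidr: str) -> bool:
    """Check a candidate VPC CidrBlock against the /16 upper bound
    mentioned above (AWS also enforces a /28 lower bound)."""
    net = ipaddress.ip_network(cidr, strict=True)
    return 16 <= net.prefixlen <= 28

print(vpc_cidr_ok("10.0.0.0/16"))  # True
print(vpc_cidr_ok("10.0.0.0/8"))   # False: bigger than a /16
```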

Subnets, gateways, and routes

There are a couple of examples on subnets in the CloudFormation template, since it makes use of a NAT instance as well. This section will discuss the default (public) subnet to start with, and I will expand into the private subnet during the NAT instance section.

"VCTSLabSubnet1": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "CidrBlock": "",
    "MapPublicIpOnLaunch": true,
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-subnet" },
      { "Key": "subnet-type", "Value": "public" }
    "VpcId": { "Ref": "VCTSLabVPC1" }
"VCTSLabGateway": {
  "Type": "AWS::EC2::InternetGateway",
  "Properties": {
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-gateway" } ]
"VCTSLabGatewayAttachment": {
  "Type": "AWS::EC2::VPCGatewayAttachment",
  "Properties": {
    "InternetGatewayId": { "Ref": "VCTSLabGateway" },
    "VpcId": { "Ref": "VCTSLabVPC1" }
"VCTSLabPublicRouteTable": {
  "Type": "AWS::EC2::RouteTable",
  "Properties": {
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-routetable" },
      { "Key": "routetable-type", "Value": "public" }
"VCTSLabPublicDefaultRoute": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "DestinationCidrBlock": "0.0.0.0/0",
    "GatewayId": { "Ref": "VCTSLabGateway" },
    "RouteTableId": { "Ref": "VCTSLabPublicRouteTable" }
"VCTSLabPublicSubnet1Assoc": {
  "Type": "AWS::EC2::SubnetRouteTableAssociation",
  "Properties": {
    "SubnetId": { "Ref": "VCTSLabSubnet1" },
    "RouteTableId": { "Ref": "VCTSLabPublicRouteTable" }

There are a lot of things going on here. First, the subnet is defined with the AWS::EC2::Subnet resource type. The MapPublicIpOnLaunch property makes this the de facto public subnet, as anything launched in this subnet will get a public IP address. However, this is only part of the story, as without a gateway, public IP address assignments will not be possible or functional.

To that end, various routing resources are created: a gateway (type AWS::EC2::InternetGateway), a route table (AWS::EC2::RouteTable), and a default route (resource type AWS::EC2::Route). This effectively makes a route table with a default route; however, it also needs to be attached to the gateway that was created, and to the subnet. This is done using the AWS::EC2::VPCGatewayAttachment and AWS::EC2::SubnetRouteTableAssociation resources.

At this point, the subnet is ready for use; however, without security policies it may not be of much use, or may be extremely insecure.

Security groups

Shown below is the CloudFormation resource for the public subnet’s security group. This is defined as type AWS::EC2::SecurityGroup, and is named for the fact that it will mainly apply to the NAT instance only.

"VCTSNatSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-sg" } ],
    "GroupDescription": "NAT (External) VCTS security group",
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "SecurityGroupIngress": [
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "80", "ToPort": "80" },
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "443", "ToPort": "443" },
      { "IpProtocol": "tcp", "CidrIp": { "Ref": "SSHAllowIPAddress" }, "FromPort": "22", "ToPort": "22" }
    "SecurityGroupEgress": [
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "22", "ToPort": "22" },
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "80", "ToPort": "80" },
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "443", "ToPort": "443" }

Access rules are defined through the SecurityGroupIngress (inbound) and SecurityGroupEgress (outbound) properties. Here, SSH traffic is only allowed inbound from the IP address supplied via the SSHAllowIPAddress parameter. SSH is also allowed general outbound – this allows the ability to “bounce” off the NAT instance into the private network. HTTP and HTTPS are allowed both inbound and outbound generally – this might be confusing at first, seeing as there is some redundancy between this and the (not shown) load balancer access list, but consider that the NAT instance has to handle traffic going in both directions – HTTP needs to come in to the NAT instance and then back out to the internet, so the access list has to allow for both.

Two security groups are not shown here – the load balancer group (named VCTSElbSecurityGroup) and the private group (named VCTSPrivateSecurityGroup). The former simply allows HTTP traffic generally, and the latter is a general allow for any traffic flowing into private instances.

One last thing to note – keep in mind that there are two kinds of network security concepts in a VPC: security groups, as shown here, and network ACLs, which I do not discuss. The former are applied at the instance level, and the latter at the subnet level. At the very least, there should be some security applied at the instance level; network ACLs can then serve as a fallback. As an example, the private security group is very loose, and its subnet could benefit from an ACL being applied to it, just in case a public address got assigned within it (even though, as the CloudFormation template is currently set up, that is impossible). The VPC Security Comparison document gives a great breakdown of the differences.

NAT instances

Below are the CloudFormation template snippets for the NAT instance. There is some information overlap here: the subnet and route table items have mainly been explained already, and the network security group has already been shown above, so it is not repeated. Given that, I will just show the relevant subnet and route entries, and then briefly explain the NAT instance itself, which is an EC2 instance – EC2 is described in detail later on.

Here are the private subnet and routes. Note how MapPublicIpOnLaunch is off. Also, there is a hook into the availability zone that the public subnet is in, as load balancing can break if the private and public subnets are inconsistently created in different availability zones. Finally, the private subnet uses the NAT instance as its default route.

"VCTSLabSubnet2": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "CidrBlock": "",
    "MapPublicIpOnLaunch": false,
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-subnet" },
      { "Key": "subnet-type", "Value": "private" }
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "AvailabilityZone": { "Fn::GetAtt" : [ "VCTSLabSubnet1", "AvailabilityZone" ] }
"VCTSLabPrivateRouteTable": {
  "Type": "AWS::EC2::RouteTable",
  "Properties": {
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-routetable" },
      { "Key": "routetable-type", "Value": "private" }
"VCTSLabPrivateDefaultRoute": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "DestinationCidrBlock": "0.0.0.0/0",
    "InstanceId": { "Ref": "VCTSLabNatGw" },
    "RouteTableId": { "Ref": "VCTSLabPrivateRouteTable" }
"VCTSLabPrivateSubnet2Assoc": {
  "Type": "AWS::EC2::SubnetRouteTableAssociation",
  "Properties": {
    "SubnetId": { "Ref": "VCTSLabSubnet2" },
    "RouteTableId": { "Ref": "VCTSLabPrivateRouteTable" }

And here is the NAT instance itself. Again, I am not describing it in detail here, as it is an EC2 instance and will be explained in its own section. However, do note that the AMI maps to a list of Amazon Linux AMIs specifically configured for NAT use – there are scripts on these images that set up the NAT table and IP forwarding. Also, the NAT instance does not have two interfaces, one in each subnet, like a traditional router would – traffic flows through the VPC’s own routers in such a way that only an IP in the public subnet is required.

"VCTSLabNatGw": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "NatRegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro",
    "KeyName": { "Ref": "KeyPair" },
    "SubnetId": { "Ref": "VCTSLabSubnet1" },
    "SourceDestCheck": false,
    "SecurityGroupIds": [ { "Ref": "VCTSNatSecurityGroup" } ],
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-natgw" } ],
    "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
      "#!/bin/bash -xe\n",
      "/usr/bin/yum -y update\n",
      "echo \"/opt/aws/bin/cfn-signal -e $? ",
      "  --stack ", { "Ref": "AWS::StackName" }, " ",
      "  --resource VCTSLabNatGw ",
      "  --region ", { "Ref": "AWS::Region" }, " ",
      "  && sed -i 's#^/opt/aws/bin/cfn-signal .*\\$##g' ",
      "  /etc/rc.local\" >> /etc/rc.local\n",
  "CreationPolicy" : { "ResourceSignal" : { "Count" : 1, "Timeout" : "PT10M" } }

Next Article – ELB and EC2

Stay tuned for the conclusion of this 3-part article, where I discuss setting up ELB and EC2 and their respective items in the CloudFormation template!

AWS Basics Using CloudFormation (Part 1) – Introduction to CloudFormation

This article is the first of many – as mentioned in the last article, I will be writing more articles over the course of the next several months on AWS, touching on as much of the service as I can get my hands on.

For my first article, I am starting with the basics – CloudFormation, Amazon VPC (Virtual Private Cloud), Elastic Load Balancing, and finally, EC2. The services covered here serve as some of the basic building blocks of an Amazon infrastructure, and are some of the oldest components of AWS. This will serve as an entry point not only into further articles, but for myself, and you the reader, into learning more about AWS and becoming more comfortable with the tools that manage it.

However, this article got so large that I have had to separate it into 3 parts! So, for the first article, I will mainly cover CloudFormation; the second will cover VPC, and the third will cover ELB and EC2.

Viewing the Technical Demo

All of the items covered in this article have been assembled into a CloudFormation template that can be downloaded from the GitHub page:

There is a README there that provides instructions on how to download and use the template.


I selected the first features of AWS to cover in a way that could give someone who is already familiar with the basic concepts of modern cloud computing and devops (which include virtual infrastructure, automation, and configuration management) an idea of what those mean when dealing with AWS and its products. Ultimately, this meant building an example that would create a full running basic “application” that could be created and destroyed with a single command.

CloudFormation is Amazon’s primary orchestration product, and covers a wide range of services that make up the core of AWS’s infrastructure. It is used in this article to manage every service I touch – besides IAM and access keys, which are not covered here, nothing in this example has been set up through the AWS console. Given that the aforementioned two items have been set up, all that is necessary to create the example is a simple aws cloudformation CLI command.

Amazon VPC is the modern (and current) virtual datacenter platform that makes up the base of AWS. From a VPC, networks, gateways, access lists, and peer connections (such as VPN endpoints and more) are made to cover both the needs of a public-facing application and the private enterprise. It is pretty much impossible to have a conversation about AWS these days without using VPC.

Amazon EC2 is one of Amazon’s oldest and most important products. It is the solution that gave “the cloud” its name, and while Amazon has created a large number of platform services that have removed the need for EC2 in quite a few applications (indeed, one can run an entire application these days in AWS without a single EC2 instance), it is still highly relevant, and will continue to be as long as there is ever a need to run a server and not a service. Products such as VPC NAT instances (covered in part 2) and Amazon EC2 Container Service (not covered here) also use EC2 directly, without abstracting it away, so its importance in the service portfolio is still directly visible to the user.

I put these three products together in this article – with CloudFormation, a VPC is created. This VPC has two subnets, a public subnet and a private subnet, along with a NAT instance, so that one can see some of the gotchas that can be encountered when setting up such infrastructure (and hopefully avoid some of the frustration that I experienced, mentioned in the appropriate section). An ELB is also created for two EC2 instances that will, upon creation, do some basic configuration to make themselves available over HTTP and serve up a simple static page that allows one to see both the ELB and EC2 instances in action.


CloudFormation is Amazon’s #1 infrastructure management service. With features that cover both deployment and configuration management, the service supports over two dozen AWS products, and can be extended to support external resources (and AWS processes not directly supported by CloudFormation) via custom resources.

One does not necessarily need to start off with CloudFormation completely from scratch. There are templates available at the AWS CloudFormation Templates page that have both examples of full stacks and individual snippets of various AWS services, which can be a great time saver in building custom templates.

The following few sections cover CloudFormation elements in further detail. It is a good idea to consult the general CloudFormation User Guide, which provides a supplement to the information below, and is also a good reference while designing templates, be it starting from scratch or working from existing templates.

CloudFormation syntax synopsis

Most CloudFormation items (aside from root items like the template version and description) can be summarized as a name/type pairing. Basically, in any given section – parameters, resources, mappings, or anything else – items in CloudFormation are generally assigned a unique name, and then a type. Consider the following example parameter:

"Parameters": {
  "KeyPair": {
    "Type": "AWS::EC2::KeyPair::KeyName",
    "Description": "SSH key that will be used for EC2 instances (set up in web console)",
    "ConstraintDescription": "needs to be an existing EC2 keypair (set up in web console)"

This parameter is an AWS::EC2::KeyPair::KeyName parameter named KeyPair. The latter name can (and will) be referenced in resources, such as the EC2 instance definitions (see the section on EC2 below).

Look in the below sections for CloudFormation’s Ref function, which will be used several times; this function serves as the basis for referencing several kinds of CloudFormation elements, not just parameters.
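
To make the name/type pairing and Ref ideas concrete, here is a small hypothetical Python sketch that resolves a parameter Ref the way CloudFormation would; the template fragment, resource name, and supplied key value are placeholders, and real CloudFormation also resolves Refs to resources and pseudo parameters:

```python
import json

# A tiny hypothetical template fragment: a named, typed parameter and a
# resource property that references it via Ref
template = json.loads("""
{
  "Parameters": { "KeyPair": { "Type": "AWS::EC2::KeyPair::KeyName" } },
  "Resources": {
    "Srv1": {
      "Type": "AWS::EC2::Instance",
      "Properties": { "KeyName": { "Ref": "KeyPair" } }
    }
  }
}
""")

def resolve_ref(node, supplied_params):
    """Sketch of Ref resolution for parameters only."""
    if isinstance(node, dict) and set(node) == {"Ref"}:
        return supplied_params[node["Ref"]]
    return node

key_name = resolve_ref(
    template["Resources"]["Srv1"]["Properties"]["KeyName"],
    {"KeyPair": "my-lab-key"},  # value supplied at stack creation (hypothetical)
)
print(key_name)  # my-lab-key
```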

Parameters and Outputs

Parameters are how data gets into a CloudFormation template. They can be used to do things like get the IDs of SSH keys to assign to instances (as shown above), or IP addresses to assign to security group ACLs. These are the two things parameters are used for in the example.

Outputs are how data gets out of CloudFormation. Data that is a useful candidate for being published through outputs includes instance IP addresses, ELB host names, VPC IDs, and anything else that may be useful to a process outside of CloudFormation. This data can be read through the UI, or through the JSON produced by the aws cloudformation describe-stacks CLI command (and through the API as well).

Parameter syntax

Let’s look at the other example in the CloudFormation template, the SSHAllowIPAddress parameter. This example uses more generic data types and gives a bigger picture of what is possible with parameters. Note that there are several data types that can be used, including both typical generic data types, such as Strings and Numbers, and AWS-specific types such as the AWS::EC2::KeyPair::KeyName parameter used above.

"SSHAllowIPAddress": {
  "Type": "String",
  "AllowedPattern": "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\/32",
  "Description": "IP address to allow SSH from (only /32s allowed)",
  "ConstraintDescription": "needs to be in A.B.C.D/32 form"

This parameter is of type String, which means the AllowedPattern constraint can be used on it; here it defines a dotted-quad regular expression, with the /32 netmask being explicitly enforced. JSON/JavaScript escaping applies here, which explains the somewhat excessive nature of the backslashes.
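
The same pattern can be tried out in Python. Note how the JSON double backslashes collapse to single ones in a raw string; the sample addresses below are placeholders:

```python
import re

# The AllowedPattern from the template, written as a raw Python string
pattern = re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/32")

print(bool(pattern.fullmatch("203.0.113.10/32")))  # True
print(bool(pattern.fullmatch("203.0.113.10/24")))  # False: only /32 allowed
print(bool(pattern.fullmatch("203.0.113.10")))     # False: netmask required
```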

Parameters are referenced using the Ref function. The snippet below gives an example of the SSHAllowIPAddress‘s reference:

"VCTSNatSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "SecurityGroupIngress": [
      { "IpProtocol": "tcp", "CidrIp": { "Ref": "SSHAllowIPAddress" }, "FromPort": "22", "ToPort": "22" }

Ref is a very simple function, usually just used to refer back to a CloudFormation element. It is not restricted to parameters; it is used with parameters, mappings, and resources alike. Further examples will be given below, so there should be a good idea of how to use it by the end of this article.

Output syntax

Below is the NatIPAddr output, pulled from the example.

"Outputs": {
  "NatIPAddr": {
    "Description": "IP address of the NAT instance (shell to this address)",
    "Value": { "Fn::GetAtt": [ "VCTSLabNatGw", "PublicIp" ] }

The nature of outputs is pretty simple. The data can be pulled in any way that allows one to get the needed value. Most commonly, this will be via the Fn::GetAtt function, which can be used to get various attributes from resources, or possibly Ref, which in the case of resources usually references a specific primary attribute.
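
As a sketch of consuming outputs programmatically, here is some Python that pulls NatIPAddr out of a trimmed-down, hypothetical describe-stacks response; the stack name and IP address below are made up:

```python
import json

# A trimmed-down, hypothetical sample of what
# `aws cloudformation describe-stacks` returns for the stack
sample = json.loads("""
{
  "Stacks": [
    {
      "StackName": "vcts-lab",
      "Outputs": [
        { "OutputKey": "NatIPAddr", "OutputValue": "203.0.113.25" }
      ]
    }
  ]
}
""")

# Index the first stack's outputs by key, then look up the one we want
outputs = {o["OutputKey"]: o["OutputValue"] for o in sample["Stacks"][0]["Outputs"]}
print(outputs["NatIPAddr"])  # 203.0.113.25
```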


Mappings allow a CloudFormation template some flexibility. The best example of this is allowing the CloudFormation template to be used in multiple regions, by mapping AMIs (instance images) to their respective regions.

Mapping syntax

This is the mapping from the reference template; it maps regions to Amazon Linux AMIs. These are chosen because they support cfn-init out of the box, which was originally going to be used in the CloudFormation template to run some commands via the AWS::CloudFormation::Init resource type in the EC2 section, but I opted to use user data instead (I cover this in further detail in part 3).

"Mappings": {
  "RegionMap": {
    "us-east-1": { "AMI": "ami-1ecae776" },
    "us-west-1": { "AMI": "ami-e7527ed7" },
    "us-west-2": { "AMI": "ami-d114f295" }
  "NatRegionMap": {
    "us-east-1": { "AMI": "ami-303b1458" },
    "us-west-1": { "AMI": "ami-7da94839" },
    "us-west-2": { "AMI": "ami-69ae8259" }

The above mappings are then referenced in EC2 instances like so (here, the NAT instance referencing NatRegionMap):

"VCTSLabNatGw": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "NatRegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro",

This is one of many ways to use mappings, and more complex structures are possible. Check the documentation for further examples (such as how to expand the above map to make use of processor architecture in addition to region).
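
Fn::FindInMap is ultimately just a two-level key lookup. A minimal Python sketch of that behavior, using the RegionMap data from above:

```python
# The RegionMap from the template, as plain Python data
region_map = {
    "us-east-1": {"AMI": "ami-1ecae776"},
    "us-west-1": {"AMI": "ami-e7527ed7"},
    "us-west-2": {"AMI": "ami-d114f295"},
}

def find_in_map(mapping, top_key, second_key):
    """Rough equivalent of Fn::FindInMap: a two-level key lookup."""
    return mapping[top_key][second_key]

# Equivalent of { "Fn::FindInMap": [ "RegionMap", "us-west-2", "AMI" ] }
print(find_in_map(region_map, "us-west-2", "AMI"))  # ami-d114f295
```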


Resources do the real work of CloudFormation. They create the specific elements of the stack and interface with the parameters, mappings, and outputs to do the work necessary to bring up the stack.

Since resources vary so greatly in what they need in real-world examples, I explain each service that the template makes use of in its respective section (ie: the VPC, ELB, and EC2 sections). However, some common elements are explained briefly here, to give a primer on how they can be used to further control orchestration of the stack. Again, further detail on how to use these is shown in the examples for the various AWS services below.

Creation Policies, Dependencies, and Metadata

A CreationPolicy can be used as a constraint to determine when a resource is counted as created. For example, this can be used with cfn-signal on an EC2 instance to ensure that the resource is not marked as CREATE_COMPLETE until all reasonable post-installation work has been done on an instance (for example, after all updates have been applied or certain software has been installed).

A dependency (defined with DependsOn) is a simple association to another resource that ties its creation with said parent. For example, the web server instances in the example do not start creation until the NAT instance is complete, as they are created in a private network and will not install properly unless they have internet access available to them.

Metadata can be used for a number of things. The example commonly explained is the use of the AWS::CloudFormation::Init metadata type to provide data to cfn-init, which is a simple configuration management tool. This is not covered in the example, as the work that is being done is simple enough to be done through UserData.

All 3 of these concepts are touched on in further detail in part 3, when EC2 and the setup of an instance in CloudFormation are discussed.

Next Article – Amazon VPC

That about covers it for the CloudFormation part of this article. Stay tuned for the next part, in which I cover Amazon VPC basics, in addition to how it is set up in CloudFormation!


AWS NYC Summit 2015 Recap

When one gets the chance to go to New York, one takes it, as far as I’m concerned. So when PayByPhone offered to send me to the AWS NYC Summit, I totally jumped at the chance. In addition to getting to stand on top of two of the world’s tallest buildings, take a bike across the Brooklyn Bridge, and get some decent Times Square shots, I got to learn quite a bit about ops using AWS. Win-win!

The AWS NYC summit was short, but definitely one of the better conferences that I have been to. I did the Taking AWS Operations to the Next Level “boot camp” – a day-long training course – during the pre-day, and attended several of the breakouts the next day. All of them had great value and I took quite a few notes. I am going to try and abridge all of my takeaways on the products, “old” and new, that caught my eye below.


CloudFormation was covered in my pre-day course and also during one of the breakouts I attended. It’s probably the most robust ops product offered on AWS today, supporting, from my impressions, more products than any of the other automation platform services on offer.

The idea with CloudFormation, of course, is that infrastructure is codified in a JSON-based template, which is then used to create “stacks” – entities that group infrastructure and platform services in ways that can be duplicated, destroyed, or even updated with a sort of intelligent convergence, adding, removing, or changing resources depending on what has been defined in the template. Naturally, this can be integrated with any sort of source control so that changes are tracked, and with a CI and deployment pipeline to further automate things.

One of the neat features that was mentioned in the CloudFormation breakout was the ability for CloudFormation to use Lambda-backed resources to interface with AWS features that CloudFormation does not have native support for, or even non-AWS products. All of this makes CloudFormation definitely seem like the go-to product if one is planning on using native tools to deploy AWS infrastructure.


OpsWorks is Amazon’s Chef product, although it’s quite a bit more than that. It seems mainly like a hybrid of a deployment system like CloudFormation, with Chef being used to manage configuration through several points of the lifecycle. It uses chef-solo or chef-zero depending on the OS it is being employed for (Linux is chef-solo and Chef 11, and Windows is chef-zero and Chef 12), and since it is all run locally, there is no Chef server.

In OpsWorks, an application stack is deployed using components called Layers. Layers exist for load balancing, application servers, and databases, in addition to others such as caching, and even custom ones that can utilize functionality created through Chef cookbooks. With support for even some basic monitoring, one can probably run an entire application in OpsWorks without ever touching another AWS ops tool.

AWS API Gateway

A few new products were announced at the summit – but API Gateway was the one killer app that caught my eye. Ultimately, it means that developers do not really need to mess around with frameworks any more to get an API, or even a web application, off the ground – just hook in the endpoints with API Gateway, integrate them with Lambda, and it’s done. With the way that AWS’s platform portfolio is looking these days, I’m surprised that this one was so late to the party!

CodeDeploy, CodePipeline, and CodeCommit

These were presented to me in a breakout that gave a bit of a timeline of how Amazon internally developed its own deployment pipelines, which ultimately segued into these three tools.

CodeDeploy is designed to deploy an application not only to AWS, but also to on-premise resources, and even other cloud providers. The configuration is YAML-based and pretty easy to read. As part of its deployment feature set, it does offer some orchestration facilities, so there is some overlap with some of AWS’s other tools. It also integrates with several kinds of source control platforms (ie: GitHub), other CI tools (ie: Jenkins), and configuration management systems. The killer feature for me is its support for rolling updates, automating the deployment of new infrastructure while draining out the old.

CodePipeline is a release modeling and workflow engine, and can be used to model a release process, working with CodeDeploy, to automate the deployment of an application from source, to testing, and then to production. Tests can be automated using services like RunScope or Ghost Inspector, to name just a couple. It definitely seems like these two – CodePipeline and CodeDeploy – are naturally coupled to give a very smooth software deployment pipeline – on AWS especially.

CodeCommit is AWS’s foray into source control, a la GitHub, Bitbucket, etc. Aside from all the general things that one would hopefully expect from a hosted source control service (ie: at-rest encryption and high availability), expect a few extra AWS-ish perks, like the ability to use an existing IAM scheme to assign ACLs to repositories. Unlimited storage per repository was mentioned, or at least implied, but there does appear to be some metering – see here for pricing.

EC2 Container Service (ECS)

The last breakout I checked out was one on EC2 Container Service (ECS). This is AWS’s Docker integration. The breakout itself spent a bit of time on an intro to containers, which is out of the scope of this article, but is a topic I may touch on at a later time (Docker is on “the list” of things I want to evaluate and write on). Docker is a great concept: it rolls configuration management and containerization into one tool for deploying applications, and gives a huge return on infrastructure in the form of speed and density. The one unanswered question has been clustering – there have been several third-party solutions for that for a while, and Docker itself has a solution that is still fairly new.

ECS does not appear to be built on Swarm, but is its own mechanism. In addition to clustering, ECS works with other AWS services, such as Elastic Load Balancing (ELB), Elastic Block Storage (EBS), and back office things like IAM and CloudTrail. Container templates are rolled into entities called tasks, where one can also specify resource requirements, volumes to use, and more. One can then use a task to create a run task, which runs for a finite amount of time and then terminates, or a service, which ensures that the task stays up and running indefinitely. The ability also exists to specify an instance count for a task, which is then spread out across the instance pool.
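To make the task concept concrete, here is a sketch of a minimal task definition. The field names (family, containerDefinitions, portMappings, and so on) follow the ECS task definition schema, but the family name, image, and resource values are placeholder assumptions. It is written as a shell heredoc here; in real use it would be registered and run via the AWS CLI.

```shell
# Hypothetical ECS task definition. Field names follow the ECS schema;
# the family name, image, and resource values are placeholders.
cat > webapp-task.json <<'EOF'
{
  "family": "sample-webapp",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
EOF

# In real use (assumes a configured AWS CLI and an existing cluster):
#   aws ecs register-task-definition --cli-input-json file://webapp-task.json
#   aws ecs run-task --cluster default --task-definition sample-webapp
```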

This is probably a good time to mention that ECS still does not provide abstraction across the container infrastructure. There is a bit of automation to help with this, but EC2 instances still need to be manually spun up and added to the ECS pool, from which ECS then derives the available resources and how it distributes the containers. One would assume that there are plans to eliminate the visibility of the EC2 layer from the user – it seems like Amazon is aware of requests to do this, as was mentioned when I asked the question.

Ultimately, it looks like there is still some work to do to make ECS completely integrated. Auto-scaling, for example, is still a manual process, though for now there are documented examples of how to glue it together using Lambda.


You can see all – or at least most – of the presentations and breakouts that happened at the summit (and plenty of other conferences) on the AWS YouTube page.

And as for me, expect to see, over the next several weeks, a product-by-product review of as much of AWS as I can eat. I will be starting with some of the products that were discussed above, with a mix of some of the old standbys in as well to ensure I cover things from the ground up. Watch this space!

Breaking Into the CentOS Cloud Image, and Modern Linux Password Recovery

In this day and age of cloud computing, installing an image from scratch is rarely needed, and is probably only necessary when provisioning physical hardware. Major Linux vendors, such as Ubuntu and CentOS, offer cloud versions of their images. Using these images with a compatible system means one can get a fresh Linux install up and running very quickly, be it on a public cloud like EC2, an OpenStack infrastructure, or even just a basic KVM host.

However, if it’s desired to use some of these images in a non-cloud setup, such as the latter scenario, there are some things that need to be done. I will be using the CentOS images as an example.

Step 1 – Resetting the Root Password

After the image has been downloaded and added into KVM, the root password needs to be reset.

This is actually a refresh of an age-old trick to get into Linux systems. It used to be as easy as adding init=/bin/bash to the end of the kernel parameters in GRUB, but times have changed a bit. This method actually still works, but needs a couple of additions to get it going. Read on below.

A note – SELinux

SELinux is enabled on the CentOS cloud images. The steps below include disabling it when the root password is reset. Make sure this is done, or you will have a bad time. Note that method #1 also includes enforcing=0 as a boot parameter, so if this is missed, there is an opportunity to do so in the current boot session before the system is rebooted.

Method #1 – “graceful” through rd.break

This is the Red Hat supported method as per the manual.

rd.break stops the boot process just before the initial ramdisk hands control off to the actual system. There are some situations where this can cause issues, but these are rare, and a cloud image is far from one of them.

Reboot the system, and abort the GRUB menu timeout by mashing arrow keys as soon as the boot prompt comes up. Then, select the default option (usually the top, the latest non-rescue kernel option) and press “e” to edit the parameters.

Make sure the only parameters on the linux or linux16 line are:

  • The path to the kernel (should be the first option, probably referencing the /boot directory)
  • The path or ID for the root filesystem (the root= option)
  • The ro option

Then, supply the rd.break enforcing=0 options at the end. Press Ctrl-X to boot.

This will boot the system into an emergency shell that does not require a password, right at the point where the ramdisk would normally hand control off to the installed system.

When the system is in a rescue state like this, the root filesystem is mounted read-only on /sysroot. As such, a few extra steps are required to get it mounted so that the password can be reset properly. Run:

mount -o remount,rw /sysroot
chroot /sysroot
passwd root
[change password here]
vi /etc/sysconfig/selinux
[set SELINUX=disabled]
exit
mount -o remount,ro /sysroot

This loads /sysroot into a chroot shell. The password will be prompted for on the passwd root line. Also, make sure to edit /etc/sysconfig/selinux and set SELINUX=disabled. After both of these are done, exit the chroot, re-mount the filesystem read-only to flush any writes, and exit the emergency shell. The system will either now reboot or just resume booting.
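For the SELinux edit in particular, the vi step can be replaced with a one-line sed if a non-interactive approach is preferred. The sketch below runs against a local sample copy of the file; inside the chroot, the real path is /etc/sysconfig/selinux.

```shell
# Non-interactive alternative to editing /etc/sysconfig/selinux in vi.
# Demonstrated against a sample copy of the file; in the chroot, point
# sed at /etc/sysconfig/selinux instead.
cat > selinux.sample <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Replace whatever SELINUX= is set to with "disabled". The ^SELINUX=
# anchor does not match the SELINUXTYPE= line.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux.sample
grep '^SELINUX=' selinux.sample
# prints: SELINUX=disabled
```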

Method #2 – old-school init=/bin/bash

Funnily enough, init=/bin/bash still works, but there are some options that need to be removed on the CentOS system, as mentioned in method #1.

Reboot the system, and abort the GRUB menu timeout by mashing arrow keys as soon as the boot prompt comes up. Then, select the default option (usually the top, the latest non-rescue kernel option) and press “e” to edit the parameters.

Make sure the only parameters on the linux or linux16 line are:

  • The path to the kernel (should be the first option, probably referencing the /boot directory)
  • The path or ID for the root filesystem (the root= option)
  • The ro option

Then, supply the init=/bin/bash option at the end. Press Ctrl-X to boot.

After the initial boot, the system is dropped into a root shell. Unlike method #1, this shell is already in the installed system, with / as the root, so no chroot is necessary. Simply run the following:

mount -o remount,rw /
passwd root
[change password here]
vi /etc/sysconfig/selinux
[set SELINUX=disabled]
mount -o remount,ro /

The password will be prompted for on the passwd root command. Also, make sure to edit /etc/sysconfig/selinux and set SELINUX=disabled. After both of these are done, the filesystem should be remounted read-only to ensure that all writes are flushed. From here, simply reboot with a hard reset or Ctrl-Alt-Del.

Last Few Steps

Now that the system can be rebooted and logged into, there are a few final steps:

Remove cloud-init

This is probably spamming the console right about now. Go ahead and remove it.

systemctl stop cloud-init.service
yum -y remove cloud-init

Enable password authentication

Edit /etc/ssh/sshd_config and change PasswordAuthentication to yes. Make sure the line that is not commented out is changed. Then restart SSH:

systemctl restart sshd.service
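This edit can be scripted as well. The sketch below demonstrates the substitution against a sample copy of the file; on the real system, point sed at /etc/ssh/sshd_config before restarting SSH.

```shell
# Non-interactive version of the sshd_config edit, shown against a sample
# copy of the file. Note that only the uncommented line is changed.
cat > sshd_config.sample <<'EOF'
#PasswordAuthentication no
PasswordAuthentication no
EOF

# The ^ anchor skips the commented-out line.
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' sshd_config.sample
grep 'PasswordAuthentication' sshd_config.sample
# prints: #PasswordAuthentication no
#         PasswordAuthentication yes
```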

The cloud image should now be ready for general use.

Honorable Mention – Packer

All of this is not to say tools like Packer don’t have great merit in image creation – in fact, if you wanted to build a generic image for KVM rather than just grabbing one as mentioned above, there is a qemu builder that can do just that. Doing so also ensures that the image lacks the cloud-init tooling and whatnot that you may not need in your application.
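As a rough sketch, a Packer template using the qemu builder might start out something like the following. The builder type and field names come from the qemu builder's documented options, but the ISO URL, checksum, and credentials here are placeholder assumptions that would need to be filled in for a real build.

```shell
# Hypothetical starting point for a Packer qemu-builder template. All
# values below are placeholders, not a working configuration.
cat > centos-kvm.json <<'EOF'
{
  "builders": [
    {
      "type": "qemu",
      "iso_url": "http://mirror.example.com/CentOS-7-x86_64-Minimal.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "0000000000000000000000000000000000000000000000000000000000000000",
      "ssh_username": "root",
      "ssh_password": "changeme",
      "shutdown_command": "shutdown -P now",
      "format": "qcow2"
    }
  ]
}
EOF

# Then, roughly:
#   packer build centos-kvm.json
```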

Using X11 Forwarding Through sudo

X11 forwarding comes in handy from time to time. For example, say you want to use virt-manager to work with KVM VMs on your lab machine but want to do it from your Mac (ahem).

Yeah, if you didn’t gather it already, by “you”, I mean “me”.😉

The one main issue with this very specific above scenario is that virt-manager will more than likely require some sort of root-level privileges, and neither the ~/.Xauthority file, nor the DISPLAY or XAUTHORITY environment variables survive sudo escalation.

The manual fix is pretty easy though. Before escalation, run xauth list to get the session data.

The output looks like:

$ xauth list
localhost.localdomain/unix:99  MIT-MAGIC-COOKIE-1  aabbccddeeffgghh00112233445566

Take the second line (which is the session data). Then, after getting root (sudo su - works great), run xauth add with the session data:

xauth add localhost.localdomain/unix:99  MIT-MAGIC-COOKIE-1  aabbccddeeffgghh00112233445566

This will create the ~/.Xauthority file. If the DISPLAY environment variable is not set in the root shell, it will also need to be exported to match the forwarded display.
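The two steps can be glued together. The sketch below builds the xauth add command from a captured xauth list line – the display name and cookie are the sample values from above, and the exact DISPLAY value to export is an assumption that depends on the forwarded session.

```shell
# Sketch: build the `xauth add` command for the root shell from a captured
# `xauth list` line. In real use, capture it before escalating with:
#   entry="$(xauth list "$DISPLAY" | head -n 1)"
# The display name and cookie here are the sample values from above.
entry='localhost.localdomain/unix:99  MIT-MAGIC-COOKIE-1  aabbccddeeffgghh00112233445566'
cmd="xauth add $entry"
echo "$cmd"

# After `sudo su -`, run the printed command, and re-export DISPLAY if it
# is unset (e.g. export DISPLAY=localhost:99.0 -- an assumed value).
```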

We are now able to run X11 apps as root!

Shout out to Backdrift for the source.