AWS Basics Using CloudFormation (Part 1) – Introduction to CloudFormation

This article is the first of many – as mentioned in the last article, I will be writing more articles over the course of the next several months on AWS, touching on as much of the platform as I can get my hands on.

For my first article, I am starting with the basics – CloudFormation, Amazon VPC (Virtual Private Cloud), Elastic Load Balancing, and finally, EC2. The services covered here are some of the basic building blocks of an Amazon infrastructure, and some of the oldest components of AWS. This will serve as an entry point – not only into further articles, but for myself, and you the reader, into learning more about AWS and becoming more comfortable with the tools that manage it.

However, this article got so large that I had to separate it into three parts! So, for this first part, I will mainly cover CloudFormation; the second will cover VPC, and the third will cover ELB and EC2.

Viewing the Technical Demo

All of the items covered in this article have been assembled into a CloudFormation template that can be downloaded from the GitHub page.

There is a README there that provides instructions on how to download and use the template.


I selected the first AWS features to cover in a way that would give someone who is already familiar with the basic concepts of modern cloud computing and devops (which include virtual infrastructure, automation, and configuration management) an idea of what those concepts mean when dealing with AWS and its products. Ultimately, this meant building an example that creates a full running basic “application” that can be created and destroyed with a single command.

CloudFormation is Amazon’s primary orchestration product, and covers a wide range of services that make up the core of AWS’s infrastructure. It is used in this article to manage every service I touch – besides IAM and access keys, which are not covered here, nothing in this example has been set up through the AWS console. Given that the aforementioned two items have been set up, all that is necessary to create the example is a simple aws cloudformation CLI command.
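
As a sketch of what that command looks like – the stack name and template file name here are placeholders, and the KeyPair parameter corresponds to the one defined in the template below:

```
# Hypothetical invocation - stack name, template path, and key pair name are
# placeholders; adjust them to match your own setup.
aws cloudformation create-stack \
  --stack-name vcts-lab \
  --template-body file://template.json \
  --parameters ParameterKey=KeyPair,ParameterValue=my-ec2-key
```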

Amazon VPC is the modern (and current) virtual datacenter platform that makes up the base of AWS. From a VPC, networks, gateways, access lists, and peer connections (such as VPN endpoints and more) are made to cover both the needs of a public-facing application and the private enterprise. It is pretty much impossible to have a conversation about AWS these days without using VPC.

Amazon EC2 is one of Amazon’s oldest and most important products. It is the solution that gave “the cloud” its name, and while Amazon has created a large number of platform services that have removed the need for EC2 in quite a few applications (indeed, one can run an entire application these days in AWS without a single EC2 instance), it is still highly relevant, and will continue to be so long as there is ever a need to run a server and not a service. Products such as VPC NAT instances (covered in part 2) and Amazon EC2 Container Service (not covered here) also use EC2 directly, without abstracting it away, so its importance in the platform is still directly visible to the user.

I put these three products together in this article – with CloudFormation, a VPC is created. This VPC has two subnets, a public subnet and a private subnet, along with a NAT instance, so that one can see some of the gotchas that can be encountered when setting up such infrastructure (and hopefully avoid some of the frustration that I experienced, mentioned in the appropriate section). An ELB is also created for two EC2 instances that will, upon creation, do some basic configuration to make themselves available over HTTP and serve up a simple static page that allows one to see both the ELB and EC2 instances in action.

CloudFormation

CloudFormation is Amazon’s #1 infrastructure management service. With features that cover both deployment and configuration management, the service supports over two dozen AWS products, and can be extended to support external resources (and AWS processes not directly supported by CloudFormation) via custom resources.

One does not necessarily need to start off with CloudFormation completely from scratch. There are templates available at the AWS CloudFormation Templates page that have both examples of full stacks and individual snippets of various AWS services, which can be a great time saver in building custom templates.

The following few sections cover CloudFormation elements in further detail. It is a good idea to consult the general CloudFormation User Guide, which provides a supplement to the information below, and is also a good reference while designing templates, whether starting from scratch or working from existing ones.

CloudFormation syntax synopsis

Most CloudFormation items (aside from root items like the template version and description) can be summarized as a name/type pairing. Basically, in any given section – parameters, resources, mappings, or anything else – items in CloudFormation are generally assigned a unique name, and then a type. Consider the following example parameter:

"Parameters": {
  "KeyPair": {
    "Type": "AWS::EC2::KeyPair::KeyName",
    "Description": "SSH key that will be used for EC2 instances (set up in web console)",
    "ConstraintDescription": "needs to be an existing EC2 keypair (set up in web console)"
  }
}

This parameter is an AWS::EC2::KeyPair::KeyName parameter named KeyPair. The latter name can (and will) be referenced in resources, such as the EC2 instance definitions (see the below section on EC2).

Watch for CloudFormation’s Ref function in the sections below – it will be used several times, and serves as the basis for referencing several kinds of CloudFormation elements, not just parameters.

Parameters and Outputs

Parameters are how data gets into a CloudFormation template. They can be used to do things like get the IDs of SSH keys to assign to instances (as shown above), or IP addresses to assign to security group ACLs. These are the two things parameters are used for in the example.

Outputs are how data gets out of CloudFormation. Data that is a useful candidate for being published through outputs includes instance IP addresses, ELB host names, VPC IDs, and anything else that may be useful to a process outside of CloudFormation. This data can be read through the UI, or through the JSON data produced by the aws cloudformation describe-stacks CLI command (and probably the API as well).
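
For instance – assuming a stack named vcts-lab, a placeholder here – the outputs can be filtered out of the describe-stacks JSON with the CLI’s --query option:

```
# Hypothetical stack name; --query narrows the describe-stacks JSON down to
# just the Outputs list (OutputKey/OutputValue pairs).
aws cloudformation describe-stacks \
  --stack-name vcts-lab \
  --query 'Stacks[0].Outputs'
```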

Parameter syntax

Let’s look at the other example in the CloudFormation template, the SSHAllowIPAddress parameter. This example uses more generic data types and gives a bigger picture of what is possible with parameters. Note that there are several data types that can be used, including both typical generic types, such as String and Number, and AWS-specific types such as the AWS::EC2::KeyPair::KeyName type used above.

"SSHAllowIPAddress": {
  "Type": "String",
  "AllowedPattern": "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\/32",
  "Description": "IP address to allow SSH from (only /32s allowed)",
  "ConstraintDescription": "needs to be in A.B.C.D/32 form"
}

This parameter is of type String, which means the AllowedPattern constraint can be used on it; here it defines a dotted-quad regular expression, with the /32 netmask explicitly enforced. JSON/JavaScript string escaping applies here, which explains the somewhat excessive backslashes.
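
As a quick local sanity check of what that pattern accepts – this is just a sketch using grep -E, with [0-9] standing in for \d, and anchors added because CloudFormation matches AllowedPattern against the entire parameter value:

```shell
# The JSON pattern "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\/32" unescapes
# to the regex below; grep -E has no \d, so [0-9] stands in.
pattern='^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/32$'

echo '203.0.113.10/32' | grep -Eq "$pattern" && echo accepted   # a valid /32
echo '203.0.113.0/24'  | grep -Eq "$pattern" || echo rejected   # wrong netmask
```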

Parameters are referenced using the Ref function. The snippet below gives an example of referencing SSHAllowIPAddress:

"VCTSNatSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "SecurityGroupIngress": [
      { "IpProtocol": "tcp", "CidrIp": { "Ref": "SSHAllowIPAddress" }, "FromPort": "22", "ToPort": "22" }
    ]
  }
}

Ref is a very simple function, usually just used to refer back to a CloudFormation element. It is not restricted to parameters – it works with parameters, mappings, and resources alike. Further examples are given below, so you should have a good idea of how to use it by the end of this article.

Output syntax

Below is the NatIPAddr output, pulled from the example.

"Outputs": {
  "NatIPAddr": {
    "Description": "IP address of the NAT instance (shell to this address)",
    "Value": { "Fn::GetAtt": [ "VCTSLabNatGw", "PublicIp" ] }
  }
}

The nature of outputs is pretty simple. The data can be pulled in any way that yields the needed value. Most commonly, this will be via the Fn::GetAtt function, which can be used to get various attributes from resources, or possibly Ref, which in the case of resources usually references a specific primary attribute.

Mappings

Mappings give a CloudFormation template some flexibility. The best example of this is allowing the template to be used in multiple regions by mapping AMIs (instance images) to their respective regions.

Mapping syntax

Below is the mapping from the reference template, which maps regions to Amazon Linux AMIs. These were chosen because they support cfn-init out of the box, which was originally going to be used in the CloudFormation template to run some commands via the AWS::CloudFormation::Init resource type in the EC2 section, but I opted to use user data instead (covered in further detail in part 3).

"Mappings": {
  "RegionMap": {
    "us-east-1": { "AMI": "ami-1ecae776" },
    "us-west-1": { "AMI": "ami-e7527ed7" },
    "us-west-2": { "AMI": "ami-d114f295" }
  },
  "NatRegionMap": {
    "us-east-1": { "AMI": "ami-303b1458" },
    "us-west-1": { "AMI": "ami-7da94839" },
    "us-west-2": { "AMI": "ami-69ae8259" }
  }
}

These maps are then referenced in EC2 instances like so (here, NatRegionMap for the NAT instance):

"VCTSLabNatGw": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "NatRegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro"
  }
}

This is one of many ways to use mappings, and more complex structures are possible. Check the documentation for further examples (such as how to expand the above map to make use of processor architecture in addition to the region).
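
For illustration, such an expansion might look roughly like this – the map name, keys, and AMI IDs below are made up for the sketch:

```
"Mappings": {
  "RegionArchMap": {
    "us-east-1": { "64": "ami-aaaaaaaa", "32": "ami-bbbbbbbb" },
    "us-west-2": { "64": "ami-cccccccc", "32": "ami-dddddddd" }
  }
}
```

An instance would then look up an image with { "Fn::FindInMap": [ "RegionArchMap", { "Ref": "AWS::Region" }, "64" ] }, the third argument selecting the architecture key.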

Resources

Resources do the real work of CloudFormation. They create the specific elements of the stack and interface with the parameters, mappings, and outputs to do the work necessary to bring up the stack.

Since resources vary so greatly in what they need in real-world examples, I explain each service that the template makes use of in its respective section (ie: the VPC, ELB, and EC2 sections). However, some common elements are explained here in brief, to give a primer on how they can be used to further control orchestration of the stack. Again, further detail on how to use these is shown in the examples with the various AWS services explained below.

Creation Policies, Dependencies, and Metadata

A CreationPolicy can be used as a constraint to determine when a resource is counted as created. For example, this can be used with cfn-signal on an EC2 instance to ensure that the resource is not marked as CREATE_COMPLETE until all reasonable post-installation work has been done on an instance (for example, after all updates have been applied or certain software has been installed).
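
As a sketch, a CreationPolicy on an instance generally looks like the following – the resource name and timeout here are hypothetical, and the instance is expected to run cfn-signal once its post-installation work finishes:

```
"VCTSLabWebSrv1": {
  "Type": "AWS::EC2::Instance",
  "CreationPolicy": {
    "ResourceSignal": { "Count": "1", "Timeout": "PT15M" }
  }
}
```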

A dependency (defined with DependsOn) is a simple association to another resource that ties its creation with said parent. For example, the web server instances in the example do not start creation until the NAT instance is complete, as they are created in a private network and will not install properly unless they have internet access available to them.
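
Expressed in the template, that association is a single attribute on the dependent resource – the web server name here is hypothetical, while the NAT instance name comes from the earlier snippets:

```
"VCTSLabWebSrv1": {
  "Type": "AWS::EC2::Instance",
  "DependsOn": "VCTSLabNatGw"
}
```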

Metadata can be used for a number of things. The example commonly explained is the use of the AWS::CloudFormation::Init metadata type to provide data to cfn-init, which is a simple configuration management tool. This is not covered in the example, as the work that is being done is simple enough to be done through UserData.

All three of these concepts are touched on in further detail in part 3, when EC2 and the setup of an instance in CloudFormation are discussed.

Next Article – Amazon VPC

That about covers it for the CloudFormation part of this article. Stay tuned for the next part, in which I cover Amazon VPC basics, in addition to how it is set up in CloudFormation!

AWS NYC Summit 2015 Recap

When one gets the chance to go to New York, one takes it, as far as I’m concerned. So when PayByPhone offered to send me to the AWS NYC Summit, I totally jumped at the chance. In addition to getting to stand on top of two of the world’s tallest buildings, take a bike across the Brooklyn Bridge, and get some decent Times Square shots, I got to learn quite a bit about ops using AWS. Win-win!

The AWS NYC summit was short, but definitely one of the better conferences that I have been to. I did the Taking AWS Operations to the Next Level “boot camp” – a day-long training course – during the pre-day, and attended several of the breakouts the next day. All of them had great value and I took quite a few notes. I am going to try and abridge all of my takeaways on the products, “old” and new, that caught my eye below.

CloudFormation

CloudFormation was a product that was covered in my pre-day course and also during one of the breakouts that I attended. It’s probably the most robust ops product offered on AWS today, supporting – from my impressions – more products than any of the other automation platform services on offer.

The idea with CloudFormation, of course, is that infrastructure is codified in a JSON-based template, which is then used to create “stacks” – entities that group infrastructure and platform services in ways that can be duplicated, destroyed, or even updated with a sort of intelligent convergence, adding, removing, or changing resources depending on what has been defined in the template. Naturally, this can be integrated with source control so that changes are tracked, and with a CI and deployment pipeline to further automate things.

One of the neat features that was mentioned in the CloudFormation breakout was the ability for CloudFormation to use Lambda-backed resources to interface with AWS features that CloudFormation does not have native support for, or even non-AWS products. All of this makes CloudFormation definitely seem like the go-to product if one is planning on using native tools to deploy AWS infrastructure.

OpsWorks

OpsWorks is Amazon’s Chef product, although it’s quite a bit more than that. It seems mainly like a hybrid of a deployment system like CloudFormation, with Chef used to manage configuration through several points of the lifecycle. It uses chef-solo or chef-zero depending on the OS it is being employed for (Linux gets chef-solo and Chef 11, and Windows gets chef-zero and Chef 12), and since everything is run locally, there is no Chef server.

In OpsWorks, an application stack is deployed using components called Layers. Layers exist for load balancing, application servers, and databases, in addition to others such as caching and even custom ones whose functionality is created through Chef cookbooks. With support for even some basic monitoring, one could probably run an entire application in OpsWorks without touching another AWS ops tool.

AWS API Gateway

A few new products were announced at the summit – but API Gateway was the one killer app that caught my eye. Ultimately it means that developers do not really need to mess around with frameworks any more to get an API, or even a web application, off the ground – just hook in the endpoints with API Gateway, integrate them with Lambda, and it’s done. With the way that AWS’s platform portfolio is looking these days, I’m surprised that this one was so late to the party!

CodeDeploy, CodePipeline, and CodeCommit

These were presented to me in a breakout that gave a bit of a timeline on how Amazon internally developed their own deployment pipelines. Ultimately they segued into these three tools.

CodeDeploy is designed to deploy an application not only to AWS, but also to on-premise resources, and even other cloud providers. The configuration is YAML-based and pretty easy to read. As part of its deployment feature set, it does offer some orchestration facilities, so there is some overlap with some of AWS’s other tools. It also integrates with several kinds of source control platforms (ie: GitHub), other CI tools (ie: Jenkins), and configuration management systems. The killer feature for me is its support for rolling updates, automating the deployment of new infrastructure while draining out the old.
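
To give an idea of that YAML configuration, a minimal appspec.yml for an on-instance deployment looks roughly like this – the destination path and script name are placeholders:

```
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp
hooks:
  AfterInstall:
    - location: scripts/restart_app.sh
      timeout: 60
```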

CodePipeline is a release modeling and workflow engine, and can be used to model a release process, working with CodeDeploy, to automate the deployment of an application from source, to testing, and then to production. Tests can be automated using services like RunScope or Ghost Inspector, to name just a couple. It definitely seems like these two – CodePipeline and CodeDeploy – are naturally coupled to give a very smooth software deployment pipeline – on AWS especially.

CodeCommit is AWS’s foray into source control, a la GitHub, Bitbucket, etc. Aside from all the general things that one would hopefully expect from a hosted source control service (ie: at-rest encryption and high availability), expect a few extra AWS-ish perks, like the ability to use an existing IAM scheme to assign ACLs to repositories. Unlimited storage per repository was mentioned, or at least implied, but there does appear to be some metering – see here for pricing.

EC2 Container Service (ECS)

The last breakout I checked out was one on EC2 Container Service (ECS), AWS’s Docker integration. The breakout itself spent a bit of time on an intro to containers, which is out of the scope of this article, but is a topic I may touch on at a later time (Docker is on “the list” of things I want to evaluate and write on). Docker is a great concept: it rolls configuration management and containerization into one tool for deploying applications, and gives a huge return on infrastructure in the form of speed and density. The one unanswered question has been clustering, but there have been several third-party solutions for that for a while now, and Docker itself has a solution that is still fairly new.

ECS does not appear to be Swarm, but its own mechanism. In addition to clustering, ECS works with other AWS services, such as Elastic Load Balancing (ELB), Elastic Block Storage (EBS), and back-office things like IAM and CloudTrail. Container definitions are rolled into entities called tasks, where one can also specify resource requirements, volumes to use, and more. One can then use a task to create a run task, which will run for a finite amount of time and then terminate, or a service, which will ensure that the task stays up and running indefinitely. The ability also exists to specify a specific instance count for a task, which is then spread out across the instance pool.

This is probably a good time to mention that ECS still does not provide abstraction of the container infrastructure. There is a bit of automation to help with this, but EC2 instances still need to be manually spun up and added to the ECS pool, from which ECS then derives the amount of available resources and how it distributes the containers. One would assume that there are plans to hide the EC2 layer from the user – it seems like Amazon is aware of requests to do this, as was mentioned when I asked the question.

Ultimately, it looks like there is still some work to do to make ECS completely integrated. Auto-scaling is another feature that is still a manual process. For now, there are documented examples on how to do things like glue auto-scaling together using Lambda, for example.


You can see all – or at least most – of the presentations and breakouts that happened at the summit (and plenty of other conferences) on the AWS YouTube page.

And as for me, expect to see, over the next several weeks, a product-by-product review of as much of AWS as I can eat. I will be starting with some of the products that were discussed above, with a mix of some of the old standbys in as well to ensure I cover things from the ground up. Watch this space!

Breaking Into the CentOS Cloud Image, and Modern Linux Password Recovery

In this day and age of cloud computing, installing an image from scratch is something that is probably not needed very often, if at all – probably only when installing a hardware machine. Major Linux vendors, such as Ubuntu and CentOS, offer cloud versions of their images. Using these images with a compatible system means one can get started on a fresh Linux install very quickly, be it with a public cloud like EC2, an OpenStack infrastructure, or even just a basic KVM host.

However, if it’s desired to use some of these images in a non-cloud setup, such as the latter scenario, there are some things that need to be done. I will be using the CentOS images as an example.

Step 1 – Resetting the Root Password

After the image has been downloaded and added into KVM, the root password needs to be reset.

This is actually a refresh of an age-old trick for getting into Linux systems. Before, it was as easy as adding init=/bin/bash to the end of the kernel line in GRUB, but times have changed a bit. This method actually still works – it just needs a couple of additions to get it to go. Read on below.

A note – SELinux

SELinux is enabled on the CentOS cloud images. The steps below include disabling it when the root password is reset. Make sure this is done, or you will have a bad time. Note that method #1 also includes enforcing=0 as a boot parameter, so if this step is missed, you have an opportunity to do so in the current boot session before the system is rebooted.

Method #1 – “graceful”, through rd.break

This is the Red Hat supported method as per the manual.

rd.break stops the boot process within the initial ramdisk, just before the hand-off to the actual system. There are some situations where this can cause issues, but they are rare, and a cloud image is far from one of them.

Reboot the system, and abort the GRUB menu timeout by mashing arrow keys as soon as the boot prompt comes up. Then, select the default option (usually the top, the latest non-rescue kernel option) and press “e” to edit the parameters.

Make sure the only parameters on the linux or linux16 line are:

  • The path to the kernel (should be the first option, probably referencing the /boot directory)
  • The path or ID for the root filesystem (the root= option)
  • The ro option

Then, supply rd.break enforcing=0 option at the end. Press Ctrl-X to boot.

This will boot the system into an emergency shell that does not require a password, right before the point where the initial ramdisk would normally hand off to the real system.

When the system is in this rescue state, the real root filesystem is mounted on /sysroot. As such, a few extra steps are required to get the filesystem mounted read-write so that the password can be reset properly. Run:

mount -o remount,rw /sysroot
chroot /sysroot
passwd root
[change password here]
vi /etc/sysconfig/selinux
[set SELINUX=disabled]
exit
mount -o remount,ro /sysroot

This loads /sysroot into a chroot shell. The password will be prompted for on the passwd root line. Also, make sure to edit /etc/sysconfig/selinux and set SELINUX=disabled. After both of these are done, exit the chroot, re-mount the filesystem read-only to flush any writes, and exit the emergency shell. The system will either reboot or simply resume booting.

Method #2 – old-school init=/bin/bash

init=/bin/bash still works, funny enough, but there are some options that need to be removed on the CentOS system, as mentioned in method #1.

Reboot the system, and abort the GRUB menu timeout by mashing arrow keys as soon as the boot prompt comes up. Then, select the default option (usually the top, the latest non-rescue kernel option) and press “e” to edit the parameters.

Make sure the only parameters on the linux or linux16 line are:

  • The path to the kernel (should be the first option, probably referencing the /boot directory)
  • The path or ID for the root filesystem (the root= option)
  • The ro option

Then, supply the init=/bin/bash option at the end. Press Ctrl-X to boot.

After the initial boot, the system is dropped into a root shell. Unlike method #1, this shell is already inside the system – / is the root, and no chroot is necessary. Simply run the following:

mount -o remount,rw /
passwd root
[change password here]
vi /etc/sysconfig/selinux
[set SELINUX=disabled]
mount -o remount,ro /

The password will be prompted on the passwd root command. Also, make sure to edit /etc/sysconfig/selinux and set SELINUX=disabled. After both of these are done, the filesystem should be remounted read-only to ensure that all writes are flushed. From here, simply reboot through a hard reset or ctrl-alt-del.

Last Few Steps

Now that the system can be rebooted and logged into, there are a few final steps:

Remove cloud-init

cloud-init is probably spamming the console right about now. Go ahead and remove it:

systemctl stop cloud-init.service
yum -y remove cloud-init

Enable password authentication

Edit /etc/ssh/sshd_config and change PasswordAuthentication to yes. Make sure the line that is not commented out is changed. Then restart SSH:

systemctl restart sshd.service

The cloud image should now be ready for general use.

Honorable Mention – Packer

All of this is not to say that tools like Packer don’t have great merit in image creation – in fact, if you want to build a generic image for KVM rather than just grabbing one as described above, there is a qemu builder that can do just that. Doing so will also ensure that the image lacks the cloud-init tooling and whatnot that you may not need in your application.

Using X11 Forwarding Through sudo

X11 forwarding comes in handy from time to time. For example, say you want to use virt-manager to work with KVM VMs on your lab machine but want to do it from your Mac (ahem).

Yeah, if you didn’t gather it already, by “you”, I mean “me”. 😉

The one main issue with this very specific scenario is that virt-manager will more than likely require some sort of root-level privileges, and neither the ~/.Xauthority file nor the DISPLAY or XAUTHORITY environment variables survive sudo escalation.

The manual fix is pretty easy though. Before escalation, run xauth list to get the session data.

The output looks like:

$ xauth list
localhost.localdomain/unix:99  MIT-MAGIC-COOKIE-1  aabbccddeeffgghh00112233445566

Take the second line (which is the session data). Then, after getting root (sudo su - works great), run xauth add with the session data:

xauth add localhost.localdomain/unix:99  MIT-MAGIC-COOKIE-1  aabbccddeeffgghh00112233445566

This will create the ~/.Xauthority file and aforementioned environment variables.

We are now able to run X11 apps as root!

Shout out to Backdrift for the source.

nginx: More FastCGI Stuff

nginx’s a pretty powerful server, no doubt. But depending on your situation, a simple caching solution as discussed in my FastCGI Caching Basics article might not be enough, or may outright break your setup. You may also want nginx to co-exist with any existing application cache as well.

In this article, I’ll explain a few techniques that can be used to help alter nginx’s caching and proxy behaviour to get the configuration you need. 

Advanced use of fastcgi_param

fastcgi_param is an important part of setting up the environment for proper operation. It can also be used to manipulate the HTTP request as it is being passed to the backend, performing an ad-hoc mangling of the request.

Example: clearing the Accept-Encoding request header

fastcgi_param HTTP_ACCEPT_ENCODING "";

This is useful when an application is configured to send a response deflated, ie: with gzip compression. This is something that should be handled in nginx, not the backend, and caching a gzipped response with a key that may cause it to be sent to a client that does not support compression will obviously cause problems. Of course, this should probably be turned off in the application, but there may be situations where that is not possible. This effectively clears whatever was set here in the first place, possibly by something earlier in the config. 

Ignoring and Excluding Headers

Via fastcgi_ignore_headers and fastcgi_hide_header, one can manipulate the headers passed to the client and even control nginx’s behaviour itself. 

Example: Ignoring Cache-Control

Imagine a scenario where the application you are using has a caching feature that can complement nginx’s own cache by speeding up updates, further reducing load. However, this feature sends Cache-Control and Expires headers. nginx honours these headers, which may manipulate nginx’s cache in a way you do not want. 

This can be corrected via:

fastcgi_ignore_headers Cache-Control Expires;
fastcgi_hide_header Cache-Control;
fastcgi_hide_header Expires;

This does the following:

  • The fastcgi_ignore_headers line will ensure that nginx does not honor any data in the Cache-Control and Expires headers. Hence, if caching is enabled, nginx will cache the page even if there is a Cache-Control: no-cache header. Additionally, Expires from the response is ignored as well.
  • However, with this on, the response will still be passed to the client with the bad headers. That is probably not what is desired, so fastcgi_hide_header is used to ensure that those headers are not passed on to the client either.

Gotcha #1: Ensuring an Admin Area is not Cached

If the above techniques are employed to further optimize the cache, then it is probably a good idea to ensure that the admin area is not cached, explicitly, if Cache-Control was relied on to do that job before.

Below is an example of how to structure that in a config file, employing all of the other items I have discussed already. Note that static entry of SCRIPT_FILENAME and SCRIPT_NAME is probably necessary for the admin area, as the dynamic location block for general PHP files has the caching directives in it.

# fastcgi for admin area (no cache)
location ~ ^/admin.* {
  include fastcgi_params;

  # Clear the Accept-Encoding header to ensure response is not zipped
  fastcgi_param HTTP_ACCEPT_ENCODING "";

  fastcgi_pass unix:/var/run/php5-fpm.sock;
  fastcgi_index index.php;
  fastcgi_param SCRIPT_FILENAME $document_root/index.php;
  fastcgi_param SCRIPT_NAME /index.php;
}

location / {
  try_files $uri $uri/ /index.php$is_args$args;
}

# Standard fastcgi (caching)
location ~ \.php$ {
  include fastcgi_params;

  # Clear the Accept-Encoding header to ensure response is not zipped
  fastcgi_param HTTP_ACCEPT_ENCODING "";

  fastcgi_ignore_headers Cache-Control Expires;
  fastcgi_hide_header Cache-Control;
  fastcgi_hide_header Expires;

  fastcgi_pass unix:/var/run/php5-fpm.sock;
  fastcgi_index index.php;
  fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;

  fastcgi_cache fcgi_one;
  fastcgi_cache_valid 200 1m;
  fastcgi_cache_key $request_method-$http_host$fastcgi_script_name$request_uri;
  fastcgi_cache_lock on;
  fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
}

This setup will route all requests for /admin to FastCGI with a statically defined page and no cache options. This will ensure that no caching and header manipulation happens, while regular FastCGI requests will be sent to the optimized block with the caching and header bypass features.

Note the fastcgi_cache_key. This is the last thing I want to talk about:

Gotcha #2: Caching and HEAD Requests

nginx’s fastcgi_cache_key option does not have a default value, or at least the docs do not give one. The example given may also be insufficient for everyday needs – or at least I thought it was. However, proxy_cache_key has the following default value:


This was pretty close to what I needed initially, and I just altered it to $http_host$fastcgi_script_name$request_uri. However, when I put this config into production, we started seeing the occasional blank page, and our monitors were reporting timeouts. It took quite a bit of digging to find out that some of our cache entries were blank, and a little googling revealed the obvious, which was confirmed in the logs: HEAD requests were coming through when there was no cache data, and nginx was caching the empty responses under a key that could not discern between a HEAD and a GET request.

There are a couple of options here: use fastcgi_cache_methods to restrict caching to only GET requests, or add the request method into the cache key. I chose the latter, as HEAD requests in my instance caused just as much load as GET requests, and I would rather not run into a situation where we encounter load issues because the servers are hammered with HEAD requests.

The new cache key is below:

fastcgi_cache_key $request_method-$http_host$fastcgi_script_name$request_uri;
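
To confirm that the new key behaves as expected, nginx’s $upstream_cache_status variable can be surfaced in a response header. This is a hypothetical debugging addition, not part of the config above:

# Hypothetical debugging aid: expose the cache status (MISS, HIT, STALE,
# UPDATING, etc.) so that GET and HEAD requests can be verified to produce
# separate cache entries under the new key.
location ~ \.php$ {
  add_header X-Cache-Status $upstream_cache_status;

  # ...existing fastcgi_* and fastcgi_cache_* directives go here...
}

With this in place, repeated curl -I (HEAD) and curl (GET) requests against the same URL should each report their own MISS followed by HIT, rather than sharing one entry.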

iTerm2 Tweaks for Fun and Privacy

For any sort of system administrator working with Linux systems, a good terminal emulator is an essential part of the toolbox.

Using Terminal.app – the terminal that comes with OS X by default – has started to wear on me, due to the lack of a shared pasteboard (leading to more mispastes this year than I have probably had in the last decade), and the lack of basic reconfiguration of certain keys like the tab macros (although these can be changed using System Preferences).

Recently I have been using iTerm2. It’s extremely powerful, but some of its options needed a bit of tweaking to get the privacy to a level that I’m more comfortable with. I have also made a couple of other look and feel tweaks that are worth noting as well.

Privacy Tweaks

These tweaks can help secure iTerm2 a bit, especially if these options are not used often.

Disable pasteboard history

Controlling the number of items in the pasteboard history is a hidden option documented here.

This can be used to turn off the history altogether. Run the following and restart iTerm; pressing Command-Shift-H will then return an empty table no matter what:

defaults write com.googlecode.iterm2 MaxPasteHistoryEntries -int 0

Disable VT100 printer code handling

I don’t know why this is not a default, but it can be turned off easily enough.

Head to Preferences -> Profiles, select the profile to update (ie: the default one), then select Terminal and ensure Disable session-initiated printing is enabled.

Disable semantic history

Semantic history is a feature that facilitates a kind of Finder-like behaviour in iTerm. By Command-Click-ing something, that file can be loaded like it was run through Finder. It could possibly be useful, but I can see myself fat-fingering something I shouldn’t have and then having to deal with the consequences.

I haven’t found a way to disable the functionality completely, but the aforementioned macro can be disabled by going to Preferences -> Pointer and ensuring Command-Click Opens Filename/URL is not checked.

Disable application clipboard control (default)

This is thankfully a default, but it might be worth double-checking: go to Preferences -> General and ensure that Allow clipboard access to terminal apps is disabled. This ensures that the proprietary iTerm clipboard control code is not handled. See here for a full list of other iTerm-specific codes.

Fun Tweaks

These tweaks will generally improve the experience.

Keybinds (Global)

Head to Preferences -> Keys. These are the global keybinds.

Here, keys can be bound for general application-use macros. I like Control-PgUp/PgDn for Previous Tab/Next Tab respectively, as it’s a pretty well-accepted standard, and Control-Shift-T for a new tab. The latter needs to be set up as Select Menu Item… in the Action combo box, after which New Tab will be selectable.

Note that terminal-use keybinds can be edited on a per-profile basis in Preferences -> Profiles -> Keys.

Unlimited scrollback

Head over to Preferences -> Profiles, select the profile to update (ie: the default one), then select Terminal and ensure Unlimited scrollback is enabled. The scrollback buffer is then limited only by available memory.


Transparency

No terminal is complete without a transparency feature. 😉 Head over to Preferences -> Profiles, select the profile to update (ie: the default one), and select Window. The transparency slider is there. The results show immediately, so it’s easy to see how much is needed.


Fonts

I’m going to get tired of copying/pasting this text, honest. 😉

Head over to Preferences -> Profiles, select the profile to update (ie: the default one), and select Text. Fonts can be selected there.

For even more awesome fonts, check out Beautiful fixed-width fonts for OS X. This page has a tarball of the various misc-fixed fonts that are default for X11, including terminals like xterm. This is really great for that classic look-and-feel.


Colour schemes

Last one. For now. 😉

Head over to Preferences -> Profiles, select the profile to update (ie: the default one), and select Colors.

Colours can be changed here, but even better, they can be saved as schemes. There is even a gallery with a pretty impressive collection of schemes to choose from.

Honorable Mention – X11

For terminal alternatives, this is worth mentioning. There are a few options to install X11 on a Mac, it can be downloaded here or through MacPorts. MacPorts can be used to install other terminals as well, such as rxvt-unicode or mrxvt, two extremely fast terminals that can be pretty well customized in their own right. The latter does not have unicode support, but is a personal favourite of mine, and if it wasn’t a bit of a pain to have to adjust locales on all the systems I may touch, I would probably be using it.

There are a few things to note when using X11 terminals on a Mac:

Launch proper terminal on X11 start

The X11.app that MacPorts installs is actually simply a wrapper around xterm (and, by proxy, startx). The following command changes the startup app to mrxvt:

defaults write org.macports.X11 app_to_run /opt/local/bin/mrxvt

Enabling pasteboard text selection update

The gremlin of a split clipboard rears its head again. 😉

Luckily, this time it’s fixable. Simply open X11’s preferences, and select Pasteboard -> Update Pasteboard immediately when new text is selected.

nginx: FastCGI Caching Basics

I’m back!

Today, I am going to share some things regarding how to do caching in nginx, with a bit of a write up and history first.

The LAMP Stack

An older acronym these days, LAMP stands for:

  • (L)inux,
  • (A)pache,
  • (M)ySQL,
  • (P)HP.

I am really not too sure when the term was coined, but it was definitely a long time ago, probably about 15 years. The age shows: there are plenty of alternatives to this stack now, giving way to several other acronyms which I am not going to attempt to catalog. The only technology that has remained constant throughout this evolution is the operating system: Linux.

MySQL (and Postgres, for that matter) has seen less use in favour of alternatives after people found out that not everything is best suited to a relational database. PHP has plenty of alternatives as well, be it Node, Ruby, Python, or others, all of which have their own middleware to facilitate working with the web.

Apache can serve the aforementioned purpose, but is not necessarily well-suited for the task. That’s not to say it can’t do it. Apache is extremely well featured, a product of being one of the oldest actively developed HTTP servers currently available, and can definitely act as a gateway for several of the software systems mentioned above. It is still probably the most popular web server on the internet, serving a little over 50% of the web’s content.

Apache’s Dated Performance

However, as far as performance goes, Apache has not been a contender for a while now. More minimal alternatives, such as the subject of this article, nginx, offer fewer features, but much better performance. Some numbers put nginx at around twice the speed – or faster – of some Apache MPMs, even on current versions. Out of the box, I recently clocked the memory footprint of a nginx and PHP-FPM stack at roughly half of the memory footprint of an Apache and mod_php5 server, a configuration that is still in popular use, mainly due to the issues the PHP project has historically had with threading.

Gateway vs. Middleware

PHP running as a CGI has always had some advantages: coming from a hosting background, it allows hosters to ensure that scripts and software are executed with a segregated set of privileges, usually those of the site’s owner. The big benefit is that any security problem with that particular site doesn’t leak over to other sites.

Due to the lack of threading, this is where PHP has gotten most of the love. Aside from FastCGI, there are a couple of other popular, high-performance options to use for running PHP middleware:

  • PHP-FPM, which is mainline in PHP
  • HipHopVM, Facebook’s next-generation PHP JIT VM, which supports both PHP and Facebook’s own Hack derivative.

These, of course, connect to a web server, and when all the web server is doing is serving static content and directing connections, the best course of action is to pick a lightweight one, such as nginx.

Dynamic Language for Static Content?

Regardless of the option chosen, one annoying fact may always remain – the fact that there is a very good chance that the content being served by PHP is ultimately static during a very large majority of its lifetime. A great example of this is a CMS system, such as WordPress, running a site that may see little to no regular updates. In this event, the constant re-generation of content will place unnecessary load on a system, wasting system resources and increasing page load times.

The CMS in use may have caching options, which may be useful in varying capacities. Depending on how they run their cache, however, this could still mean unnecessary CPU used to run PHP logic, or database traffic if the cache is stored in the database.

Enter nginx’s Caching

nginx has some very powerful options for serving as a proxy server, and is perfectly capable of running as a layer 7 load balancer, complete with caching. The latter is what I am covering in this article.

nginx has 2 specific caching modules: the cache options stored in ngx_http_proxy_module and ngx_http_fastcgi_module. These control their respective areas: proxy_cache_* options are used in conjunction with standard requests and proxy options, and fastcgi_cache_* options are used with the FastCGI options (locations generally proxied with fastcgi_pass).

Setting up the Middleware

I am not covering middleware setup in this article, but it is very easy to get started with PHP-FPM. Usually, installing it is as easy as pulling it from the respective distro’s package repository (ie: apt-get install php5-fpm on modern versions of Debian or Ubuntu).

Ubuntu 14.04 sets up PHP-FPM to listen on /var/run/php5-fpm.sock, but it can, of course, be configured to listen on TCP as well.
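
The socket-versus-TCP choice can be isolated in an upstream block, defined at the http level. This is a sketch; the name php_backend and the TCP address (PHP-FPM’s conventional port 9000) are assumptions for illustration:

# Hypothetical upstream definition; "php_backend" is a name chosen for this
# example. fastcgi_pass can then point at it instead of a literal socket path.
upstream php_backend {
  server unix:/var/run/php5-fpm.sock;
  # Or, if PHP-FPM is configured to listen on TCP instead:
  # server;

Location blocks can then use fastcgi_pass php_backend;, and the listener can be changed in one place.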

Setting up nginx for FastCGI

Before jumping into the config below, keep in mind that the FastCGI cache needs to be defined in the core nginx http config, like so:

http {
  # Several omitted options here...
  fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=fcgizone:10m max_size=200m;

This option dictates several things:

  • The path of the cache, in this case /var/cache/nginx
  • The structure of the path hierarchy. levels=1:2 builds directories from the trailing characters of the MD5-hashed file name: if the hash ends in abc, the entry is stored under /var/cache/nginx/c/ab/.
  • keys_zone is the name and the size of the key cache. This is not the actual cache size, but memory used for cache key entries and metadata. One megabyte can hold approximately 8000 keys or cache entries.
  • max_size defines the maximum size of the cache on disk.

This can also be included and dropped into /etc/nginx/conf.d/ in most default setups.
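
For example, the zone definition could live in its own drop-in file, since most default nginx.conf files include conf.d/*.conf at the http level. The filename here is hypothetical:

# /etc/nginx/conf.d/fastcgi-cache.conf (hypothetical filename)
# Picked up at the http level by the stock "include /etc/nginx/conf.d/*.conf;"
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=fcgizone:10m max_size=200m;

Keeping it separate makes it easy to share the same cache zone across multiple server blocks.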

The following details a basic server section in nginx. This will lie in somewhere like /etc/nginx/conf.d/, or /etc/nginx/sites-available/ if Debian or Ubuntu convention is being followed.

server {
  listen 80;

  root /var/www/;
  index index.php index.html;

  server_name _;

  # Logs
  access_log /var/log/nginx/;
  error_log /var/log/nginx/;

  location / {
          try_files $uri $uri/ /index.php$is_args$args;
  }

  # FastCGI stuff
  location ~ \.php$ {
          include fastcgi_params;

          fastcgi_pass unix:/var/run/php5-fpm.sock;
          fastcgi_index index.php;
          fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;

          fastcgi_cache fcgizone;
          fastcgi_cache_valid 200 1m;
          fastcgi_cache_key $http_host$fastcgi_script_name$request_uri;
          fastcgi_cache_lock on;
          fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
  }

  # deny access to .htaccess files, if Apache's document root
  # concurs with nginx's one
  location ~ /\.ht {
          deny all;
  }
}
A note before I move on to the FastCGI stuff: the location block is set up to try a few options before 404ing: the direct URI, the URI as a sub directory (ie: to see if there is a default file here), and then, as a fallback, to request /index.php itself. This is mainly designed for sites that have a CMS system that uses permalinks (again, like WordPress).
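
The fallback order can be annotated directly; this is the same directive as above, just commented step by step:

location / {
  # 1. $uri                     - serve the file at this exact path, if it exists
  # 2. $uri/                    - otherwise try it as a directory (index files)
  # 3. /index.php$is_args$args  - otherwise hand the request to the CMS front
  #                               controller, preserving any query string
  try_files $uri $uri/ /index.php$is_args$args;
}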

Now, on to the FastCGI bits in the location block (which, if it wasn’t evident, passes all PHP content):

  • First off, /etc/nginx/fastcgi_params is included (shorthanded to a relative path). This file sets a number of environment variables that are essential to a functional FastCGI environment, and is included with most bundled versions of nginx.
  • fastcgi_pass is the option that passes the request to – in this case – PHP-FPM.
  • fastcgi_param as it’s shown here overrides an option that was set in fastcgi_params, and serves as an example of how to set environment variables. Basically, I am building the SCRIPT_FILENAME environment variable for FastCGI by combining the document root and the path to the running script (evaluating to something like /var/www/). This is needed by some CMS systems.

Now, on to the main attraction:

  • fastcgi_cache references the cache zone that was defined earlier. This effectively turns on the cache, and is in theory the only option needed.
  • fastcgi_cache_valid sets cache entries for 200 (OK) code responses for 1 minute, in this instance.
  • fastcgi_cache_key builds the cache key. The objective is to generate a key unique enough that there are no cache conflicts that could lead to broken content. The one listed here combines the host, the script name, and the request URI, which should be plenty. Of course, cache keys can be built off a number of things, including the long list of variables that nginx has.
  • fastcgi_cache_lock ensures that only one request for a given new cache entry is sent to the back end at a time. This has a default timeout of 5 seconds (controlled with fastcgi_cache_lock_timeout), after which the request will be passed through to avoid errors. However:
  • fastcgi_cache_use_stale, the last option, has an option named updating that allows a stale entry to be served to any other requests while a lock is held. This enables a simple yet effective throttling mechanism for back-end resources: in this configuration, approximately one request per URL would come through every minute. There are also other flags here that allow stale entries to be used in the event of several kinds of errors. Depending on how the application is set up, your mileage may vary.

Lastly, not a caching option, but I block .htaccess files just in case there are any left over in the content since moving from Apache, if that change was made.

The above is really just the tip of the iceberg when it comes to nginx caching. There are several other cache-manipulation options allowing for finer-grained cache control, such as fastcgi_cache_bypass to bypass the cache (ie: honouring inbound Cache-Control headers or manually exempting admin areas), or even more sophisticated scenarios such as setting up the cache to be purged or revalidated via special requests. Definitely take a look at the documentation mentioned above if you are interested. Keep in mind that some options require later versions of nginx (the one bundled in Ubuntu 14.04, for example, is only 1.4.6), and cache purging actually requires nginx+, nginx’s premium server.
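
As a sketch of the bypass idea (the /admin prefix and the $skip_cache variable name are assumptions chosen for this example, not part of the config above):

# Hypothetical cache-bypass sketch: skip the cache for an admin area and for
# requests that send a Cache-Control header.
set $skip_cache 0;

if ($request_uri ~ ^/admin) {
  set $skip_cache 1;
}

location ~ \.php$ {
  include fastcgi_params;
  fastcgi_pass unix:/var/run/php5-fpm.sock;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

  fastcgi_cache fcgizone;
  fastcgi_cache_valid 200 1m;

  # A non-empty, non-zero value in any parameter disables cache lookup
  # (fastcgi_cache_bypass) and cache storage (fastcgi_no_cache).
  fastcgi_cache_bypass $skip_cache $http_cache_control;
  fastcgi_no_cache $skip_cache $http_cache_control;
}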

One last thing about the cache that should be noted: Cache-Control response headers from FastCGI are honoured. This means that if the application in question has an admin area that passes these headers, it is not necessary to set up any exceptions using fastcgi_cache_bypass.