Note: This article originally appeared in the 2016 AWS Advent.
Note that this article was written for Terraform v0.7.x – there have been several developments since this release that make a number of the items covered here obsolete; they will be covered in the next article. 🙂
This article is going to show you how you can use Terraform, with a little help from Packer and Chef, to deploy a fully-functional sample web application, complete with auto-scaling and load balancing, in under 50 lines of Terraform code.
You will need the sample project to follow along, so make sure you load that up before continuing with reading this article.
The Humble Configuration
Check out the code in the terraform/main.tf file.
It might be hard to believe that this mere smattering of Terraform sets up:
- An AWS VPC
- 2 subnets, each in different availability zones, fully routed
- An AWS Application Load Balancer
- A listener for the ALB
- An AWS Auto Scaling group
- An ALB target group attached to the ALB
- Configured security groups for both the ALB and backend instances
So what’s the secret?
Terraform Modules
This example is using a powerful feature of Terraform – the modules feature, providing a semantic and repeatable way to manage AWS infrastructure. The modules hide most of the complexity of setting up a full VPC behind a relatively small set of code, and an even smaller set of changes going forward (generally, to update this application, all that is needed is to update the AMI).
Note that this example is composed entirely of modules – no root module resources exist. That’s not to say that they can’t exist – and in fact one of the secondary examples demonstrates how you can use the outputs of one of the modules to add extra resources on an as-needed basis.
The example is composed of three visible modules, and one module that operates under the hood as a dependency:

- terraform_aws_vpc, which sets up the VPC and subnets
- terraform_aws_alb, which sets up the ALB and listener
- terraform_aws_asg, which configures the Auto Scaling group and ALB target group for the launched instances
- terraform_aws_security_group, which is used by the ALB and Auto Scaling modules to set up security groups to restrict traffic flow
These modules will be explained in detail later in the article.
How Terraform Modules Work
Terraform modules work very similarly to a basic Terraform configuration. In fact, each Terraform module is a standalone configuration in its own right, and depending on its pre-requisites, can run completely on its own. Indeed, a top-level Terraform configuration without any modules being used is still a module – the root module. You sometimes see this mentioned in various parts of the Terraform workflow, such as in error messages and the state file.
Module Sources and Versioning
Terraform supports a wide variety of remote sources for modules, such as simple, generic locations like HTTP, or Git, or well-known locations like GitHub, Bitbucket, or Amazon S3.
You don’t even need to put a module in a remote location. In fact, a good habit to get into is: if you need to re-use Terraform code in a local project, put that code in a module – that way you can re-use it several times to create the same kind of resources in the same environment or, even better, different ones.
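For example, a module with a local source might be declared like this (the path and parameter here are hypothetical, not from the sample project):

```hcl
// A hypothetical local module - the source is a relative path instead of a
// remote location, so no network access is needed to fetch it.
module "network" {
  source              = "./modules/network"
  vpc_network_address = "10.0.0.0/16"
}
```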
Declaring a module is simple. Let’s look at the VPC module from the example:
```hcl
module "vpc" {
  source                  = "github.com/paybyphone/terraform_aws_vpc?ref=v0.1.0"
  vpc_network_address     = "${var.vpc_network_address}"
  public_subnet_addresses = ["${var.public_subnet_addresses}"]
  project_path            = "${var.project_path}"
}
```
The location of the module is specified with the source parameter. The style of this parameter dictates how Terraform will fetch the module.
The rest of the options here are module parameters, which translate to variables within the module. Note that any variable that does not have a default value in the module is a required parameter, and Terraform will not start if these are not supplied.
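As a sketch, the distinction looks like this (the variable names here are illustrative):

```hcl
// project_path has no default, so it is a required parameter - Terraform
// will refuse to run if the calling configuration does not supply it.
variable "project_path" {
  type = "string"
}

// instance_type has a default, so supplying it is optional.
variable "instance_type" {
  type    = "string"
  default = "t2.micro"
}
```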
The last item that should be mentioned is regarding versioning. Most module sources that work off of source control have a versioning parameter you can supply to get a revision or tag – with Git and GitHub sources, this is ref, which can be most Git references, be it a branch or a tag.
Versioning is a great way to keep things under control. You might find yourself iterating very fast on certain modules as you learn more about Terraform or your internal infrastructure design patterns change – versioning your modules ensures that you don’t need to constantly refactor otherwise stable stacks.
Module Tips and Tricks
Terraform and HCL are works in progress, and some things that seem like they should make sense don’t necessarily work 100% – yet. There are some things you might want to keep in mind when designing your modules that may reduce the complexity that ultimately gets presented to the user:
Use Data Sources
Terraform 0.7+’s data sources feature can go a long way in reducing the amount of data that needs to go into your module.

In this project, data sources are used for things such as obtaining VPC IDs from subnets (aws_subnet) and getting the security groups assigned to an ALB (using the aws_alb_listener and aws_alb data sources chained together). This allows us to create ALBs based off of subnet IDs alone, and attach Auto Scaling groups to ALBs knowing only the listener ARN that we need to attach to.
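A rough sketch of that chaining (the variable name here is an assumption; treat this as illustrative):

```hcl
// Look up the listener from the ARN the caller passed in...
data "aws_alb_listener" "listener" {
  arn = "${var.alb_listener_arn}"
}

// ...then look up the ALB that the listener belongs to, giving us access
// to attributes such as its security groups without the caller having to
// supply them.
data "aws_alb" "alb" {
  arn = "${data.aws_alb_listener.listener.load_balancer_arn}"
}
```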
Exploit Zero Values and Defaults
Terraform follows the zero-value rules of the language it was created in (Go). Hence, most of the time, supplying an empty parameter is the same as supplying none at all.
This can be advantageous when designing a module to support different kinds of scenarios. For example, the alb module supports TLS via supplying a certificate ARN. Here is the variable declaration:
```hcl
// The ARN of the server certificate you want to use with the listener.
// Required for HTTPS listeners.
variable "listener_certificate_arn" {
  type    = "string"
  default = ""
}
```
And here it is referenced in the listener block:
```hcl
// alb_listener creates the listener that is then attached to the ALB supplied
// by the alb resource.
resource "aws_alb_listener" "alb_listener" {
  ...
  certificate_arn = "${var.listener_certificate_arn}"
  ...
}
```
Now, when this module parameter is not supplied, its default value becomes an empty string, which is passed in to aws_alb_listener.alb_listener. This is, most of the time, exactly the same as if the parameter were not passed in at all. This allows you to not have to worry about this parameter when you just want to use HTTP on this endpoint (the default for the ALB module as a whole).
Pseudo-Conditional Logic
Terraform does not support conditional logic yet, but through creative use of count and interpolation, one can create semi-conditional logic in your resources.

Consider the fact that the terraform_aws_asg module supports the ability to attach the ASG to an ALB, but does not explicitly require it. How can you get away with that, though?
To get the answer, check one of the ALB resources in the module:
```hcl
// autoscaling_alb_target_group creates the ALB target group.
resource "aws_alb_target_group" "autoscaling_alb_target_group" {
  count = "${lookup(map("true", "1"), var.enable_alb, "0")}"
  ...
}
```
Here, we make use of the map interpolation function, nested in a lookup function, to provide essentially an if/then/else control structure. This is used to control a resource’s instance count, adding an instance if var.enable_alb is true, and completely removing the resource from the graph otherwise.
This conditional logic does not necessarily need to be limited to count either. Let’s go back to the aws_alb_listener.alb_listener resource in the ALB module, looking at a different parameter:
```hcl
// alb_listener creates the listener that is then attached to the ALB supplied
// by the alb resource.
resource "aws_alb_listener" "alb_listener" {
  ...
  ssl_policy = "${lookup(map("HTTP", ""), var.listener_protocol, "ELBSecurityPolicy-2015-05")}"
  ...
}
```
Here, we are using this trick to supply the correct SSL policy to the listener if the listener protocol is not HTTP. If it is, we supply the zero value, which as mentioned before, makes it as if the value was never supplied.
Module Limitations
Terraform does have some not-necessarily-obvious limitations that you will want to keep in mind when designing both modules and Terraform code in general. Here are a couple:
Count Cannot be Computed
This is a big one that can really get you when you are writing modules. Consider the following scenario, which totally did not happen to me even though I knew of such things beforehand 😉
- An ALB listener is created with aws_alb_listener
- The arn of this resource is passed as an output
- That output is used as both the ARN to attach an Auto Scaling group to, and as the pseudo-conditional in the ALB-related resources’ count parameter
What happens? You get this lovely message:
```
value of 'count' cannot be computed
```
Actually, it used to be worse (a strconv error was displayed instead), but luckily that changed recently.
Unfortunately, there is no nice way to work around this right now. Extra parameters need to be supplied, or you need to structure your modules in a way that avoids computed values being passed into count directives in your workflow. (This is pretty much exactly why the terraform_aws_asg module has an enable_alb parameter.)
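As an illustrative sketch of the failing pattern and its workaround (resource and variable names here are hypothetical, not taken from the module):

```hcl
// This fails at plan time: the listener ARN output is computed during
// apply, so a count derived from it "cannot be computed".
//
//   count = "${lookup(map("", "0"), module.alb.alb_listener_arn, "1")}"
//
// The workaround: drive count from a statically-known module parameter
// instead of a computed value.
variable "enable_alb" {
  type    = "string"
  default = "false"
}

resource "aws_alb_target_group" "group" {
  count = "${lookup(map("true", "1"), var.enable_alb, "0")}"
  ...
}
```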
Complex Structures and Zero Values
Complex structures are not necessarily good candidates for zero values, even though it may seem like a good idea. But by defining a complex structure in a resource, you are by nature supplying it a non-zero value, even if most of the fields you supply are empty.
Most resources don’t handle this scenario gracefully, so it’s best to avoid using complex structures in a scenario where you may be designing a module for re-use, and expect that you won’t be using the functionality defined by such a structure often.
The Application in Brief
As our focus in this article is on Terraform modules, and not on other parts of the pattern such as using Packer or Chef to build an AMI, we will only touch briefly on the non-Terraform parts of this project, so that we can focus on the Terraform code and the AWS resources it sets up.
The Gem
The Ruby gem in this project is a small “hello world” application running with Sinatra. This is self-contained within this project and mainly exists to give us an artifact to put on our base AMI to send to the auto-scaling group.
The server prints out the system’s hostname when fetched. This will allow us to see each node in action as we boot things up.
Packer
The built gem is loaded on to an AMI using Packer, for which the code is contained within packer/ami.json. We use chef-solo as a provisioner, which works off a self-contained cookbook named packer_payload in the cookbooks directory. This gives us a bit more of a high-level workflow than shell scripts alone would, including the ability to better integration-test things and possibly support multiple build targets.
Note that the Packer configuration takes advantage of a new Packer 0.12.0 feature that allows us to fetch an AMI to use as the base right from Packer. This is the source_ami_filter directive. Before Packer 0.12.0, you would have needed to resort to a helper, such as ubuntu_ami.sh, to get the AMI for you.
The Rakefile
The Rakefile is the build runner. It has tasks for Packer (ami), Terraform (infrastructure), and Test Kitchen (kitchen). It also has prerequisite tasks to stage cookbooks (berks_cookbooks) and Terraform modules (tf_modules). It’s necessary to pre-fetch modules when they are being used in Terraform – normally this is handled by terraform get, but the tf_modules task does this for you.

It also handles some parameterization of Terraform commands, which allows us to specify when we want to perform something other than an apply in Terraform, or use a different configuration.

All of this is in addition to standard Bundler gem tasks like build, etc. Note that the install and release tasks have been explicitly disabled so that you don’t install or release the gem by mistake.
The Terraform Modules
Now that we have that out of the way, we can talk about the fun stuff!
As mentioned at the start of the article, this project has four different Terraform modules. Also as mentioned, one of them (the Security Group module) is hidden from the end user, as it is consumed by two of the parent modules to create security groups to work with. This exploits the fact that Terraform can, of course, nest modules within each other, allowing for any level of re-usability when designing a module layout.
The AWS VPC Module
The first module, terraform_aws_vpc, creates not only a VPC, but also public subnets as well, complete with route tables and internet gateway attachments.
We’ve already hidden a decent amount of complexity just by doing this, but as an added bonus, redundancy is baked right into the module by distributing any network addresses passed in as subnets across all availability zones available in any particular region, via the aws_availability_zones data source. This process does not require previous knowledge of the zones available to the account.
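A rough sketch of how that distribution can work (a simplification of what the module does; resource names here are illustrative):

```hcl
// Discover every availability zone in the current region - no prior
// knowledge of the account's zones is needed.
data "aws_availability_zones" "zones" {}

// Spread the supplied subnet addresses across the discovered zones,
// wrapping around with element() if there are more subnets than zones.
resource "aws_subnet" "public_subnets" {
  count             = "${length(var.public_subnet_addresses)}"
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${element(var.public_subnet_addresses, count.index)}"
  availability_zone = "${element(data.aws_availability_zones.zones.names, count.index)}"
}
```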
The module passes out pertinent information, such as the VPC ID, the ID of the default network ACL, the created subnet IDs, the availability zones for those subnets as a map, and the ID of the route table created.
The ALB Module
The second module, terraform_aws_alb, allows for the creation of AWS Application Load Balancers. If all you need is the defaults, use of this module is extremely simple, creating an ALB that will answer requests on port 80. A default target group is also created that can be used if you don’t have anything else mapped, but we want to use this with our auto-scaling group.
The Auto Scaling Module
The third module, terraform_aws_asg, is arguably the most complex of the three that we see in the sample configuration, but even at that, its required options are very slim.
The beauty of this module is that, thanks to all the aforementioned logic, you can attach more than one ASG to the same ALB with different path patterns (mentioned below), or not attach it to an ALB at all! This allows this same module to be used for a number of scenarios. This is on top of the plethora of options available to you to tune, such as CPU thresholds, health check details, and session stickiness.
Another thing to note is how the AMI for the launch configuration is fetched from within this module. We work off the tag that we used within Packer, which is supplied as a module variable. This is then searched for within the module via an aws_ami data source. This means that no code or variables need to change when the AMI is updated – the next Terraform run will pick up the most recent AMI with the tag.
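A sketch of what that lookup can look like (the tag key and variable name here are assumptions, not necessarily what the module uses):

```hcl
// Find the most recent AMI whose application tag matches the value that
// Packer stamped on the image at build time.
data "aws_ami" "launch_ami" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:application"
    values = ["${var.image_tag_value}"]
  }
}

// The launch configuration then simply references the data source, so a
// newly built AMI is picked up on the next Terraform run:
//
//   image_id = "${data.aws_ami.launch_ami.image_id}"
```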
Lastly, this module supports the rolling update mechanism laid out by Paul Hinze in this post oh so long ago now. When a new AMI is detected and the auto-scaling group needs to be updated, Terraform will bring up the new ASG, attach it, wait for it to have minimum capacity, and then bring down the old one.
The Security Group Module
The last module to be mentioned, terraform_aws_security_group, is not shown anywhere in our example, but is actually used by the ALB and ASG modules to create security groups.
Not only does it create security groups, though – it also allows for the creation of two kinds of ICMP allow rules: one for all ICMP, if you so choose, but more importantly, allow rules for ICMP type 3 (destination unreachable) are always created, as this is how path MTU discovery works. Without this, we might end up with unnecessarily degraded performance.
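Such an always-on rule might be sketched like this (resource names here are illustrative; for security group rules, AWS treats from_port as the ICMP type and to_port as the ICMP code, with -1 meaning all codes):

```hcl
// Always allow inbound ICMP "destination unreachable" (type 3) messages -
// "fragmentation needed" (type 3, code 4) is how path MTU discovery works,
// and blocking it can silently degrade performance.
resource "aws_security_group_rule" "allow_icmp_unreachable" {
  type              = "ingress"
  protocol          = "icmp"
  from_port         = 3  // ICMP type
  to_port           = -1 // all ICMP codes for this type
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.security_group.id}"
}
```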
Give it a Shot
After all this talk about the internals of the project and the Terraform code, you might be eager to bring this up and see it working. Let’s do that now.
Assuming you have the project cloned and AWS credentials set appropriately, do the following:
- Run bundle install --binstubs --path vendor/bundle to load the project’s Ruby dependencies.
- Run bundle exec rake ami. This builds the AMI.
- Run bundle exec rake infrastructure. This will deploy the project.
After this is done, Terraform should return an alb_hostname value to you. You can now load this up in your browser. Load it once, then wait about 1 second, then load it again! Or even better, just run the following in a prompt:
```shell
while true; do curl http://ALBHOST/; sleep 1; done
```
And watch the hostname change between the two hosts.
Tearing it Down
Once you are done, you can destroy the project simply by passing a TF_CMD environment variable in to rake with the destroy command:
```shell
TF_CMD=destroy bundle exec rake infrastructure
```
And that’s it! Note that this does not delete the AMI artifact; you will need to do that yourself.
More Fun
Finally, a few items for the road. These are things that are otherwise important to note or should prove to be helpful in realizing how powerful Terraform modules can be.
Tags
You may have noticed that the modules have a project_path parameter, which is filled out in the example with the path to the project on GitHub. This is something that I think is important for proper AWS resource management.

Several of our resources have machine-generated names or IDs which make them hard to track on their own. Having an easy-to-reference tag alleviates that. Having the tag reference the project that consumes the resource is even better – I don’t think it gets much clearer than that.
SSL/TLS for the ALB
Try this: create a certificate using Certificate Manager, and change the alb module to the following:
```hcl
module "alb" {
  source                   = "github.com/paybyphone/terraform_aws_alb?ref=v0.1.0"
  listener_subnet_ids      = ["${module.vpc.public_subnet_ids}"]
  listener_port            = "443"
  listener_protocol        = "HTTPS"
  listener_certificate_arn = "arn:aws:acm:region:account-id:certificate/certificate-id"
  project_path             = "${var.project_path}"
}
```
Better yet, see the example here. This can be run with the following command:
```shell
TF_DIR=terraform/with_ssl bundle exec rake infrastructure
```
And destroyed with:
```shell
TF_CMD=destroy TF_DIR=terraform/with_ssl bundle exec rake infrastructure
```
You now have SSL for your ALB! Of course, you will need to point DNS to the ALB (either via external DNS, CNAME records, or Route 53 alias records – the example includes this), but it’s that easy to change the ALB into an SSL load balancer.
Adding a Second ASG
You can also use the ASG module to create two auto-scaling groups.
```hcl
module "autoscaling_group_foo" {
  source            = "github.com/paybyphone/terraform_aws_asg?ref=v0.1.1"
  subnet_ids        = ["${module.vpc.public_subnet_ids}"]
  image_tag_value   = "vancluever_hello"
  enable_alb        = "true"
  alb_listener_arn  = "${module.alb.alb_listener_arn}"
  alb_rule_number   = "100"
  alb_path_patterns = ["/foo/*"]
  alb_service_port  = "4567"
  project_path      = "${var.project_path}"
}

module "autoscaling_group_bar" {
  source            = "github.com/paybyphone/terraform_aws_asg?ref=v0.1.1"
  subnet_ids        = ["${module.vpc.public_subnet_ids}"]
  image_tag_value   = "vancluever_hello"
  enable_alb        = "true"
  alb_listener_arn  = "${module.alb.alb_listener_arn}"
  alb_rule_number   = "101"
  alb_path_patterns = ["/bar/*"]
  alb_service_port  = "4567"
  project_path      = "${var.project_path}"
}
```
There is an example for the above here. Again, run it with:
```shell
TF_DIR=terraform/multi_asg bundle exec rake infrastructure
```
And destroy it with:
```shell
TF_CMD=destroy TF_DIR=terraform/multi_asg bundle exec rake infrastructure
```
You now have two auto-scaling groups, one handling requests for /foo/*, and one handling requests for /bar/*. Give it a go by reloading each URL and see the unique instances you get for each.
Acknowledgments
I would like to take a moment to thank PayByPhone for allowing me to use their existing Terraform modules as the basis for the publicly available ones at https://github.com/paybyphone. Writing this article would have been a lot more painful without them!
Also thanks to my editors, Anthony Elizondo and Andrew Langhorn, for their feedback and help with this article, and to the AWS Advent team for the chance to stand on their soapbox for my 15 minutes! 🙂