AWS Basics Using CloudFormation (Part 3) – ELB and EC2

This is the third part of a 3-part article covering the basics of AWS through using CloudFormation. For the first part of this article, click here, and for the second, click here.

This is the third and final part in my AWS basics article. So far, I’ve covered CloudFormation and Amazon VPC. This time, I will cover Elastic Load Balancing (ELB), and Amazon EC2, the actual operational pieces that end up getting deployed and serve the web content. The final product is a basic virtual datacenter that load balances across two web servers, deployable with a single command through CloudFormation.

And once more, if you would like to follow along at home, remember to check out the template on the GitHub project.

Elastic Load Balancing (ELB)

Elastic Load Balancing is AWS’s layer 7 load balancing component of EC2, facilitating the basic application redundancy features that most modern applications need today.

ELB has a feature set that is pretty much what could be expected from a traditional layer 7 load balancer, such as SSL offloading, health checks, sticky sessions, and what not. However, the real fun in using ELB is in what it does to make the job of infrastructure management easier.

As a completely integrated platform service, ELBs are automatically redundant, and can span multiple availability zones without much extra configuration. Metrics and logging are also built in, and can be sent to S3 or CloudWatch.

Other than that, there is not much to really hype up about ELB. Not to say that is a bad thing! So on with the CloudFormation entries.

ELBs in CloudFormation

After the gauntlet I ran with explaining the VPC entries in the sample CloudFormation stack, the ELB entry will be a breeze. Below is the ELB section.

"VCTSLabELB1": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
     "HealthCheck": {
       "HealthyThreshold": "2",
       "Interval": "5",
       "Target": "HTTP:80/",
       "Timeout": "3",
       "UnhealthyThreshold": "2"
     },
     "Listeners": [{
         "InstancePort": "80",
         "InstanceProtocol": "HTTP",
         "LoadBalancerPort": "80",
         "Protocol": "HTTP"
     }],
     "Scheme": "internet-facing",
     "Subnets": [ { "Ref": "VCTSLabSubnet1" } ],
     "SecurityGroups": [ { "Ref": "VCTSElbSecurityGroup" } ],
     "Instances": [
       { "Ref": "VCTSLabSrv1" },
       { "Ref": "VCTSLabSrv2" }
     ],
     "Tags": [ { "Key": "resclass", "Value": "vcts-lab-elb" } ]
  }
}

The resource is of the AWS::ElasticLoadBalancing::LoadBalancer type. It is an internet-facing load balancer (as defined by Scheme), as opposed to an internal load balancer that would only be visible within the VPC. It is also associated with the VCTSLabSubnet1 subnet so that it has public access; this does not restrict which instances it can connect to. The instances themselves are defined in the Instances property, which contains references to the two named instances in the EC2 section of the template.

Health checking

The HealthCheck property marks an individual service as healthy after 2 successful checks (defined by HealthyThreshold), which brings it back into rotation; likewise, the health check marks a service as unhealthy after 2 failures (defined by UnhealthyThreshold). Note that although this is fine for the purpose it is being used for here, intermittent service failures may cause undesirable flapping when thresholds are set this low. In that event, set HealthyThreshold to a value that ensures there have been enough successful checks to reasonably determine that the service is available.

Timeout controls how long to wait before marking an individual service as down if a response has not been received. Interval is the time to wait between checks. Both of these values are in seconds. In the example above, the health check waits 3 seconds before marking a service as failed, and the health check itself runs every 5 seconds.

The health check Target takes the syntax SERVICE:PORT/urlpath. SERVICE can be one of TCP, SSL, HTTP, or HTTPS; /urlpath is only available for the last two (the first two are simple connection-open checks with no protocol awareness beyond SSL). Also, the response to /urlpath needs to be a 200 OK – anything else (even a 300 Redirect-class code) is considered a failure. In the example above, a check against / over HTTP is done on the EC2 instances to make sure the service is up.
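
Just as an illustration (the port, path, and threshold values here are placeholders, not part of the sample template), a more conservative HTTPS health check might look something like this:

"HealthCheck": {
  "HealthyThreshold": "3",
  "Interval": "30",
  "Target": "HTTPS:443/health",
  "Timeout": "5",
  "UnhealthyThreshold": "2"
}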

Listeners

The listener describes how clients connect to the load balancer and how those connections are routed to instances.

Here, connections come in to port 80 (defined by LoadBalancerPort) and are handled as HTTP connections (defined by Protocol). There are implications from this; namely the X-Forwarded-For HTTP header will be passed, and the connection is statefully passed across as a proxy. Use of HTTP on the front end also means that HTTP or HTTPS needs to be used on the back end. This is indeed the case; the listener is configured to send traffic to instances via HTTP on port 80 (defined by InstanceProtocol and InstancePort).

There are topics that are not covered in this article; namely having to do with SSL offloading (ie: using HTTPS as the front end or instance protocols), persistence, and back-end authentication. It would be wise to check out the Listeners for Your Load Balancer section of the ELB manual to get an idea of all available configurations for listeners.
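
As a rough sketch of what an SSL-offloading listener could look like (the SSLCertificateARN parameter is hypothetical and not part of the sample template), a classic ELB HTTPS listener adds an SSLCertificateId property alongside the usual port and protocol settings:

"Listeners": [{
    "LoadBalancerPort": "443",
    "Protocol": "HTTPS",
    "SSLCertificateId": { "Ref": "SSLCertificateARN" },
    "InstancePort": "80",
    "InstanceProtocol": "HTTP"
}]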

EC2

There was a time, albeit a long time ago, that AWS was simply EC2 and not much else. Although, it should be noted that SQS was the first AWS service; Jeff Barr’s article on his first 12 years at Amazon is a good read on the launch dates of SQS, EC2, and S3.

Even in the face of today’s massive AWS platform service portfolio, I personally think it’s safe to say that EC2 still has a very major place at Amazon. It serves as the building block for services like ECS (AWS’s Docker service); the EC2 instances that make up an ECS pool are, as of this writing, still visible to the end user and require some degree of management. Custom workloads may not fit the bill for use on zero-administration platforms like Lambda. Managed service providers that run their customers off AWS will have a need for the service for quite a long time to come.

EC2 is Amazon’s most basic building block, and the product that gave “Cloud Computing” its name (the acronym itself standing for Elastic Compute Cloud). It is a Xen-based virtualization platform, with features that in today’s world we take for granted, such as host redundancy and per-use billing, to name just a couple. It set the standard for how a cloud platform handles instances – virtual machines are first rolled into base units called images (which under AWS are called AMIs, for Amazon Machine Image), from which instances are created with their own storage laid on top.

This small overview does not do the service justice, and there is no way that I would be able to cover all of EC2’s features in this document without losing sight of the goal of setting up a basic VPC with CloudFormation. I would recommend the EC2 documentation for coverage on these topics, in addition to watching this space, where I will more than likely cover these topics as need be.

EC2 in CloudFormation

And now, finally, I come to the last section in this part of the series – the EC2 section of the sample CloudFormation template.

Below is the definition of one of the two EC2 instances that are set up in the template, not counting the NAT instance.

"VCTSLabSrv1": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro",
    "KeyName": { "Ref": "KeyPair" },
    "SubnetId": { "Ref": "VCTSLabSubnet2" },
    "SecurityGroupIds": [ { "Ref": "VCTSPrivateSecurityGroup" } ],
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-srv" } ],
    "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
      "#!/bin/bash -xen",
      "/usr/bin/yum -y updaten",
      "/usr/bin/yum -y install httpdn",
      "/sbin/chkconfig httpd onn",
      "echo '<html><head></head><body>vcts-lab-srv1</body></html>' > /var/www/html/index.htmln",
      "echo "/opt/aws/bin/cfn-signal -e $? ",
      "  --stack ", { "Ref": "AWS::StackName" }, " ",
      "  --resource VCTSLabSrv1 ",
      "  --region ", { "Ref": "AWS::Region" }, " ",
      "  && sed -i 's#^/opt/aws/bin/cfn-signal .*\$##g' ",
      "  /etc/rc.local" >> /etc/rc.localn",
      "/sbin/rebootn"
    ]]}}
  },
  "CreationPolicy" : { "ResourceSignal" : { "Count" : 1, "Timeout" : "PT10M" } },
  "DependsOn": "VCTSLabNatGw"
}

EC2 instances are defined with the AWS::EC2::Instance resource type. The instance type is t2.micro, the smallest of the newer-generation T2 instance types. Also, remember from the Mappings part of the CloudFormation section that the actual AMI to use is selected from the RegionMap map, based on the region that this instance is launched in.

The KeyName is chosen from the supplied key name when the CloudFormation template was launched (it was either supplied on the command line or through the CloudFormation web interface).

The subnet (specified by SubnetId) is VCTSLabSubnet2, the private subnet, along with its SecurityGroupIds, which in this case is the VCTSPrivateSecurityGroup private subnet security group (a simple allow-all, as this group will have no internet access and will only be interfacing with the NAT instance and the ELB).

Using userdata for post-creation work

The section after all the other aforementioned properties is where some of the real magic happens. The UserData property is used to create a post-installation shell script that updates the system (/usr/bin/yum -y update), installs Apache (/usr/bin/yum -y install httpd), enables the service (/sbin/chkconfig httpd on), creates an index.html page with the server ID, and finally injects a self-destructing cfn-signal command that gets run when the server reboots. This is a very simple way to get a fully deployed server in our example.

Note that there is a more capable configuration management system built right into CloudFormation for when using something heavier like Chef, Puppet, Ansible, or Salt is not possible: check out AWS::CloudFormation::Init. Incidentally, this requires the cfn-init command to be launched, which is not necessarily installed on all Linux AMIs (however it is usually available through packages, and is already on the system with Amazon Linux). cfn-init is itself generally launched through user data.
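
As a minimal sketch of that approach (this is not what the sample template does – it sticks with plain user data – and most instance properties are omitted here), the instance carries a Metadata block and calls cfn-init from its user data; cfn-init then reads the metadata at boot and performs the package installs and service configuration declared there:

"VCTSLabSrv1": {
  "Type": "AWS::EC2::Instance",
  "Metadata": {
    "AWS::CloudFormation::Init": {
      "config": {
        "packages": { "yum": { "httpd": [] } },
        "services": { "sysvinit": { "httpd": { "enabled": "true", "ensureRunning": "true" } } }
      }
    }
  },
  "Properties": {
    "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
      "#!/bin/bash -xe\n",
      "/opt/aws/bin/cfn-init -v",
      " --stack ", { "Ref": "AWS::StackName" },
      " --resource VCTSLabSrv1",
      " --region ", { "Ref": "AWS::Region" }, "\n"
    ]]}}
  }
}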

Finally, also note that user data needs to be base64 encoded – this is done by the Fn::Base64 section in the example.

Creation policies and dependencies

The last little bit that needs to be mentioned regarding the EC2 instances is the creation policies and dependencies attached to them. These are not unique to EC2 instances (and hence are not properties of that specific resource type, as can be seen from their placement outside the Properties block).

Consider the following scenario: the NAT instance has generally the same UserData as the EC2 instances – it updates and reboots as well. During the period that the NAT instance is rebooting, internet access will be unavailable to the 2 web instances in the private subnet. If all 3 instances were set to install at the same time without the web instances waiting for the NAT instance, it is plausible that there would be a time where the web instances would be attempting updates while the NAT instance was rebooting. This would, of course, break updates, and possibly the creation of the CloudFormation stack.

This is what creation policies and dependencies are for. Generally, when using user data, one does not want to count a resource as created until everything is done. In this case, that means the instance has had all of its software updated, any other software it needs installed (ie: Apache in the case of the web instances), and has been fully rebooted.

The CreationPolicy defined above waits until all of that is done. Specifically, as defined there, it waits for one cfn-signal command to be run for the resource (defined by ResourceSignal), with a 10-minute Timeout (if the format looks weird, that is because it is an ISO 8601 duration). This gives the node enough time to fully update and restart.

And finally, the DependsOn attribute ties the web instances to the NAT instance. This will ensure CloudFormation waits until the NAT instance (referring to it by its named resource, VCTSLabNatGw) has completed creation and received its own cfn-signal before even attempting to create them, giving us an error-free template!

Conclusion

This concludes the intro article. I hope that you found the material informative!

Watch this space for much more in the way of coverage of AWS services as I continue my “world tour”. I’m not going to say 100% what is next, but more than likely Route 53 will be on the radar shortly, possibly along with an introduction to Identity and Access Management and Security Token Service – both are pretty important when organizing security on an AWS account these days, and there is a lot to digest.

See you then!


AWS Basics Using CloudFormation (Part 2) – Amazon VPC

This is the second part of a 3-part article covering the basics of AWS through using CloudFormation. For the first part of this article, click here.

Last week I covered the basics of CloudFormation – giving an idea of how a template is generally structured and its specific elements. This time, I am going into detail on how to set up Amazon VPC. Again, Amazon VPC is the first part of almost any AWS deployment – think of it as the datacenter and network layer. Not much else can happen unless those are present. There are a few exceptions to this rule, such as Route 53, Amazon’s DNS hosting service, but even Route 53 can be provisioned in a VPC to provide private hosted zones that are not externally available.

And again, if you would like to follow along at home, remember to check out the template on the GitHub project.

Regions and Availability Zones

Two key concepts that should be explained while discussing VPCs are the concepts of regions and availability zones.

As a modern computing infrastructure that serves a wide variety of clients, AWS datacenters are spread throughout the world. These are known as regions. Examples of regions would be the AWS datacenters in Virginia (us-east-1), Northern California (us-west-1), and Oregon (us-west-2).

Within each of these regions are availability zones, which, according to the EC2 FAQ, are separated from each other such that physical failures such as power outages or even a fire at one availability zone would not affect the others.

VPCs can span availability zones but not regions. VPCs within a region can be linked with a peering connection; to interconnect across regions, the VPCs need to be connected via other means, such as a VPN.

Private and Public Networks

How private and public networks work in a VPC takes a little getting used to, especially if one is used to engineering one’s own networks.

First off, the difference between a private and a public subnet is very subtle – public subnets have an internet gateway attached to them, and instances require a public IP address to be able to get internet access. Instances that do not have public IPs cannot access the internet on their own, regardless of whether they are in a public subnet.

To solve the problem, one can use a NAT instance. The concept is explained below. What I do not cover are some of the other considerations that come with this “manual” process, such as redundancy and security of the network. Hopefully, Amazon will consider automating this piece of the infrastructure soon, as it is the one manual setup element in the VPC platform and hence probably the most failure-prone.

Advanced VPC topics

I do not cover more in-depth VPC topics here; however, due to the scale of some of Amazon’s customers, it is only natural that they would have a wide range of options for connecting enterprise networks to a VPC.

Consult the Network Administrator Guide for help on these integration topics (such as using VPNs, or routing protocols like BGP).

VPCs in CloudFormation

As there are a lot of VPC components in the CloudFormation template, I have broken up some of the items into respective sub-sections.

The root VPC resource is defined as an AWS::EC2::VPC resource:

"VCTSLabVPC1": {
  "Type": "AWS::EC2::VPC",
  "Properties": {
    "CidrBlock": "10.0.0.0/16",
    "EnableDnsHostnames": true,
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-vpc" } ]
  }
}

A couple of other things to note here: CidrBlock cannot be any bigger than a /16, so if you get errors about the network address being invalid, try reducing the network size. EnableDnsHostnames allows DNS hostnames to be assigned to instances as they start up in this VPC – leaving this off may be useful if DNS will be managed outside of AWS, but otherwise it’s generally a good idea to enable it.

Subnets, gateways, and routes

There are a couple of examples on subnets in the CloudFormation template, since it makes use of a NAT instance as well. This section will discuss the default (public) subnet to start with, and I will expand into the private subnet during the NAT instance section.

"VCTSLabSubnet1": {
"Type": "AWS::EC2::Subnet",
  "Properties": {
    "CidrBlock": "10.0.0.0/24",
    "MapPublicIpOnLaunch": true,
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-subnet" },
      { "Key": "subnet-type", "Value": "public" }
    ],
    "VpcId": { "Ref": "VCTSLabVPC1" }
  }
},
"VCTSLabGateway": {
  "Type": "AWS::EC2::InternetGateway",
  "Properties": {
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-gateway" } ]
  }
},
"VCTSLabGatewayAttachment": {
  "Type": "AWS::EC2::VPCGatewayAttachment",
  "Properties": {
    "InternetGatewayId": { "Ref": "VCTSLabGateway" },
    "VpcId": { "Ref": "VCTSLabVPC1" }
  }
},
"VCTSLabPublicRouteTable": {
  "Type": "AWS::EC2::RouteTable",
  "Properties": {
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-routetable" },
      { "Key": "routetable-type", "Value": "public" }
    ]
  }
},
"VCTSLabPublicDefaultRoute": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "DestinationCidrBlock": "0.0.0.0/0",
    "GatewayId": { "Ref": "VCTSLabGateway" },
    "RouteTableId": { "Ref": "VCTSLabPublicRouteTable" }
  }
},
"VCTSLabPublicSubnet1Assoc": {
  "Type": "AWS::EC2::SubnetRouteTableAssociation",
  "Properties": {
    "SubnetId": { "Ref": "VCTSLabSubnet1" },
    "RouteTableId": { "Ref": "VCTSLabPublicRouteTable" }
  }
}

There are a lot of things going on here. First, the subnet is defined with the AWS::EC2::Subnet resource type. The MapPublicIpOnLaunch property makes this the de facto public subnet, as anything launched in this subnet will get a public IP address. However, this is only part of the story, as without a gateway, public IP address assignments will not be possible or functional.

To that end, various routing resources are created: a gateway (type AWS::EC2::InternetGateway), a route table (AWS::EC2::RouteTable), and a default route (resource type AWS::EC2::Route). This effectively makes a route table with a default route; however, the gateway also needs to be attached to the VPC, and the route table associated with the subnet. This is done using the AWS::EC2::VPCGatewayAttachment and AWS::EC2::SubnetRouteTableAssociation resources, respectively.

At this point, the subnet is ready for use; however, without security policies it may not be of much use, or may be extremely insecure.

Security groups

Shown below is the CloudFormation resource for the public subnet’s security group. This is defined as type AWS::EC2::SecurityGroup, and is named for the fact that it will be applied mainly to the NAT instance.

"VCTSNatSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-sg" } ],
    "GroupDescription": "NAT (External) VCTS security group",
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "SecurityGroupIngress": [
      { "IpProtocol": "tcp", "CidrIp": "10.0.1.0/24", "FromPort": "80", "ToPort": "80" },
      { "IpProtocol": "tcp", "CidrIp": "10.0.1.0/24", "FromPort": "443", "ToPort": "443" },
      { "IpProtocol": "tcp", "CidrIp": { "Ref": "SSHAllowIPAddress" }, "FromPort": "22", "ToPort": "22" }
    ],
    "SecurityGroupEgress": [
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "22", "ToPort": "22" },
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "80", "ToPort": "80" },
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "443", "ToPort": "443" }
    ]
  }
}

Access rules are defined through the SecurityGroupIngress (inbound) and SecurityGroupEgress (outbound) properties. SSH traffic is allowed inbound only from the IP address supplied via the SSHAllowIPAddress parameter. SSH is also allowed outbound generally – this allows the ability to “bounce” off the NAT instance into the private network. HTTP and HTTPS are allowed inbound from the private subnet (10.0.1.0/24) and outbound to anywhere – this might be confusing at first, seeing as there is some overlap with the (not shown) load balancer access list, but consider the fact that the NAT instance has to handle traffic going in both directions – HTTP needs to come in to the NAT instance from the private subnet and then back out to the internet, so the access list has to allow for both legs.

Two security groups are not shown here – the load balancer group (named VCTSElbSecurityGroup) and the private group (named VCTSPrivateSecurityGroup). The former simply allows HTTP traffic generally, and the latter is a general allow for any traffic flowing into private instances.
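
As a rough sketch only (consult the template on GitHub for the exact definitions), the ELB group described above would look something along these lines:

"VCTSElbSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "ELB (HTTP) VCTS security group",
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "SecurityGroupIngress": [
      { "IpProtocol": "tcp", "CidrIp": "0.0.0.0/0", "FromPort": "80", "ToPort": "80" }
    ]
  }
}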

One last thing to note – keep in mind that there are two kinds of network security concepts in a VPC – security groups, as shown here, and network ACLs, which I do not discuss. The former are applied at the instance level, and the latter at the subnet level. At the very least, there should be some security applied at the instance level; however, adding network ACLs provides a fallback in case that is missing. As an example, the private security group is very loose, and the private subnet could benefit from an ACL being applied to it, just in case a public address got assigned to an instance in it (even though, as the CloudFormation template is currently set up, that is impossible). The VPC Security Comparison document gives a great breakdown of the differences.

NAT instances

Below are the CloudFormation template snippets for the NAT instance. There is some information overlap here: the subnet and route table items have mostly been explained already, and the network security group has already been shown above, so it is not repeated. Given that, I will just show the relevant subnet and route entries, and then briefly explain the NAT instance itself, which is an EC2 instance – and of course, I will be describing EC2 in detail later on.

Here are the private subnet and routes. Note how MapPublicIpOnLaunch is off. Also, there is a hook into the availability zone that the public subnet is in, as load balancing can break if the private and public subnets are inconsistently created in different availability zones. Also, the private subnet uses the NAT instance as the default route.

"VCTSLabSubnet2": {
"Type": "AWS::EC2::Subnet",
  "Properties": {
    "CidrBlock": "10.0.1.0/24",
    "MapPublicIpOnLaunch": false,
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-subnet" },
      { "Key": "subnet-type", "Value": "private" }
    ],
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "AvailabilityZone": { "Fn::GetAtt" : [ "VCTSLabSubnet1", "AvailabilityZone" ] }
  }
},
"VCTSLabPrivateRouteTable": {
  "Type": "AWS::EC2::RouteTable",
  "Properties": {
    "VpcId": { "Ref": "VCTSLabVPC1" },
    "Tags": [
      { "Key": "resclass", "Value": "vcts-lab-routetable" },
      { "Key": "routetable-type", "Value": "private" }
    ]
  }
},
"VCTSLabPrivateDefaultRoute": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "DestinationCidrBlock": "0.0.0.0/0",
    "InstanceId": { "Ref": "VCTSLabNatGw" },
    "RouteTableId": { "Ref": "VCTSLabPrivateRouteTable" }
  }
},
"VCTSLabPrivateSubnet2Assoc": {
  "Type": "AWS::EC2::SubnetRouteTableAssociation",
  "Properties": {
    "SubnetId": { "Ref": "VCTSLabSubnet2" },
    "RouteTableId": { "Ref": "VCTSLabPrivateRouteTable" }
  }
}

And here is the NAT instance itself. Again, I am not describing it in detail here, as it is an EC2 instance and will be explained in its own section. However, do note that the AMI maps to a list of Amazon Linux images specifically configured for NAT use – there are scripts on these instances that set up the NAT table and IP forwarding. Also, the NAT instance does not have an interface in each of the two subnets, like a traditional router would – traffic flows through the VPC’s own routers in such a way that only an IP in the public subnet is required.

"VCTSLabNatGw": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "NatRegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro",
    "KeyName": { "Ref": "KeyPair" },
    "SubnetId": { "Ref": "VCTSLabSubnet1" },
    "SourceDestCheck": false,
    "SecurityGroupIds": [ { "Ref": "VCTSNatSecurityGroup" } ],
    "Tags": [ { "Key": "resclass", "Value": "vcts-lab-natgw" } ],
    "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
      "#!/bin/bash -xen",
      "/usr/bin/yum -y updaten",
      "echo "/opt/aws/bin/cfn-signal -e $? ",
      "  --stack ", { "Ref": "AWS::StackName" }, " ",
      "  --resource VCTSLabNatGw ",
      "  --region ", { "Ref": "AWS::Region" }, " ",
      "  && sed -i 's#^/opt/aws/bin/cfn-signal .*\$##g' ",
      "  /etc/rc.local" >> /etc/rc.localn",
      "/sbin/rebootn"
    ]]}}
  },
  "CreationPolicy" : { "ResourceSignal" : { "Count" : 1, "Timeout" : "PT10M" } }
}

Next Article – ELB and EC2

Stay tuned for the conclusion of this 3-part article, where I discuss setting up ELB and EC2 and their respective items in the CloudFormation template!

AWS Basics Using CloudFormation (Part 1) – Introduction to CloudFormation

This article is the first in many – as mentioned in the last article, I will be writing more articles over the course of the next several months on AWS, touching on as much of the service as I can get my hands on.

For my first article, I am starting with the basics – CloudFormation, Amazon VPC (Virtual Private Cloud), Elastic Load Balancing, and finally, EC2. The services covered here serve as some of the basic building blocks of an Amazon infrastructure, and are some of the oldest components of AWS. This will serve as an entry point not only into further articles, but for myself, and you the reader, into learning more about AWS and being more comfortable with the tools that manage it.

However, this article got so large that I have had to separate it into 3 parts! So, for this first part, I will mainly be covering CloudFormation; the second will cover VPC, and the third will cover ELB and EC2.

Viewing the Technical Demo

All of the items covered in this article have been assembled into a CloudFormation template that can be downloaded from the GitHub page:

https://github.com/vancluever/aws-basics-using-cloudformation

There is a README there that provides instructions on how to download and use the template.

Introduction

I selected the first features of AWS to cover in a way that could give someone who is already familiar with the basic concepts of modern cloud computing and devops (which include virtual infrastructure, automation, and configuration management) an idea of what those mean when dealing with AWS and its products. Ultimately, this meant creating an example that would bring up a full running basic “application” that could be created and destroyed with a single command.

CloudFormation is Amazon’s primary orchestration product, and covers a wide range of services that make up the core of AWS’s infrastructure. It is used in this article to manage every service I touch – besides IAM and access keys, which are not covered here, nothing in this example has been set up through the AWS console. Given that the aforementioned two items have been set up, all that is necessary to create the example is a simple aws cloudformation CLI command.
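
As a sketch of what that command might look like (the stack name, template file name, key pair name, and IP address below are placeholders – see the README in the GitHub project for the exact invocation):

aws cloudformation create-stack \
  --stack-name vcts-lab \
  --template-body file://vcts-lab.template \
  --parameters ParameterKey=KeyPair,ParameterValue=my-keypair \
               ParameterKey=SSHAllowIPAddress,ParameterValue=203.0.113.10/32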

Amazon VPC is the modern (and current) virtual datacenter platform that makes up the base of AWS. From a VPC, networks, gateways, access lists, and peer connections (such as VPN endpoints and more) are made to cover both the needs of a public-facing application and the private enterprise. It is pretty much impossible to have a conversation about AWS these days without using VPC.

Amazon EC2 is one of Amazon’s oldest and most important products. It is the solution that gave “the cloud” its name, and while Amazon has created a large number of platform services that have removed the need for EC2 in quite a few applications (indeed, one can run an entire application these days in AWS without a single EC2 instance), it is still highly relevant, and will continue to be for as long as there is ever a need to run a server and not a service. Products such as VPC NAT instances (covered in part 2) and Amazon EC2 Container Service (not covered here) also use EC2 directly without abstracting it away, so its importance in the service is still directly visible to the user.

I put these three products together in this article – with CloudFormation, a VPC is created. This VPC has two subnets, a public subnet and a private subnet, along with a NAT instance, so that one can see some of the gotchas that can be encountered when setting up such infrastructure (and hopefully avoid some of the frustration that I experienced, mentioned in the appropriate section). An ELB is also created for two EC2 instances that will, upon creation, do some basic configuration to make themselves available over HTTP and serve up a simple static page that allows one to see both the ELB and EC2 instances in action.

CloudFormation

CloudFormation is Amazon’s #1 infrastructure management service. With features that cover both deployment and configuration management, the service supports over two dozen AWS products, and can be extended to support external resources (and AWS processes not directly supported by CloudFormation) via custom resources.

One does not necessarily need to start off with CloudFormation completely from scratch. There are templates available at the AWS CloudFormation Templates page that have both examples of full stacks and individual snippets of various AWS services, which can be a great time saver in building custom templates.

The following few sections cover CloudFormation elements in further detail. It is a good idea to consult the general CloudFormation User Guide, which provides a supplemental to the information below, and also a good reference while designing templates, be it starting from scratch or using existing templates.

CloudFormation syntax synopsis

Most CloudFormation items (aside from the root items like template version and description) can be summarized as being a name/type pairing. Basically, whether dealing with parameters, resources, mappings, or anything else, items in CloudFormation are generally assigned a unique name and then a type. Consider the following example parameter:

"Parameters": {
  "KeyPair": {
    "Type": "AWS::EC2::KeyPair::KeyName",
    "Description": "SSH key that will be used for EC2 instances (set up in web console)",
    "ConstraintDescription": "needs to be an existing EC2 keypair (set up in web console)"
  }
}

This parameter is an AWS::EC2::KeyPair::KeyName parameter named KeyPair. The latter name can (and will) be referenced in resources, such as the EC2 instance definitions (see the section on EC2 below).

Look in the below sections for CloudFormation’s Ref function, which will be used several times; this function serves as the basis for referencing several kinds of CloudFormation elements, not just parameters.

Parameters and Outputs

Parameters are how data gets in to a CloudFormation template. This can be used to do things like get IDs of SSH keys to assign to instances (as shown above), or IP addresses to assign to security group ACLs. These are the two items parameters are used for in the example.

Outputs are how data gets out of CloudFormation. Data that is a useful candidate for being published through outputs includes instance IP addresses, ELB host names, VPC IDs, and anything else that may be useful to a process outside of CloudFormation. This data can be read through the UI, or through the JSON data produced by the aws cloudformation describe-stacks CLI command (and probably the API as well).
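
For example, assuming the stack was named vcts-lab (the name is a placeholder), the outputs can be pulled from the CLI with something like:

aws cloudformation describe-stacks --stack-name vcts-lab \
  --query 'Stacks[0].Outputs'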

Parameter syntax

Let’s look at the other example in the CloudFormation template, the SSHAllowIPAddress parameter. This example uses more generic data types and gives a bigger picture of what is possible with parameters. Note that there are several data types that can be used, including both typical generic data types, such as String and Number, and AWS-specific types such as the AWS::EC2::KeyPair::KeyName parameter used above.

"SSHAllowIPAddress": {
  "Type": "String",
  "AllowedPattern": "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\/32",
  "Description": "IP address to allow SSH from (only /32s allowed)",
  "ConstraintDescription": "needs to be in A.B.C.D/32 form"
}

This parameter is of type String, which means the AllowedPattern constraint can be used on it; it is used here to create a dotted-quad regular expression, with the /32 netmask being explicitly enforced. JSON/JavaScript string syntax applies here, which explains the somewhat excessive number of backslashes.
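
As another illustration of what parameters can do – this one is hypothetical and not part of the sample template – a String parameter can also constrain input through a Default and a list of AllowedValues:

"WebInstanceType": {
  "Type": "String",
  "Default": "t2.micro",
  "AllowedValues": [ "t2.micro", "t2.small", "t2.medium" ],
  "Description": "EC2 instance type for the web servers",
  "ConstraintDescription": "needs to be a supported t2 instance type"
}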

Parameters are referenced using the Ref function. The snippet below gives an example of how SSHAllowIPAddress is referenced:

"VCTSNatSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    ...
    "SecurityGroupIngress": [
      ...
      { "IpProtocol": "tcp", "CidrIp": { "Ref": "SSHAllowIPAddress" }, "FromPort": "22", "ToPort": "22" }
    ],
    ...
  }
}

Ref is a very simple function, usually just used to refer back to a CloudFormation element. It is not restricted to parameters – it is used with parameters, mappings, and resources alike. Further examples are given below, so there should be a good idea of how to use it by the end of this article.

Output syntax

Below is the NatIPAddr output, pulled from the example.

"Outputs": {
  "NatIPAddr": {
    "Description": "IP address of the NAT instance (shell to this address)",
    "Value": { "Fn::GetAtt": [ "VCTSLabNatGw", "PublicIp" ] }
  },
  ...
}

The nature of outputs is pretty simple. The data can be pulled in any way that allows one to get the needed value. Most commonly, this will be via the Fn::GetAtt function, which can be used to get various attributes from resources, or possibly Ref, which in the case of resources usually references a specific primary attribute.

Mappings

Mappings allow a CloudFormation template some flexibility. The best example of this is allowing the CloudFormation template to be used in multiple regions, by mapping AMIs (instance images) to their respective regions.

Mapping syntax

This is the mapping in the reference template, and it maps to Amazon Linux AMIs. These were chosen because they support cfn-init out of the box, which was originally going to be used in the CloudFormation template to run some commands via the AWS::CloudFormation::Init resource type in the EC2 section, but I opted to use user data instead (I cover this in further detail in part 3).

"Mappings": {
  "RegionMap": {
    "us-east-1": { "AMI": "ami-1ecae776" },
    "us-west-1": { "AMI": "ami-e7527ed7" },
    "us-west-2": { "AMI": "ami-d114f295" }
  },
  "NatRegionMap": {
    "us-east-1": { "AMI": "ami-303b1458" },
    "us-west-1": { "AMI": "ami-7da94839" },
    "us-west-2": { "AMI": "ami-69ae8259" }
  }
}

The above RegionMap is then referenced in EC2 instances like so:

"VCTSLabNatGw": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "NatRegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
    "InstanceType": "t2.micro",
    ...
  }
}

This is one of many ways to use mappings, and more complex structures are possible. Check the documentation for further examples (such as how to expand the above map to make use of processor architecture in addition to the region).
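
As a sketch of that idea (the second-level keys and the PV AMI IDs here are placeholders, not values from the sample template), the map gains an architecture level, and Fn::FindInMap then selects on both region and architecture:

"RegionArchMap": {
  "us-east-1": { "HVM64": "ami-1ecae776", "PV64": "ami-00000000" },
  "us-west-2": { "HVM64": "ami-d114f295", "PV64": "ami-00000000" }
}

"ImageId": { "Fn::FindInMap": [ "RegionArchMap", { "Ref": "AWS::Region" }, "HVM64" ] }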

Resources

Resources do the real work of CloudFormation. They create the specific elements of the stack and interface with the parameters, mappings, and outputs to do the work necessary to bring up the stack.

Since resources vary so greatly in what they need in real-world examples, I explain each service that the template makes use of in its respective section (ie: the VPC, ELB, and EC2 sections). However, some common elements are explained here in brief, to give a primer on how they can be used to further control orchestration of the stack. Again, further detail on how to use these is shown in the examples for the various AWS services explained below.

Creation Policies, Dependencies, and Metadata

A CreationPolicy can be used as a constraint to determine when a resource is counted as created. For example, this can be used with cfn-signal on an EC2 instance to ensure that the resource is not marked as CREATE_COMPLETE until all reasonable post-installation work has been done on an instance (for example, after all updates have been applied or certain software has been installed).

A dependency (defined with DependsOn) is a simple association to another resource that ties its creation with said parent. For example, the web server instances in the example do not start creation until the NAT instance is complete, as they are created in a private network and will not install properly unless they have internet access available to them.

Metadata can be used for a number of things. The example commonly explained is the use of the AWS::CloudFormation::Init metadata type to provide data to cfn-init, which is a simple configuration management tool. This is not covered in the example, as the work that is being done is simple enough to be done through UserData.

All 3 of these concepts are touched on in further detail in part 3, when EC2 and the setup of an instance in CloudFormation are discussed.

Next Article – Amazon VPC

That about covers it for the CloudFormation part of this article. Stay tuned for the next part, in which I cover Amazon VPC basics, in addition to how it is set up in CloudFormation!

AWS NYC Summit 2015 Recap

When one gets the chance to go to New York, one takes it, as far as I’m concerned. So when PayByPhone offered to send me to the AWS NYC Summit, I totally jumped at the chance. In addition to getting to stand on top of two of the world’s tallest buildings, take a bike across the Brooklyn Bridge, and get some decent Times Square shots, I got to learn quite a bit about ops using AWS. Win-win!

The AWS NYC summit was short, but definitely one of the better conferences that I have been to. I did the Taking AWS Operations to the Next Level “boot camp” – a day-long training course – during the pre-day, and attended several of the breakouts the next day. All of them had great value and I took quite a few notes. I am going to try and abridge all of my takeaways on the products, “old” and new, that caught my eye below.

CloudFormation

CloudFormation was a product that was covered in my pre-day course and also during one of the breakouts that I attended. It’s probably the most robust ops product offered on AWS today, supporting, from my impressions, more products than any of the other automation platform services on offer.

The idea with CloudFormation, of course, is that infrastructure is codified in a JSON-based template, which is then used to create “stacks” – entities that group infrastructure and platform services in ways that can be duplicated, destroyed, or even updated with a sort of intelligent convergence, adding, removing, or changing resources depending on what has been defined in the template. Naturally, this can be integrated with any sort of source control so that changes are tracked, and with a CI and deployment pipeline to further automate things.

One of the neat features that was mentioned in the CloudFormation breakout was the ability for CloudFormation to use Lambda-backed resources to interface with AWS features that CloudFormation does not have native support for, or even non-AWS products. All of this makes CloudFormation definitely seem like the go-to product if one is planning on using native tools to deploy AWS infrastructure.

OpsWorks

OpsWorks is Amazon’s Chef product, although it’s quite a bit more than that. It seems mainly like a hybrid of a deployment system like CloudFormation, with Chef being used to manage configuration through several points of the lifecycle. It uses chef-solo or chef-zero depending on the OS it is being employed for (Linux is chef-solo and Chef 11, Windows is chef-zero and Chef 12), and since it is all run locally, there is no Chef server.

In OpsWorks, an application stack is deployed using components called Layers. Layers exist for load balancing, application servers, and databases, in addition to others such as caching, and even custom layers whose functionality is created through Chef cookbooks. With support for even some basic monitoring, one can probably run an entire application in OpsWorks without ever touching another AWS ops tool.

AWS API Gateway

A few new products were announced at the summit – but API Gateway was the one killer app that caught my eye. Ultimately it means that developers do not really need to mess around with frameworks any more to get an API, or even a web application, off the ground – just hook the endpoints into API Gateway, integrate them with Lambda, and it’s done. With the way that AWS’s platform portfolio is looking these days, I’m surprised that this one was so late to the party!

CodeDeploy, CodePipeline, and CodeCommit

These were presented to me in a breakout that gave a bit of a timeline on how Amazon internally developed their own deployment pipelines. Ultimately they segued into these three tools.

CodeDeploy is designed to deploy an application not only to AWS, but also to on-premise resources, and even other cloud providers. The configuration is YAML-based and pretty easy to read. As part of its deployment feature set, it does offer some orchestration facilities, so there is some overlap with AWS’s other tools. It also integrates with several kinds of source control platforms (ie: GitHub), other CI tools (ie: Jenkins), and configuration management systems. The killer feature for me is its support for rolling updates, automating the deployment of new infrastructure while draining out the old.

CodePipeline is a release modeling and workflow engine, and can be used to model a release process, working with CodeDeploy, to automate the deployment of an application from source, to testing, and then to production. Tests can be automated using services like RunScope or Ghost Inspector, to name just a couple. It definitely seems like these two – CodePipeline and CodeDeploy – are naturally coupled to give a very smooth software deployment pipeline – on AWS especially.

CodeCommit is AWS’s foray into source control, a la GitHub, Bitbucket, etc. Aside from all the general things that one would hopefully expect from a hosted source control service (ie: at-rest encryption and high availability), expect a few extra AWS-ish perks, like the ability to use an existing IAM scheme to assign ACLs to repositories. Unlimited storage per repository was mentioned, or at least implied, but there does appear to be some metering – see here for pricing.

EC2 Container Service (ECS)

The last breakout I checked out was one on EC2 Container Service (ECS), AWS’s Docker integration. The breakout itself spent a bit of time on an intro to containers, which is out of the scope of this article but is a topic I may touch on at a later time (Docker is on “the list” of things I want to evaluate and write about). Docker is a great concept. It rolls configuration management and containerization into one tool for deploying applications, and gives a huge return on infrastructure in the form of speed and density. The one unanswered question has been clustering, but there have been several 3rd-party solutions for that for a while, and Docker itself has a solution that is still fairly new.

ECS does not appear to be Swarm, but its own mechanism. In addition to clustering, ECS works with other AWS services, such as Elastic Load Balancing (ELB), Elastic Block Storage (EBS), and back-office things like IAM and CloudTrail. Container definitions are rolled into entities called tasks, where one can also specify resource requirements, volumes to use, and more. Then, one can use a task to create a run task, which will run for a finite amount of time and then terminate, or a service, which will ensure that the task stays up and running indefinitely. The ability also exists to specify an instance count for a task, which is then spread out across the instance pool.
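
As a rough sketch of what a task definition looks like (this is illustrative only, not something presented at the breakout – the family name, image, and resource values are placeholders), a single-container web server task might be registered with JSON along these lines:

{
  "family": "vcts-web",
  "containerDefinitions": [
    {
      "name": "httpd",
      "image": "httpd:2.4",
      "cpu": 128,
      "memory": 128,
      "portMappings": [ { "containerPort": 80, "hostPort": 80 } ],
      "essential": true
    }
  ]
}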

This is probably a good time to mention that ECS still does not abstract away the container infrastructure. There is a bit of automation to help with this, but EC2 instances still need to be manually spun up and added to the ECS pool, from which ECS then derives the available amount of resources and how it distributes the containers. One would assume that there are plans to eliminate the visibility of the EC2 layer from the user – it seems like Amazon is aware of requests to do this, as was mentioned when I asked the question.

Ultimately, it looks like there is still some work to do to make ECS completely integrated. Auto-scaling is another feature that is still a manual process. For now, there are documented examples on how to do things like glue auto-scaling together using Lambda, for example.

Wrap-Up

You can see all – or at least most – of the presentations and breakouts that happened at the summit (and plenty of other conferences) – on the AWS YouTube page.

And as for me, expect to see, over the next several weeks, a product-by-product review of as much of AWS as I can eat. I will be starting with some of the products that were discussed above, with a mix of some of the old standbys in as well to ensure I cover things from the ground up. Watch this space!