New c3.* Instance Types on Amazon EC2 - Nice!

Worth switching.

Amazon shared that the new c3.* instance types have been in high demand on EC2 since they were released.

I finally had a minute to look at the specs for the c3.* instances, which were just announced at AWS re:Invent, and it is obvious why they are popular, and why they should probably be even more popular than they are.

Let’s just take a look at the cheapest of these, the c3.large, and compare it to the older generation c1.medium, which is similar in price:

Query EC2 Account Limits with AWS API

Here’s a useful tip mentioned in one of the sessions at AWS re:Invent this year.

There is a little-known API call that lets you query some of the EC2 limits/attributes in your account. The API call is DescribeAccountAttributes, and you can use the aws-cli to query it from the command line.

For full JSON output:

aws ec2 describe-account-attributes

To query select limits/attributes and output them in a handy table format:
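
For example, picking a couple of attributes (the attribute names here are just a sample; the full output above shows which ones your account reports):

aws ec2 describe-account-attributes \
  --attribute-names max-instances max-elastic-ips \
  --query 'AccountAttributes[*].[AttributeName,AttributeValues[0].AttributeValue]' \
  --output table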

Using aws-cli --query Option To Simplify Output

My favorite session at AWS re:Invent was James Saryerwinnie’s clear, concise, and informative tour of the aws-cli (command line interface), which according to GitHub logs he is enhancing like crazy.

I just learned about a recent addition to aws-cli: The --query option lets you specify what parts of the response data structure you want output.

Instead of wading through pages of JSON output, you can select a few specific values and output them as JSON, table, or simple text. The new --query option is far easier to use than jq, grep+cut, or Perl, my other fallback tools for parsing the output.

aws --query Examples

The following sample aws-cli commands use the --query and --output options to extract the desired output fields so that we can assign them to shell variables:
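
A sketch of the pattern (the instance id is a placeholder):

# Capture the availability zone of an instance in a shell variable
zone=$(aws ec2 describe-instances \
  --instance-ids i-12345678 \
  --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' \
  --output text)
echo "$zone"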

Reset S3 Object Timestamp for Bucket Lifecycle Expiration

use aws-cli to extend expiration and restart the delete or archive countdown on objects in an S3 bucket

Background

S3 buckets allow you to specify lifecycle rules that tell AWS to automatically delete or archive any objects in that bucket after a specific number of days. You can also specify a prefix with each rule so that different objects in the same bucket stay for different amounts of time.

Example 1: I created a bucket named logs.example.com (not the real name) that automatically archives an object to AWS Glacier after it has been sitting in S3 for 90 days.

Example 2: I created a bucket named tmp.example.com (not the real name) that automatically deletes a file after it has been sitting there for 30 days.

This works great until you realize that there are specific files you want to keep around just a bit longer than their original expiration.

You could download and then upload the object to reset its creation date, thus starting the countdown from zero. Through a little experimentation, however, I found an easier way to reset the creation/modification timestamp on an S3 object: ask S3 to change the object's storage class to the same storage class it currently has.

The following example uses the new aws-cli command line tool to reset the timestamp of an S3 object, thus restarting the lifecycle counter. This has an effect similar to the Linux/Unix touch command.
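
A sketch of that in-place copy, with hypothetical bucket and key names (explicitly specifying the storage class, even though it is unchanged, is what makes the self-copy legal):

aws s3api copy-object \
  --bucket tmp.example.com \
  --key myfile.txt \
  --copy-source tmp.example.com/myfile.txt \
  --storage-class STANDARD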

Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.
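
As a taste, here is one common installation path (a sketch assuming Python's pip is available; credentials can also be supplied in other ways):

# Install the unified CLI from PyPI
pip install awscli

# Provide credentials through environment variables
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key

# Verify the install
aws ec2 describe-regions --output table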

Using An AWS CloudFormation Stack To Allow "-" Instead Of "+" In Gmail Email Addresses

Launch a CloudFormation template to set up a stack of AWS resources to fill a simple need: Supporting Gmail addresses with “-” instead of “+” separating the user name from the arbitrary tag strings.

The CloudFormation stack launched by the template consists of:

  • ELB (Elastic Load Balancer)
  • Auto Scaling Group
  • EC2 instance(s) running Postfix on Ubuntu set up by a user-data script
  • Security Group allowing ELB to connect to the instances
  • CloudWatch CPU high/low alarms
  • Auto Scaling scale up/down policies
  • SNS (Simple Notification Service) topic for notification of Auto Scaling events
  • Route53 Record Set

This basic stack structure can be used as a solution for a large number of different needs, but in this example it is set up as an SMTP email relay that filters and translates email addresses for Google Apps for Business customers.

Because it uses Auto Scaling, ELB, and Route53, it is scalable and able to recover from various types of failures.

If you’re in a rush to see code, you can look at the CloudFormation template and the initialization script run from the user-data script.

Now, let’s look a bit more in depth at the problem this is solving and how to set up the solution.
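
If you just want to see a stack launch in action, the aws-cli invocation looks roughly like this (the stack name, template file, and parameter key are hypothetical; the real template defines its own parameters):

aws cloudformation create-stack \
  --stack-name gmail-dash-relay \
  --template-body file://relay-template.json \
  --parameters ParameterKey=MailDomain,ParameterValue=example.com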

New Options In ec2-expire-snapshots v0.11

The ec2-expire-snapshots program can be used to expire EBS snapshots in Amazon EC2 on a regular schedule that you define. It can be used as a companion to ec2-consistent-snapshot or independently.

There have been two recent submissions to the code from the community that provide new command line options in the latest version (v0.11) of ec2-expire-snapshots.

  1. Wayne Robinson discovered that EC2 sometimes limits the rate at which you can delete snapshots, and submitted code for a new --delete-delay option that tells ec2-expire-snapshots to pause for N seconds between each EBS snapshot deletion.

  2. Anthony Tonns uses EC2’s new feature to copy EBS snapshots from one region to another for redundancy, and found that Amazon does not associate snapshots from the same EBS volume in the source region with the same source volume in the target region. Anthony came up with the idea of putting the source volume id in a tag and submitted code for a new --volume-id-in-tag option that lets you specify the tag name.

Thanks also to varunwy for submitting a patch a while back to clean up some dependencies in the package installation.

On Ubuntu, you can install ec2-expire-snapshots from the Alestic PPA using:
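
Something like the following (a sketch of the usual pattern for the Alestic PPA):

sudo add-apt-repository ppa:alestic
sudo apt-get update
sudo apt-get install -y ec2-expire-snapshots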

Replacing a CloudFront Distribution to "Invalidate" All Objects

I was chatting with Kevin Boyd (aka Beryllium) on the ##aws Freenode IRC channel about the challenge of invalidating a large number of CloudFront objects (35,000) due to a problem where the cached copies of the objects were out of date and the system had not been designed with versioning in the object path or name.

In addition to the work of performing all of these invalidations (in batches of up to 1,000 per request, with at most 3 requests outstanding), there is also the issue of cost. The first thousand CloudFront invalidations in a month are free, but the remainder of the invalidations in this case would cost $170 (at $0.005 for each object).

It occurred to me that one could take advantage of the on-demand nature of AWS by using the following approach:

Email Alerts for AWS Billing Alarms

using CloudWatch and SNS to send yourself email messages when AWS costs accrue past limits you define

The Amazon documentation describes how to use the AWS console to monitor your estimated charges using Amazon CloudWatch and includes some pointers for folks using the command line. Unfortunately, that article leaves out the commands to set up the SNS (Simple Notification Service) topics and SNS subscriptions, so I present here the complete steps I use.

I like using the command line tools as they let me automate and repeat actions without having to do lots of pointing, clicking, and re-entering data. For example, I want to set up a number of billing alerts in each new account, sometimes at $10 increments, and sometimes at $100 or $1000 increments. The steps below let me do this in seconds with a simple copy and paste.
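
The shape of those steps, sketched with today's aws-cli (the topic name, account id, email address, and threshold are placeholders; billing metrics are only available in us-east-1):

# Create an SNS topic and subscribe an email address to it
aws sns create-topic --region us-east-1 --name billing-alert
aws sns subscribe --region us-east-1 \
  --topic-arn arn:aws:sns:us-east-1:123456789012:billing-alert \
  --protocol email \
  --notification-endpoint you@example.com

# Alarm when the estimated monthly charges pass $100
aws cloudwatch put-metric-alarm --region us-east-1 \
  --alarm-name billing-100 \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alert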

Cost of Transitioning S3 Objects to Glacier

how I was surprised by a large AWS charge and how to calculate the break-even point

Glacier Archival of S3 Objects

Amazon recently introduced a fantastic new feature where S3 objects can be automatically migrated over to Glacier storage based on the S3 bucket, the key prefix, and the number of days after object creation.

This makes it trivially easy to drop files in S3, have fast access to them for a while, then have them automatically saved to long-term storage where they can’t be accessed as quickly, but where the storage charges are around a tenth of the price.

…or so I thought.
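
Here is the shape of the break-even arithmetic behind the surprise, using assumed illustrative prices (check current rates before relying on these):

# Assumed prices for illustration:
#   S3 standard storage:  $0.095 per GB-month
#   Glacier storage:      $0.010 per GB-month
#   Transition requests:  $0.05 per 1,000 objects = $0.00005 per object
#
# Monthly savings for an object of size S GB: S * (0.095 - 0.010)
# Months to recoup the transition fee: 0.00005 / (S * 0.085)
#
#   1 GB object: pays for the transition almost immediately
#   1 MB object: roughly 0.6 months
#   1 KB object: roughly 590 months (about 49 years)
#
# Glacier also stores ~32 KB of index plus ~8 KB of metadata per object,
# which makes very small objects even less economical.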

Running Ubuntu on Amazon EC2 in Sydney, Australia

Amazon has announced a new AWS region in Sydney, Australia, with the name ap-southeast-2.

The official Ubuntu AMI lookup pages (1, 2) don’t seem to be showing the new location yet, but the official Ubuntu AMI query API does seem to be working, so the new ap-southeast-2 Ubuntu AMIs are available for lookup on Alestic.com.
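
For example, a quick lookup against the query API (a sketch; the whitespace-separated column layout of the query files is assumed here):

curl -s https://cloud-images.ubuntu.com/query/precise/server/released.current.txt |
  awk '$5 == "ebs" && $6 == "amd64" && $7 == "ap-southeast-2" {print $8}'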

[Update 2012-11-13: Canonical has fixed the primary Ubuntu AMI lookup page and I understand it should remain more up to date going forward, but the other page is still missing ap-southeast-2]

Point and Click

At the top right of most pages on Alestic.com is an “Ubuntu AMIs” section. Simply select the EC2 region from the pulldown (say “ap-southeast-2” for Sydney, Australia) and you will see a list of the official 64-bit Ubuntu AMI ids for the various active Ubuntu releases.

Save Money by Giving Away Unused Heavy Utilization Reserved Instances

You may be able to save on future EC2 expenses by selling an unused Reserved Instance for less than its true value, or even for $0.01, provided it is in the “Heavy Utilization” class.

The description of the Heavy Utilization Reserved Instance includes this statement:

you pay […] a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase [emphasis added]

What may not be clear to the casual reader is that when you purchase a Heavy Utilization Reserved Instance, you commit not only to paying the one-time upfront cost, but also to paying the hourly charge for every hour of every month, even if you are not running a matching instance!
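
A quick illustration with made-up numbers: at a hypothetical $0.02 hourly fee with a year remaining on the term, that is $0.02 x 24 x 365, or roughly $175, owed whether or not an instance ever runs. Selling the Reserved Instance, even for $0.01, ends that obligation.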

The Light Utilization and Medium Utilization descriptions state:

Installing AWS Command Line Tools from Amazon Downloads

This article describes how to install the old generation of AWS command line tools. For the most part, these have been replaced by the new aws-cli, which is easier to install and more comprehensive:

When you need an AWS command line toolset not provided by Ubuntu packages, you can download the tools directly from Amazon and install them locally.

In a previous article I provided instructions on how to install AWS command line tools using Ubuntu packages. That method is slightly easier to set up and easier to upgrade when Ubuntu releases updates. However, the Ubuntu packages aren’t always up to date with the latest from Amazon, and there are not yet Ubuntu packages published for all of the AWS command line tools you might want to use.

Unfortunately, Amazon does not have one single place where you can download all the command line tools for the various services, nor are all of the tools installed in the same way, nor do they all use the same format for accessing the AWS credentials.

The following steps show how I install and configure the AWS command line tools provided by Amazon when I don’t use the packages provided by Ubuntu.
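
The general pattern looks something like this sketch for the EC2 API tools (the download URL and paths are from memory; each toolset differs in detail):

# Download and unpack one toolset (EC2 API tools shown as an example)
wget https://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
sudo mkdir -p /usr/local/aws
sudo unzip ec2-api-tools.zip -d /usr/local/aws

# Point the environment at the tools and your credentials
export JAVA_HOME=/usr/lib/jvm/default-java
export EC2_HOME=$(echo /usr/local/aws/ec2-api-tools-*)
export PATH=$PATH:$EC2_HOME/bin
export AWS_ACCESS_KEY=your-access-key-id
export AWS_SECRET_KEY=your-secret-access-key

ec2-describe-regions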

Convert Running EC2 Instance to EBS-Optimized Instance with Provisioned IOPS EBS Volumes

Amazon just announced two related features for getting super-fast, consistent performance with EBS volumes: (1) Provisioned IOPS EBS volumes, and (2) EBS-Optimized Instances.

Starting new instances and EBS volumes with these features is fine, but what if you already have some running instances you’d like to upgrade for faster and more consistent disk performance?

Given the two AWS features, there are two separate powers that need to be engaged to take full advantage:

  1. Convert the standard EBS volume(s) into new Provisioned IOPS EBS volume(s).

  2. Convert the standard EC2 instance into an EBS-Optimized instance.

This article demonstrates how to take an existing EBS boot instance that is already running and convert it to use both of these two EBS performance features. Note that there will be some increased costs; please study Amazon’s published pricing before attempting.
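
At a high level, and sketched with today's aws-cli for illustration (the original post predates aws-cli; instance and volume ids are placeholders):

# 1. Standard volumes could not be changed in place at the time: snapshot
#    the volume, create a Provisioned IOPS (io1) volume from the snapshot,
#    then detach the old volume and attach the new one in its place
aws ec2 create-snapshot --volume-id vol-12345678
aws ec2 create-volume --snapshot-id snap-12345678 \
  --volume-type io1 --iops 1000 --availability-zone us-east-1a

# 2. Flip the stopped instance to EBS-Optimized, then start it again
aws ec2 stop-instances --instance-ids i-12345678
aws ec2 modify-instance-attribute --instance-id i-12345678 --ebs-optimized
aws ec2 start-instances --instance-ids i-12345678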

Which EC2 Availability Zone is Affected by an Outage?

Did you know that Amazon includes status messages about the health of availability zones in the output of the ec2-describe-availability-zones command, the associated API call, and the AWS console?

Right now, Amazon is restoring power to a “large number of instances” in one availability zone in the us-east-1 region due to “electrical storms in the area”.

Since the names used for specific availability zones differ between AWS accounts, Amazon can’t just say that the affected zone is us-east-1c as it might be us-east-1e in another account.

During this outage, you can find the name of the affected availability zone in your AWS account by running this command (installation instructions):
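
That command, along with its aws-cli equivalent (both include any zone status messages in their output):

ec2-describe-availability-zones --region us-east-1

aws ec2 describe-availability-zones --region us-east-1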