Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.
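
To give a taste of the consistency, here is a minimal sketch of installing and trying it out, assuming a system with Python's pip available; the service commands shown are just examples:

    # Install aws-cli as a Python package
    sudo pip install awscli

    # Enter credentials and a default region when prompted
    aws configure

    # One tool, one convention, many services
    aws ec2 describe-instances
    aws s3 ls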

Using An AWS CloudFormation Stack To Allow "-" Instead Of "+" In Gmail Email Addresses

Launch a CloudFormation template to set up a stack of AWS resources to fill a simple need: Supporting Gmail addresses with “-” instead of “+” separating the user name from the arbitrary tag strings.

The CloudFormation stack launched by the template consists of:

  • ELB (Elastic Load Balancer)
  • Auto Scaling Group
  • EC2 instance(s) running Postfix on Ubuntu set up by a user-data script
  • Security Group allowing ELB to connect to the instances
  • CloudWatch CPU high/low alarms
  • Auto Scaling scale up/down policies
  • SNS (Simple Notification Service) topic for notification of Auto Scaling events
  • Route53 Record Set

This basic stack structure can be used as a solution for a large number of different needs, but in this example it is set up as an SMTP email relay that filters and translates email addresses for Google Apps for Business customers.

Because it uses Auto Scaling, ELB, and Route53, it is scalable and able to recover from various types of failures.

If you’re in a rush to see code, you can look at the CloudFormation template and the initialization script run from the user-data script.
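
If you prefer the command line, a stack like this can be created with a single call. Here's a rough sketch using the modern aws-cli; the stack name, template file name, and parameter are placeholders rather than the article's actual values:

    # Launch the stack from a local copy of the template
    aws cloudformation create-stack \
      --stack-name gmail-dash-relay \
      --template-body file://relay-template.json \
      --parameters ParameterKey=HostedZone,ParameterValue=example.com

    # Poll until the stack reaches CREATE_COMPLETE
    aws cloudformation describe-stacks --stack-name gmail-dash-relay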

Now, let’s take a more in-depth look at the problem this solves and how to set up the solution.

New Options In ec2-expire-snapshots v0.11

The ec2-expire-snapshots program can be used to expire EBS snapshots in Amazon EC2 on a regular schedule that you define. It can be used as a companion to ec2-consistent-snapshot or independently.

There have been two recent submissions to the code from the community that provide new command line options in the latest version (v0.11) of ec2-expire-snapshots.

  1. Wayne Robinson discovered that EC2 sometimes limits the rate at which you can delete snapshots, and submitted code for a new --delete-delay option that tells ec2-expire-snapshots to pause for N seconds between each EBS snapshot deletion.

  2. Anthony Tonns uses EC2’s new feature to copy EBS snapshots from one region to another for redundancy, and found that Amazon does not associate snapshots from the same EBS volume in the source region with the same source volume in the target region. Anthony came up with the idea of putting the source volume id in a tag and submitted code for a new --volume-id-in-tag option that lets you specify the tag name.
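
A hypothetical invocation combining the two new options might look like the following; the tag name is an arbitrary example, and the rest of the command line (retention rules, volume selection) is elided here:

    # Pause 2 seconds between snapshot deletions, and determine each
    # snapshot's source volume from the tag named "source-volume-id"
    ec2-expire-snapshots \
      --delete-delay 2 \
      --volume-id-in-tag source-volume-id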

Thanks also to varunwy for submitting a patch a while back to clean up some dependencies in the package installation.

On Ubuntu, you can install ec2-expire-snapshots from the Alestic PPA using:
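
    # Assumed commands; the PPA and package names follow the pattern
    # used for other tools published through the Alestic PPA
    sudo add-apt-repository ppa:alestic
    sudo apt-get update
    sudo apt-get install ec2-expire-snapshots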

Email Alerts for AWS Billing Alarms

using CloudWatch and SNS to send yourself email messages when AWS costs accrue past limits you define

The Amazon documentation describes how to use the AWS console to monitor your estimated charges using Amazon CloudWatch and includes some pointers for folks using the command line. Unfortunately, that article leaves out the commands to set up the SNS (Simple Notification Service) topics and SNS subscriptions, so I present here the complete steps I use.

I like using the command line tools as they let me automate and repeat actions without having to do lots of pointing, clicking, and re-entering data. For example, I want to set up a number of billing alerts in each new account, sometimes at $10 increments, and sometimes at $100 or $1000 increments. The steps below let me do this in seconds with a simple copy and paste.
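
As a preview of the approach, here is a condensed sketch using the modern aws-cli instead of the older standalone SNS and CloudWatch tools; the topic name, email address, and $100 threshold are placeholders:

    # Create an SNS topic and subscribe an email address to it
    topic_arn=$(aws sns create-topic --name billing-alert \
      --query TopicArn --output text)
    aws sns subscribe --topic-arn "$topic_arn" \
      --protocol email --notification-endpoint you@example.com

    # Alarm when estimated charges pass $100
    # (billing metrics live in us-east-1)
    aws cloudwatch put-metric-alarm \
      --region us-east-1 \
      --alarm-name billing-over-100 \
      --namespace AWS/Billing --metric-name EstimatedCharges \
      --dimensions Name=Currency,Value=USD \
      --statistic Maximum --period 21600 --evaluation-periods 1 \
      --threshold 100 --comparison-operator GreaterThanThreshold \
      --alarm-actions "$topic_arn"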

Cost of Transitioning S3 Objects to Glacier

how I was surprised by a large AWS charge and how to calculate the break-even point

Glacier Archival of S3 Objects

Amazon recently introduced a fantastic new feature where S3 objects can be automatically migrated over to Glacier storage based on the S3 bucket, the key prefix, and the number of days after object creation.

This makes it trivially easy to drop files in S3, have fast access to them for a while, then have them automatically saved to long-term storage where they can’t be accessed as quickly, but where the storage charges are around a tenth of the price.

…or so I thought.
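
Before getting to the surprise, here is the kind of lifecycle rule involved, sketched with the modern aws-cli; the bucket, prefix, and day count are placeholders:

    # Transition objects under logs/ to Glacier 30 days after creation
    aws s3api put-bucket-lifecycle-configuration \
      --bucket my-example-bucket \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "archive-logs",
          "Filter": {"Prefix": "logs/"},
          "Status": "Enabled",
          "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
        }]
      }'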

Running Ubuntu on Amazon EC2 in Sydney, Australia

Amazon has announced a new AWS region in Sydney, Australia with the name ap-southeast-2.

The official Ubuntu AMI lookup pages (1, 2) don’t seem to be showing the new location yet, but the official Ubuntu AMI query API does seem to be working, so the new ap-southeast-2 Ubuntu AMIs are available for lookup on Alestic.com.
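
For example, you can query the API directly with standard command line tools. This sketch assumes the query file's historical tab-separated layout, so the field positions are my assumption:

    # List 64-bit EBS-boot Precise AMIs in ap-southeast-2
    curl -s http://cloud-images.ubuntu.com/query/precise/server/released.current.txt |
      awk -F'\t' '$5 == "ebs" && $6 == "amd64" && $7 == "ap-southeast-2" {print $8}'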

[Update 2012-11-13: Canonical has fixed the primary Ubuntu AMI lookup page and I understand it should remain more up to date going forward, but the other page is still missing ap-southeast-2]

Point and Click

At the top right of most pages on Alestic.com is an “Ubuntu AMIs” section. Simply select the EC2 region from the pulldown (say “ap-southeast-2” for Sydney, Australia) and you will see a list of the official 64-bit Ubuntu AMI ids for the various active Ubuntu releases.

Save Money by Giving Away Unused Heavy Utilization Reserved Instances

You may be able to save on future EC2 expenses by selling an unused Reserved Instance for less than its true value, or even for $0.01, provided it is in the “Heavy Utilization” class.

The description of the Heavy Utilization Reserved Instance includes this statement:

you pay […] a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase [emphasis added]

What may not be clear to the casual reader is the fact that when you purchase a Heavy Utilization Reserved Instance, you commit not only to paying the one-time up front cost, but you are also committing to paying the hourly charge for every hour of every month, even if you are not running a matching instance!

The Light Utilization and Medium Utilization descriptions state:

Installing AWS Command Line Tools from Amazon Downloads

This article describes how to install the old generation of AWS command line tools. For the most part, these have been replaced with the new aws-cli, which is easier to install and more comprehensive:

When you need an AWS command line toolset not provided by Ubuntu packages, you can download the tools directly from Amazon and install them locally.

In a previous article I provided instructions on how to install AWS command line tools using Ubuntu packages. That method is slightly easier to set up and easier to upgrade when Ubuntu releases updates. However, the Ubuntu packages aren’t always up to date with the latest from Amazon, and there are not yet Ubuntu packages published for every AWS command line tool you might want to use.

Unfortunately, Amazon does not have one single place where you can download all the command line tools for the various services, nor are all of the tools installed in the same way, nor do they all use the same format for accessing the AWS credentials.

The following steps show how I install and configure the AWS command line tools provided by Amazon when I don’t use the packages provided by Ubuntu.
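
The general pattern looks something like this sketch, using the EC2 API tools as the example; the download URL, Java path, and directory layout are assumptions that vary by tool and system:

    # Download and unpack one of Amazon's tool zip files
    wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
    sudo mkdir -p /usr/local/aws
    sudo unzip ec2-api-tools.zip -d /usr/local/aws

    # Tell the tools where to find Java, themselves, and your credentials
    export JAVA_HOME=/usr/lib/jvm/default-java
    export EC2_HOME=$(ls -d /usr/local/aws/ec2-api-tools-*)
    export PATH=$PATH:$EC2_HOME/bin
    export AWS_ACCESS_KEY=your-access-key-id
    export AWS_SECRET_KEY=your-secret-access-key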

Convert Running EC2 Instance to EBS-Optimized Instance with Provisioned IOPS EBS Volumes

Amazon just announced two related features for getting super-fast, consistent performance with EBS volumes: (1) Provisioned IOPS EBS volumes, and (2) EBS-Optimized Instances.

Starting new instances and EBS volumes with these features is fine, but what if you already have some running instances you’d like to upgrade for faster and more consistent disk performance?

To take full advantage of these two AWS features, there are two separate conversions to perform:

  1. Convert the standard EBS volume(s) into new Provisioned IOPS EBS volume(s).

  2. Convert the standard EC2 instance into an EBS-Optimized instance.

This article demonstrates how to take an existing EBS boot instance that is already running and convert it to use both of these two EBS performance features. Note that there will be some increased costs; please study Amazon’s published pricing before attempting.
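
In outline, and sketched here with the modern aws-cli rather than the tools available when this was written, the conversion looks like the following; all ids, the availability zone, the device name, and the IOPS value are placeholders:

    # 1. Stop the instance and snapshot its EBS volume
    aws ec2 stop-instances --instance-ids i-11111111
    aws ec2 create-snapshot --volume-id vol-11111111

    # 2. Create a Provisioned IOPS volume from the snapshot and swap it in
    aws ec2 create-volume --snapshot-id snap-11111111 \
      --availability-zone us-east-1a --volume-type io1 --iops 1000
    aws ec2 detach-volume --volume-id vol-11111111
    aws ec2 attach-volume --volume-id vol-22222222 \
      --instance-id i-11111111 --device /dev/sda1

    # 3. Mark the instance EBS-Optimized and start it again
    aws ec2 modify-instance-attribute --instance-id i-11111111 --ebs-optimized
    aws ec2 start-instances --instance-ids i-11111111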

Which EC2 Availability Zone is Affected by an Outage?

Did you know that Amazon includes status messages about the health of availability zones in the output of the ec2-describe-availability-zones command, the associated API call, and the AWS console?

Right now, Amazon is restoring power to a “large number of instances” in one availability zone in the us-east-1 region, following “electrical storms in the area”.

Since the names used for specific availability zones differ between AWS accounts, Amazon can’t just say that the affected zone is us-east-1c as it might be us-east-1e in another account.

During this outage, you can find out what the name of the affected availability zone is in your AWS account by running this command (installation instructions):
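
    # The command named above; pass the affected region explicitly if it
    # is not your default (us-east-1 in this outage)
    ec2-describe-availability-zones --region us-east-1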

Installing AWS Command Line Tools Using Ubuntu Packages

See also: Installing AWS Command Line Tools from Amazon Downloads

Here are the steps for installing the AWS command line tools that are currently available as Ubuntu packages. These include:

  • EC2 API tools
  • EC2 AMI tools
  • IAM - Identity and Access Management
  • RDS - Relational Database Service
  • CloudWatch
  • Auto Scaling
  • ElastiCache

Starting with Ubuntu 12.04 LTS Precise, these are also available:

  • CloudFormation
  • ELB - Elastic Load Balancer

Install Packages
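
A condensed sketch; I only list the two package names I am certain of here, since the other tool sets have their own packages (apt-cache search will find them):

    # These packages come from the multiverse component
    sudo apt-get update
    sudo apt-get install -y ec2-api-tools ec2-ami-tools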

Ubuntu Developer Summit, May 2012 (Oakland)

I will be attending the Ubuntu Developer Summit (UDS) next week in Oakland, CA.  This event brings people from around the world together in one place every six months to discuss and plan for the next release of Ubuntu.  The May 2012 UDS is for Ubuntu-Q which will eventually be named and become Ubuntu 12.10 when it is released in October (2012-10).

Uploading Known ssh Host Key in EC2 user-data Script

The ssh protocol uses two different keys to keep you secure:

  1. The user ssh key is the one we normally think of. This authenticates us to the remote host, proving that we are who we say we are and allowing us to log in.

  2. The ssh host key gets less attention, but is also important. This authenticates the remote host to our local computer, ensuring that the encrypted ssh session cannot be silently intercepted by a man-in-the-middle.

Every time you see a prompt like the following, ssh is checking the host key and asking you to make sure that your session is going to be encrypted securely.

The authenticity of host 'ec2-...' can't be established.
ECDSA key fingerprint is ca:79:72:ea:23:94:5e:f5:f0:b8:c0:5a:17:8c:6f:a8.
Are you sure you want to continue connecting (yes/no)? 

If you answer “yes” without verifying that the remote ssh host key fingerprint is the same, then you are basically saying:

I don’t need this ssh session encrypted. It’s fine for any man-in-the-middle to intercept the communication.

Ouch! (But a lot of people do this.)
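
The approach this article builds toward is to generate the host key yourself, so that you can seed your local known_hosts before the instance boots and install the same key on the instance through the user-data script. A rough sketch of the local half, with placeholder file and host names:

    # Generate an ssh host key pair locally
    ssh-keygen -q -t ecdsa -N '' -f ecdsa_host_key

    # Pre-seed known_hosts before the instance even exists
    echo "ec2-198-51-100-10.compute-1.amazonaws.com $(cat ecdsa_host_key.pub)" \
      >> ~/.ssh/known_hosts

    # The private half (ecdsa_host_key) then gets installed as
    # /etc/ssh/ssh_host_ecdsa_key by the instance's user-data script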

Seeding Torrents with Amazon S3 and s3cmd on Ubuntu

Amazon Web Services is such a huge, complex service with so many products and features that sometimes very simple but powerful features fall through the cracks when you’re reading the extensive documentation.

One of these features, which has been around for a very long time, is the ability to use AWS to seed (serve) downloadable files using the BitTorrent™ protocol. You don’t need to run EC2 instances and set up software. In fact, you don’t need to do anything except upload your files to S3 and make them publicly available.

Any file available for normal HTTP download in S3 is also available for download through a torrent. All you need to do is append the string ?torrent to the end of the URL and Amazon S3 takes care of the rest.

Steps

Let’s walk through uploading a file to S3 and accessing it with a torrent client using Ubuntu as our local system. This approach uses s3cmd to upload the file to S3, but any other S3 software can get the job done, too.
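
A sketch of those steps, with placeholder bucket and file names:

    # Install and configure s3cmd with your AWS credentials
    sudo apt-get install s3cmd
    s3cmd --configure

    # Create a bucket and upload a file with public-read access
    s3cmd mb s3://my-example-bucket
    s3cmd put --acl-public big-file.iso s3://my-example-bucket/big-file.iso

    # Normal download URL:
    #   http://my-example-bucket.s3.amazonaws.com/big-file.iso
    # Torrent version -- just append ?torrent:
    #   http://my-example-bucket.s3.amazonaws.com/big-file.iso?torrent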

Use the Same Architecture (64-bit) on All EC2 Instance Types

A few hours ago, Amazon AWS announced that all EC2 instance types can now run 64-bit AMIs.

Though t1.micro, m1.small, and c1.medium will continue to also support 32-bit AMIs, it is my opinion that there is virtually no reason to use 32-bit instances on EC2 any more.

This is fantastic news!