September 2010 Archives

The Ubuntu 10.04 Lucid AMIs for Amazon EC2 dated 20100923 have a known bug which causes the mountall process to spin, consuming CPU, when the instance is rebooted.

You can observe this by starting a Lucid instance, running sudo reboot, and then running top after reconnecting.

Cpu(s): 38.5%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 61.5%st
PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 49 root      20   0  4128 1180  920 R 38.6  0.1   0:08.57 mountall

You’ll see that mountall is using all available CPU. The top command may show this as a low number like 38%, but if you also look at the %st or “percent steal”, you’ll see that the VM host is claiming the remaining real CPU cycles as reserved for other instances on that hardware, resulting in zero percent idle for your instance.

This problem comes from a minor change made to /etc/fstab in the latest AMIs along with an esoteric bug in mountall. Looking at /etc/fstab you’ll see the line:

/dev/sda2       /mnt    auto    defaults,nobootwait,comment=cloudconfig 0       0

The bug manifests itself because "nobootwait" is not the last option in the list. Until the package is fixed, a temporary workaround is to move that option to the end of the list with a command like:

sudo perl -pi -e 's/(nobootwait),(\S+)/$2,$1/' /etc/fstab

resulting in the new line looking like:

/dev/sda2       /mnt    auto    defaults,comment=cloudconfig,nobootwait 0       0

Reboot again, and you’ll see that mountall is now behaving itself.
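
If you want a quick check from the command line after the second reboot, a single batch run of top shows whether idle time has returned and whether mountall is still chewing CPU:

top -b -n 1 | grep -E 'Cpu|mountall'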

This fix only needs to be applied once per instance of the current Lucid AMI.

Don’t forget to terminate the instance if you were just following along to test the procedure.

Eventually, we should see the mountall package fixed in Ubuntu 10.04 Lucid, at which point a simple apt-get update/upgrade should fix it on instances started from the current AMIs. Then, when Ubuntu publishes new Ubuntu 10.04 AMIs for EC2, we won’t have to worry about this workaround ever again.

To follow the progress of the fix for this defect, subscribe to the Launchpad bug #649591.

Thanks to Simon de Boer for alerting folks to the problem on EC2, Scott Moser for submitting the bug report (and publishing updated AMIs), and Colin Watson for making sure the bug gets squashed in mountall in Ubuntu.

Canonical has released an updated series of Ubuntu AMIs for EC2. When starting new EC2 instances, you should use the latest AMI ids to pick up kernel security fixes. If you have Ubuntu 10.04 running on a t1.micro instance type, you should at least upgrade the software packages to get the patch for the rebooting issue:

sudo apt-get update &&
sudo apt-get upgrade

For your convenience, the table at the top of Alestic.com automatically displays the latest AMIs published by Canonical, as well as the AMIs that I published for older versions of Ubuntu.

I frequently fire up a temporary Ubuntu server on Amazon EC2 to test out some package feature, installation process, or other capability where I’m willing to pay a few pennies for a clean install and spare CPU.

I occasionally forget that I started an instance and leave it running for longer than I intended, turning my decision to spend ten cents into a cost of dollars. In one case, I ended up paying several hundred dollars for a super-sized instance I forgot I had running. Yes, ouch.

Because of this pain, I have a habit now of pre-scheduling the termination of my temporary instances immediately after creating them. I used to do this on the instance itself with a command like:

echo "sudo halt" | at now + 55 min

However, this only terminates the instance if its root disk is instance-store (an S3-based AMI). I generally run EBS boot instances now, and a shutdown or halt only “stops” an EBS boot instance by default, which leaves you paying for the EBS boot volume at, say, $1.50/month.
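
As an aside, the EC2 API tools can flip that default on a per-instance basis so that an in-instance shutdown terminates an EBS boot instance instead of stopping it (the instance id here is just the example used below):

ec2-modify-instance-attribute i-eb89bb81 --instance-initiated-shutdown-behavior terminate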

So, my common practice these days is to pre-schedule an instance termination call, generally from my local laptop, using a command like:

echo "ec2kill i-eb89bb81" | at now + 55 min

The at utility runs the commands it reads on stdin with the exact same environment ($PATH, EC2 keys, current working directory, umask, etc.) as the current shell. There are a number of different ways to specify the scheduled time, though the documentation is sparse; at does echo back when it plans to run the commands, so you can check that it understood you.
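
For example, each of these schedules the same termination for a different time, using the same example instance id:

echo "ec2kill i-eb89bb81" | at now + 2 hours
echo "ec2kill i-eb89bb81" | at 17:30
echo "ec2kill i-eb89bb81" | at midnight
echo "ec2kill i-eb89bb81" | at noon tomorrow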

After the commands run, at mails you their output. This gives you an indication of whether the termination succeeded or whether you need to follow up manually.

Here’s an example email I got from the above at command:

Subject: Output from your job      114
Date: Mon, 20 Sep 2010 14:01:05 -0700 (PDT)

INSTANCE    i-eb89bb81  running shutting-down

I already have a personal custom command which starts an instance, waits for it to move to running, waits for the ssh server to accept connections, then connects with ssh. I think I’ll add a --temporary option to do this termination scheduling for me as well.
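
For reference, here’s a rough sketch of what such a wrapper can look like (not my actual script); it uses placeholder AMI and keypair values and assumes the EC2 API tools and netcat are installed:

# Rough sketch: start an instance, wait for "running", wait for sshd, connect.
ami=ami-xxxxxxxx        # placeholder AMI id
keypair=YOURKEYPAIR     # placeholder EC2 keypair name

instance_id=$(ec2-run-instances $ami --instance-type t1.micro --key $keypair |
  grep "^INSTANCE" | cut -f2)

# Wait for the instance to reach the running state and learn its public DNS name.
while true; do
  host=$(ec2-describe-instances $instance_id |
    grep "^INSTANCE" | grep -w running | cut -f4)
  [ -n "$host" ] && break
  sleep 5
done

# Wait for the ssh server to accept connections, then log in.
while ! nc -z -w 5 $host 22; do sleep 5; done
ssh ubuntu@$host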

You can get a list of the currently scheduled at jobs with

at -l

You can see the commands in a specific job id with:

at -c [JOBID]

If you decide along the way that the instance should not be temporary and you want to cancel the scheduled termination, you can delete a given at job with a command like:

at -d [JOBID]

I’ve been thinking of writing something simple that would regularly monitor my AWS/EC2 resources (instances, EBS volumes, EBS snapshots, AMIs, etc.) and alert me if it detects patterns that may indicate I am spending money where I may not have intended to.
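
A crude starting point might be nothing more than a daily cron job that mails an inventory of billable resources (a sketch only, assuming the EC2 API tools and a working local mail command; substitute your own address):

# Sketch: mail a daily inventory of potentially billable EC2 resources.
{
  echo "Running instances:"
  ec2-describe-instances | grep "^INSTANCE" | grep -w running
  echo
  echo "EBS volumes:"
  ec2-describe-volumes | grep "^VOLUME"
} | mail -s "daily AWS resource inventory" you@example.com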

How do you monitor and clean up temporary resources on Amazon AWS/EC2?

A few hours ago, Amazon launched a public preview of AWS Identity and Access Management (IAM), a powerful feature if you have a number of developers who need to access and manage resources in an AWS account. A unique IAM user can be created for each developer, and specific permissions can be doled out as needed.

You can also create IAM users for system functions, dramatically increasing the security of your AWS account in the event a server is compromised. That benefit is the focus of this article using an example frequently cited by EC2 users: Automating EBS snapshots on a local EC2 instance without putting the keys to your AWS kingdom on the file system.

Before the release of AWS IAM, if you wanted to create EBS snapshots in a local cron job on an EC2 instance, you needed to put the master AWS credentials in the file system on that instance. If those AWS credentials were compromised, the attacker could wreak all sorts of havoc with the resources in your AWS account and run up charges on your credit card.

With the launch of AWS IAM, we can create a system IAM user with its own AWS keys and all it is allowed to do is… create EBS snapshots! These keys are placed on the instance and used in the snapshot cron job. Now, an attacker can do very little damage with those keys if they are compromised, and we all feel much safer.

The AWS IAM documentation is required reading and a great reference. This article is only intended to serve as a practical introduction to one simple application of IAM.

These instructions assume you are running Ubuntu 10.04 (Lucid) on both your local system and on Amazon EC2. Adjust as appropriate for other distributions and releases.

IAM Installation

Ubuntu does not yet have an official software package for AWS IAM, so we need to download the IAM command line toolkit from Amazon. This can be done on any machine including your local desktop. The IAM command line tools require Java so we need to make sure that is installed as well.

Eventually, you’ll want to install this software somewhere more permanent, but for this demo, we’ll just use it from a subdirectory.

sudo apt-get install openjdk-6-jre unzip
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
wget http://awsiammedia.s3.amazonaws.com/public/tools/cli/latest/IAMCli.zip
unzip IAMCli.zip
export AWS_IAM_HOME=$(echo $(pwd)/IAMCli-*)
export PATH=$PATH:$AWS_IAM_HOME/bin

The AWS IAM tools require you to save your AWS account’s main access key id and AWS secret access key in yet another file format. Create this AWS credential file as, say, $HOME/.aws-credentials-master.txt in the following format (replacing the values with your own credentials):

AWSAccessKeyId=YOURACCESSKEYIDHERE
AWSSecretKey=YOURSECRETKEYHERE

Note: The above is the sample content of a file you are creating, and not shell commands to run.

Protect the above file and set an environment variable to tell IAM where to find it:

export AWS_CREDENTIAL_FILE=$HOME/.aws-credentials-master.txt
chmod 600 $AWS_CREDENTIAL_FILE

We can now use the iam-* command line tools to create and manage AWS IAM users, groups, and policies.
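
For a quick sanity check that the toolkit and credentials are working, the toolkit’s list commands will show the account’s users and groups (both empty in a fresh account):

iam-userlistbypath
iam-grouplistbypath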

Create IAM User

How you manage your users and groups is sure to be a personal preference that gets fine-tuned over time, but for the purposes of this demo, I’ll propose that we put non-human users into a new group named "system" so they are easy to track.

iam-groupcreate -g system

Create the snapshotter system user, saving the keys to a file:

user=snapshotter
iam-usercreate -u $user -g system -k |
  tee $HOME/.aws-keys-$user.txt
chmod 600 $HOME/.aws-keys-$user.txt

You will want to have this snapshotter keys file on the EC2 instance, so copy it there:

rsync -Paz $HOME/.aws-keys-$user.txt REMOTEUSER@REMOTESYSTEM:

Allow IAM user snapshotter to create EBS snapshots of any EBS volume:

iam-useraddpolicy \
  -p allow-create-snapshot \
  -e Allow \
  -u $user \
  -a ec2:CreateSnapshot \
  -r '*'

There are a lot of preparatory and supporting commands in this article, but take a second to focus on the fact that the core, functional steps are simply the iam-usercreate and iam-useraddpolicy commands above. Two commands and you have a new AWS IAM user with restricted access to your AWS account.
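
To double-check what was granted, the toolkit can also list a user’s policies and print a specific one:

iam-userlistpolicies -u $user
iam-usergetpolicy -u $user -p allow-create-snapshot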

Create EBS Snapshot

For the purposes of this demo, we’ll assume you’re using the ec2-consistent-snapshot tool to create EBS snapshots with a consistent file system and perhaps a consistent MySQL database. (If you’re not using this tool, then you could have simply used ec2-create-snapshot from any computer without having to go through the trouble of creating a new IAM user.)

Make sure you have the latest ec2-consistent-snapshot software installed on the EC2 instance:

sudo add-apt-repository ppa:alestic/ppa
sudo apt-get update
sudo apt-get install ec2-consistent-snapshot

Create the snapshot on the EC2 instance. Adjust options to fit your local EBS volume mount points and MySQL database setup.

sudo ec2-consistent-snapshot \
  --aws-credentials-file $HOME/.aws-keys-snapshotter.txt \
  --xfs-filesystem /YOURMOUNTPOINT \
  YOURVOLUMEID
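
ec2-consistent-snapshot prints the id of the new snapshot. You can watch its progress from any machine that has the EC2 API tools and credentials allowed to describe snapshots (the snapshotter user itself can only create them); the snapshot id here is a placeholder:

ec2-describe-snapshots snap-xxxxxxxx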

Follow similar steps to create users and set policies for other system activities you perform on your EC2 instances. IAM can control access to many different AWS resource types, API calls, and specific resources, and it offers even finer-grained controls, including time-based restrictions.
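
For instance (hypothetical user, policy, and bucket names), a system user that may only upload backups to a single S3 bucket could be set up like this:

iam-usercreate -u backup-uploader -g system -k
iam-useraddpolicy \
  -p allow-put-backup-bucket \
  -e Allow \
  -u backup-uploader \
  -a s3:PutObject \
  -r 'arn:aws:s3:::YOURBACKUPBUCKET/*'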

The release of AWS Identity and Access Management alleviates one of the biggest concerns security-conscious folks used to have when they started using AWS: a single key that gave complete access to and control over all resources. Now the control is entirely in your hands.

Cleanup

If you have followed the steps in this demo and you wish to undo most of what was done, here are some steps for reference.

Delete the IAM user and the IAM group:

iam-userdel -u $user -r
iam-groupdel -g system

Wipe the credentials and keys files and remove the downloaded and unzipped IAM command line toolkit:

sudo apt-get install wipe
wipe $HOME/.aws-credentials-master.txt $HOME/.aws-keys-$user.txt
rm    IAMCli.zip
rm -r $AWS_IAM_HOME

Make sure to wipe the snapshotter key file on the remote EC2 instance as well.
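
Something like the following works from the local system, adjusting the remote user, host, and path to wherever you copied the file (ssh -t allocates a terminal so sudo and wipe’s confirmation prompt behave):

ssh -t REMOTEUSER@REMOTESYSTEM 'sudo apt-get install wipe && wipe .aws-keys-snapshotter.txt'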

Support

If you’re looking for help with AWS IAM, there is a new AWS IAM forum dedicated to the topic.

[Update 2010-11-19: Fix path where new zip file is expanded]
