EBS-SSD Boot AMIs For Ubuntu On Amazon EC2

With Amazon’s announcement that SSD is now available for EBS volumes, they have also declared this the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.

You Should Use EBS Boot Instances on Amazon EC2

EBS boot vs. instance-store

If you are just getting started with Amazon EC2, then use EBS boot instances and stop reading this article. Forget that you ever heard about instance-store and accept my apology that I just mentioned it. Once you are completely comfortable with using EBS boot instances on EC2, you may (or may not) want to come back here and read why you made a good decision.

EC2 experts may find that there are specific cases, few and far between, where instance-store might make sense, but they don’t attempt to use instance-store without understanding and accounting for all the serious drawbacks and dangers that go with making this choice. For example, experts using instance-store don’t mind losing all of the data on the instance as they have designed the system so that the data is stored elsewhere and so that a new instance can easily and automatically be rebuilt from scratch.

One of the challenges for beginners is that many of the benefits of EBS boot don’t necessarily seem like something you’ll need right away. Then, down the road, they find themselves in situations where they realize they would have been much better off with EBS boot in the first place, and the transition may take some work.

Big benefits of EBS boot instances

Here are some of the reasons I use and recommend EBS boot instances. None of these benefits are available with instance-store, so even a single one of these can be an overriding factor for choosing EBS boot.

Updated EBS boot AMIs for Ubuntu 8.04 Hardy on Amazon EC2 (2011-10-06)

Canonical has released updated instance-store AMIs for Ubuntu 8.04 LTS Hardy on Amazon EC2. Read Ben Howard’s announcement on the ec2ubuntu Google group.

Rebooting vs. Stop/Start of Amazon EC2 Instance

When you reboot a physical computer at your desk, it is very similar to shutting down the system and booting it back up. With Amazon EC2, rebooting an instance is much the same as with a local physical computer, but a stop/start differs in a few key ways that may cause some problems and definitely has some benefits.

When you stop an EBS boot instance you are giving up the physical hardware that the server was running on and EC2 is free to start somebody else’s instance there.

Your EBS boot volume (and other attached EBS volumes) are still preserved, though they aren’t really tied to a physical or virtual server. They are just associated with an instance id that isn’t running anywhere.

When you start the instance again, EC2 picks some hardware to run it on, ties in the EBS volume(s) and boots it up again.
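
To make that concrete, here is a minimal sketch using the EC2 API command line tools, with a hypothetical instance id for an EBS boot instance:

instanceid=i-XXXXXXXX   # hypothetical EBS boot instance id
ec2-stop-instances $instanceid
while ec2-describe-instances $instanceid | egrep -q stopping; do
  sleep 5
done
ec2-start-instances $instanceid
# The instance now runs on different hardware; note that the public DNS
# name and public IP address generally change across a stop/start.
ec2-describe-instances $instanceid | egrep ^INSTANCE | cut -f4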

Things that change when you stop/start include:

Updated EBS boot AMIs for Ubuntu 8.04 Hardy on Amazon EC2

For folks still using the old, reliable Ubuntu 8.04 LTS Hardy from 2008, Canonical has released updated AMIs for use on Amazon EC2. Read Scott Moser’s announcement on the ec2ubuntu Google group.

Though Canonical publishes both EBS boot and instance-store for recent Ubuntu releases, they only publish instance-store AMIs for the older Ubuntu 8.04, so…

My Experience With the EC2 Judgment Day Outage

Amazon designs availability zones so that it is extremely unlikely that a single failure will take out multiple zones at once. Unfortunately, whatever happened last night seemed to cause problems in all us-east-1 zones for one of my accounts.

Of the 20+ full time EC2 instances that I am responsible for (including my startup and my personal servers) only one instance was affected. As it turns out it was the one that hosts Alestic.com along with some other services that are important to me personally.

Here are some steps I took in response to the outage:

Fixing Files on the Root EBS Volume of an EC2 Instance

You can examine and edit files on the root EBS volume of an EC2 instance even if you are in what you might consider a disastrous situation, like:

  • You lost your ssh key or forgot your password

  • You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it

  • Your long-running instance is hung for some reason, cannot be contacted, and fails to boot properly

  • You need to recover files off of the instance but cannot get to it

On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.

A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.

The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.

In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:
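
A rough sketch of the idea, using the EC2 API command line tools (the instance ids and the /dev/sdj attachment point are hypothetical, and a second working instance in the same availability zone is assumed):

badinstance=i-XXXXXXXX      # hypothetical: the instance you cannot get into
rescueinstance=i-YYYYYYYY   # hypothetical: a working instance in the same zone

# Stop the broken instance and move its root volume to the rescue instance
ec2-stop-instances $badinstance
rootvol=$(ec2-describe-instances $badinstance |
  grep ^BLOCKDEVICE | grep /dev/sda1 | cut -f3)
ec2-detach-volume $rootvol
# (wait for the volume to show as "available" before attaching it elsewhere)
ec2-attach-volume -i $rescueinstance -d /dev/sdj $rootvol

# On the rescue instance: mount the volume, fix the files, then unmount
#   sudo mkdir -p /fix && sudo mount /dev/sdj /fix   # may appear as /dev/xvdj
#   ... edit /fix/etc/sudoers, restore ssh keys, copy data off, etc. ...
#   sudo umount /fix

# Move the repaired volume back and start the original instance
ec2-detach-volume $rootvol
ec2-attach-volume -i $badinstance -d /dev/sda1 $rootvol
ec2-start-instances $badinstance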

A Simpler Way To Replace Instance Hardware on EC2

A while back I wrote an article describing a way to move the root EBS volume from one running instance to another. I pitched this as a way to replace the hardware for your instance in the event of failures.

Since then, I have come to the realization that there is a much simpler method to move your instance to new hardware, and I have been using this new method for months when I run into issues that I suspect might be attributed to underlying hardware issues.

This method is so simple that I am almost embarrassed about having written the previous article, but I’ll point out below at least one benefit that still exists with the more complicated approach.

I now use this process as the second step (after a simple reboot) when I am experiencing odd problems like not being able to connect to a long-running EC2 instance. (The zeroth step is to start running and setting up a replacement instance in the event that steps one and two do not produce the desired results.)

Here goes…

Copying EBS Boot AMIs Between EC2 Regions

Update: Since this article was written, Amazon has released the ability to copy EBS boot AMIs between regions using the web console, command line, and API. You may still find information of use in this article, but Amazon has solved some of the harder parts for you.
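
For example, a cross-region copy is now roughly a one-liner. Here is a minimal sketch using the newer AWS CLI rather than the EC2 API tools used elsewhere on this site; the AMI id, regions, and name are placeholders:

aws ec2 copy-image \
  --source-region us-east-1 \
  --source-image-id ami-xxxxxxxx \
  --region us-west-2 \
  --name "my-ami-copy" \
  --description "Copy of ami-xxxxxxxx from us-east-1"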

Using Amazon EC2, you created an EBS boot AMI and it’s working fine, but now you want to run instances of that AMI in a different EC2 region. Since AMIs are region specific, you need a copy of the image in each region where instances are required.

This article presents one method you can use to copy an EBS boot AMI from one EC2 region to another.

Move a Running EBS Boot Instance to New Hardware on Amazon EC2

NOTE: Though this method works and there is useful information in this article about things you can do with EBS boot instances, there is a simpler way to move an instance to new hardware.

Amazon EC2 has been experiencing some power issues in a portion of one of their many data centers. Even though the relative percentage of people affected might be small, when you have as many customers as AWS does, a small fraction can still be a large absolute number of customers who are affected.

Naturally, some customers will be upset about not having access to their systems, but in the time it takes to write a complaint, you might be able to move your server to new hardware within EC2 and go on with your business.

First, let’s assume that you are running an EBS boot instance. If you didn’t think EBS boot instances were the way to go before reading this article, I expect to convince you with this one example (and there are a number of other benefits).

Setup

For this demo, I’m going to start an instance. In your situation this instance represents the currently running server that you are depending on and which has valuable software, configuration, and perhaps data on its EBS volume(s). I’m also going to drop a note on the EBS root disk on my running instance so that I know it is the one I wanted to preserve. Again, this is just setting things up for the demo:

# SKIP THIS ENTIRE SECTION IF YOU ALREADY HAVE AN INSTANCE RUNNING
keypair=YOURKEYPAIR
sshkey=.ssh/$keypair.pem # or wherever you keep it
region=us-west-1 # pick your region
zone=${region}a # pick your availability zone
type=m1.small # pick your size
amiid=ami-cb97c68e # Ubuntu 10.04 Lucid, 32-bit, EBS boot in that region
oldinstanceid=$(ec2-run-instances \
  --key $keypair \
  --region $region \
  --availability-zone $zone \
  --instance-type $type \
  $amiid |
  egrep ^INSTANCE | cut -f2)
echo "instanceid=$oldinstanceid"

while host=$(ec2-describe-instances --region $region "$oldinstanceid" | 
  egrep ^INSTANCE | cut -f4) && test -z $host; do echo -n .; sleep 1; done
echo host=$host

echo "save the volume" | ssh -i $sshkey ubuntu@$host tee README.txt

Moving to a New Instance

Now, we pretend that the above created instance has failed in some way and we can no longer access it. Here’s how we get our server running on a new instance:

  1. If you can, stop the old instance. This increases your chance of keeping the file system consistent on the EBS volume. If the instance has really failed and this step does not work, then skip it.

     ec2-stop-instances --region $region $oldinstanceid
    
  2. Run a new instance with the same startup parameters as your old instance.

     newinstanceid=$(ec2-run-instances \
       --key $keypair \
       --region $region \
       --availability-zone $zone \
       --instance-type $type \
       $amiid |
       egrep ^INSTANCE | cut -f2)
     echo "newinstanceid=$newinstanceid"
    
  3. Wait until the new instance is running and then stop (not terminate) the new instance and detach the EBS boot volume from it. Delete the volume as it has nothing of importance, having just been created.

     ec2-stop-instances --region $region $newinstanceid
     newebsroot=$(ec2-describe-instances --region $region $newinstanceid |
       grep ^BLOCKDEVICE | grep /dev/sda1 | cut -f3)
     ec2-detach-volume --force --region $region $newebsroot
     ec2-delete-volume --region $region $newebsroot
    
  4. Detach the old (valuable) EBS root volume from the old (broken) instance.

     oldebsroot=$(ec2-describe-instances --region $region $oldinstanceid |
       grep ^BLOCKDEVICE | grep /dev/sda1 | cut -f3)
     ec2-detach-volume --force --region $region $oldebsroot
    
  5. Attach the old (valuable) EBS root volume to the new (stopped) instance.

     ec2-attach-volume \
       --region $region \
       -d /dev/sda1 \
       -i $newinstanceid \
       $oldebsroot
    

    If you had multiple EBS volumes attached to the old instance, you would move each one over in a similar manner.

  6. Restart the new instance which is now going to boot with the original volume.

     ec2-start-instances --region $region $newinstanceid
    

Voila! You have moved your server from an old, perhaps broken instance to new (or at least different) hardware keeping the same file system, and it took only a few minutes! If you’d like, ssh to the new instance and make sure that your valuable information is still there.

If you had an Elastic IP address associated with the old instance, you would move it to the new instance.
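
For example, with a hypothetical Elastic IP address:

ec2-associate-address --region $region -i $newinstanceid 203.0.113.10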

Cleanup

You may terminate the old instance if you are comfortable that you won’t need it any more. If you were following this demo as an exercise, you should also terminate the new instance. Since you manually attached the old volume to the new instance yourself, it will not be deleted automatically when the instance is terminated. You can modify the instance attributes to change the delete-on-termination flag for the volume or simply delete it manually.

# BEWARE! Don't copy these blindly, but think about what you should do
ec2-terminate-instances --region $region $oldinstanceid
ec2-terminate-instances --region $region $newinstanceid
ec2-delete-volume --region $region $oldebsroot

Tips

This above process can also be used when your instance is running fine, but you want to move to a different instance type (size) of the same architecture. For example, you could move from m1.small up to c1.medium, or from m2.4xlarge down to c1.xlarge. Update: I wasn’t thinking clearly when I wrote that last sentence. It is possible to change the instance type much more easily: Simply stop the instance, use ec2-modify-instance-attribute, and start it up again. Read more about this method.
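
A quick sketch of that simpler approach, with a hypothetical instance id and target instance type:

ec2-stop-instances --region $region $instanceid
ec2-modify-instance-attribute --region $region --instance-type c1.medium $instanceid
ec2-start-instances --region $region $instanceid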

You can also resize the root disk of a running EC2 instance using the same basic principle of swapping out an EBS root volume on a running instance.

[Update 2011-02-07: Point out simpler method to move to new hardware.]
[Update 2012-01-08: Link to article on changing instance type.]

Upgrading Servers to Ubuntu 10.04 Lucid on Amazon EC2

Ubuntu 10.04 Lucid is expected to be released this Thursday (April 29, 2010). This is the first LTS (long term support) release since Ubuntu 8.04 Hardy, which has itself been a stable and reliable server platform for years both inside and outside of Amazon EC2.

If you’re starting a new server project on EC2, Ubuntu 10.04 Lucid would be a great release to start with. It will be supported for five years with security patches and major bug fixes.

If you are already running a different Ubuntu release on EC2, you should consider planning an upgrade to Lucid. Here are my thoughts for each release you might currently be running on EC2:

  • Ubuntu 8.04 Hardy should continue to be fairly stable for a while, though there does not appear to be a lot of EC2-specific development being applied to the AMIs, so you may want to upgrade to Lucid LTS at your leisure. You will probably want to start new instances, though with Scott’s comment below, it might be possible to upgrade in-place on EBS boot from Hardy to Lucid. If you happen to still be running the Alestic Hardy AMIs, you should at least consider upgrading to the Hardy AMIs published by Canonical and listed at the top of Alestic.com.

  • Ubuntu 8.10 Intrepid has reached its end of life and should no longer be used on EC2. If you are still running Intrepid, you should immediately upgrade to Karmic or Lucid by starting new instances.

  • Ubuntu 9.04 Jaunty on EC2 is still running on an old Fedora 8 kernel and is not supported by Ubuntu. I’ve been recommending an upgrade from Jaunty for a while. I continue to recommend upgrading to Karmic (if you want something that has been tested in production for a while) or Lucid (once you’re comfortable with the release).

  • Ubuntu 9.10 Karmic still works great on EC2, though there are a couple bothersome bugs (like hostname reset on reboot) that should be fixed by upgrading to Lucid. If you are running an EBS boot Karmic instance, then it is possible to upgrade in place, but it’s also fine to simply start new instances.

  • Ubuntu 10.04 Lucid pre-release instances should be upgraded to the released version. If you are running EBS boot Lucid, then it is possible to upgrade in place even if a newer kernel and ramdisk are available.

Scott Moser of Canonical has laid out the steps necessary to upgrade an EC2 instance in place without having to start a new EC2 instance. This only works for instances that are currently running an EBS boot pre-release Lucid AMI or an EBS boot 9.10 Karmic AMI. All other servers will need to be upgraded by starting new Lucid instances to replace the old instances. Fortunately, EC2 makes that easy.
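
For reference, the Ubuntu side of an in-place upgrade is the standard release-upgrade procedure; this is only a sketch, and the EC2-specific kernel handling that makes it safe on EBS boot instances is covered in Scott’s announcement:

sudo apt-get update
sudo apt-get install -y update-manager-core
sudo do-release-upgrade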

In fact, EC2 makes starting new replacement instances so convenient that I would recommend following that procedure in most cases even if you think you can upgrade a server in place. If you encounter any trouble getting your service to run on Lucid, running a new instance makes it straightforward to fall back to the old server. And, once you are happy with the new server, make it your production system, terminate the old one, and don’t look back.

Welcome to the new world of cloud computing where you can instantly acquire and discard server hardware for mere dollars without worrying about large investments, complicated provisioning, and long term support costs.

I will add the new Lucid AMI ids to the table at the top of Alestic.com as soon as I get word that they are ready.

Resizing the Root Disk on a Running EBS Boot EC2 Instance

In a previous article I described how to run an EBS boot AMI with a larger root disk size than the default. That’s fine if you know the size you want before running the instance, but what if you have an EC2 instance already running and you need to increase the size of its root disk without running a different instance?

As long as you are ok with a little down time on the EC2 instance (few minutes), it is possible to change out the root EBS volume with a larger copy, without needing to start a new instance.

Let’s walk through the steps on a sample Ubuntu 16.04 LTS Xenial HVM instance. I tested this with ami-40d28157 but check for the latest AMI ids.

On the instance we check the initial size of the root file system (8 GB):

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  913M  6.5G  13% /
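
The rest of the walk-through follows the swap-the-root-volume pattern described above. Here is a condensed sketch with hypothetical ids (the instance needs a brief stop, and the new volume must be created in the same availability zone):

instanceid=i-xxxxxxxx   # hypothetical: the running instance
zone=us-east-1a         # hypothetical: its availability zone
newsize=20              # desired root disk size in GB

ec2-stop-instances $instanceid
oldvol=$(ec2-describe-instances $instanceid |
  grep ^BLOCKDEVICE | grep /dev/sda1 | cut -f3)
snapid=$(ec2-create-snapshot $oldvol | cut -f2)
while ec2-describe-snapshots $snapid | grep -q pending; do sleep 5; done
newvol=$(ec2-create-volume --snapshot $snapid --size $newsize \
  --availability-zone $zone | cut -f2)
ec2-detach-volume $oldvol
ec2-attach-volume -i $instanceid -d /dev/sda1 $newvol
ec2-start-instances $instanceid

# On the instance, grow the partition and file system if cloud-init
# has not already done so on first boot:
#   sudo growpart /dev/xvda 1
#   sudo resize2fs /dev/xvda1
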
Public EBS Boot AMIs for Ubuntu on Amazon EC2

If you’ve been following along, you probably know that I have been recommending that folks using EC2 switch to the official Ubuntu AMIs published by Canonical (Hardy or Karmic). I have been building and publishing Ubuntu AMIs since 2007 (including Dapper, Edgy, Feisty, Gutsy, Hardy, Intrepid, Karmic), but the last year my focus on this project has been to transition these responsibilities to Canonical who have more time and resources to support the initiative.

I’m happy to say that I’ve finally followed my own advice. For my personal Amazon EC2 servers (including for the Alestic.com web site) I am using Ubuntu 9.10 Karmic images published for EC2 by Canonical.

While I was making the transition, I also switched to EBS boot AMIs. However, since it sounds like Canonical is not planning to publish EBS boot AMIs until Lucid, I decided to continue in service to the community and make available EBS boot AMIs for running Ubuntu on EC2.

I have published EBS boot AMIs for Ubuntu 9.10 Karmic and Ubuntu 8.04 Hardy, both 32- and 64-bit architectures, in all current EC2 regions, for a total of a dozen new AMIs.

I chose to use the exact Ubuntu images which Canonical built for running Ubuntu on EC2. This means that these EBS boot AMIs work exactly the same as the official Canonical AMIs including ssh to the ubuntu user. Again, even though I’m publishing the EBS boot AMIs for Karmic and Hardy, the contents of the image were built by Canonical.

The EBS boot AMIs are listed on Alestic.com. I have restructured the table to better feature Canonical AMIs, and now you need to pick an EC2 region to see the IDs.

Give the EBS boot AMIs a spin and let me know if you run into any issues.

Three Ways to Protect EC2 Instances from Accidental Termination and Loss of Data

Here are a few little-publicized benefits that were launched with Amazon EC2’s new EBS boot instances: You can lock them from being accidentally terminated; you can prevent termination even when you try to shutdown the server from inside the instance; and you can automatically save your data storage when they are terminated.

In discussions with other EC2 users, I’ve found a common feeling of near panic when you go to terminate an instance, double-checking very carefully that you are deleting the right one and not an active production system. Slightly less common, but even worse, is the feeling of dread when you realize you just casually terminated the wrong EC2 instance.

It is always recommended to set up your AWS architecture so that you are able to restart production systems on EC2 easily, as they could, in theory, be lost at any point due to hardware failure or other actions. However, new features released with the EBS boot instances help reduce the risks associated with human error.

The following examples will demonstrate with the EC2 API command line tools ec2-run-instances, ec2-modify-instance-attribute, and ec2-terminate-instances. Since these are Amazon web services, all of these features are also available through the API and should become available (if they aren’t already) in other tools, packages, and interfaces that use the web service API.

  1. Shutdown Behavior

First, let’s look at what happens when you run a command like the following in an EC2 instance:

sudo shutdown -h now
# or, equivalently and much easier to type:
sudo halt

Using the legacy S3 based AMIs, either of the above terminates the instance and you lose all local and ephemeral storage (boot disk and /mnt) forever. Hope you remembered to save the important stuff elsewhere!

A shutdown from within an EBS boot instance, by default, will initiate a “stop” instead of a “terminate”. This means that your instance is not in a running state and not getting charged, but the EBS volumes still exist and you can “start” the same instance id again later, losing nothing.

You can explicitly set or change this with the --instance-initiated-shutdown-behavior option in ec2-run-instances. For example:

ec2-run-instances \
  --instance-initiated-shutdown-behavior stop \
  [...]

This is the first safety precaution and, as mentioned, should already be built in, though it doesn’t hurt to document your intentions with an explicit option.

  2. Delete on Termination

Though EBS volumes created and attached to an instance at instantiation are preserved through a “stop”/“start” cycle, by default they are destroyed and lost when an EC2 instance is terminated. This behavior can be changed with a delete-on-termination boolean value buried in the documentation for the --block-device-mapping option of ec2-run-instances.

Here is an example that says “Don’t delete the root EBS volume when this instance is terminated”:

ec2-run-instances \
  --block-device-mapping /dev/sda1=::false \
  [...]

If you are associating other EBS snapshots with the instance at run time, you can also specify that those created EBS volumes should be preserved past the lifetime of the instance:

  --block-device-mapping /dev/sdh=SNAPSHOTID::false

When you use these options, you’ll need to manually clean up the EBS volume(s) if you no longer want to pay for the storage costs after an instance is gone.
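
For example, after the instance is gone, a preserved volume shows up as “available” and can be deleted explicitly (the volume id is hypothetical):

ec2-describe-volumes | grep available
ec2-delete-volume vol-xxxxxxxx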

Note: EBS volumes attached to an instance after it is already running are, by default, left alone on termination (i.e., not deleted). The default rules are: If the EBS volume is created by the creation of the instance, then the termination of the instance deletes the volumes. If you created the volume explicitly, then you must delete it explicitly.

  3. Disable Termination

This is my favorite new safety feature. Years ago, I asked Amazon for the ability to lock an EC2 instance from being accidentally terminated, and with the launch of EBS boot instances, this is now possible. Using ec2-run-instances, the key option is:

ec2-run-instances \
  --disable-api-termination \
  [...]

Now, if you try to terminate the instance, you will get an error like:

Client.OperationNotPermitted: The instance 'i-XXXXXXXX' may not be terminated.
Modify its 'disableApiTermination' instance attribute and try again.

Yay!

To unlock the instance and allow termination through the API, use a command like:

ec2-modify-instance-attribute \
  --disable-api-termination false \
  INSTANCEID

Then end it all with:

ec2-terminate-instances INSTANCEID

Oh, wait! Did you make sure you unlocked and terminated the right instance?! :)

Note: disableApiTermination is also available when you run S3 based AMIs today, but since the instance can still be terminated from inside (shutdown/halt), I am moving towards EBS based instances for all-around safety.

Put It Together

Here’s a command line I just used to start up an EC2 instance of an Ubuntu 9.10 Karmic EBS boot AMI which I intend to use as a long-term production server with strong uptime and data safety requirements. I wanted to add all the protection available:

ec2-run-instances \
  --key $keypair \
  --availability-zone $availabilityzone \
  --user-data-file $startupscript \
  --block-device-mapping /dev/sda1=::false \
  --block-device-mapping /dev/sdh=$snapshotid::false \
  --instance-initiated-shutdown-behavior stop \
  --disable-api-termination \
  $amiid

With regular EBS snapshots of the volumes, copies to off site backups, and documented/automated restart processes, I feel pretty safe.

Runtime Modification

If you have a valuable running EC2 instance, but forgot to specify the above options to protect it when you started it, or you are now ready to turn a test system into a production system, you can still lock things after the fact using the ec2-modify-instance-attribute command (or equivalent API call).

For example:

ec2-modify-instance-attribute --disable-api-termination true INSTANCEID
ec2-modify-instance-attribute --instance-initiated-shutdown-behavior stop INSTANCEID
ec2-modify-instance-attribute --block-device-mapping /dev/sda1=:false INSTANCEID
ec2-modify-instance-attribute --block-device-mapping /dev/sdh=:false INSTANCEID

Notes:

  • Only one type of option can be specified with each invocation.

  • The --disable-api-termination option has no argument value when used in ec2-run-instances, but it takes a value of true or false in ec2-modify-instance-attribute.

  • You don’t specify the snapshot id when changing the delete-on-terminate state of an attached EBS volume.

  • You may change delete-on-termination to “true” for an EBS volume you created and attached after the instance was running. By default it will not be deleted, since you created it.

  • The above --block-device-mapping option requires recent ec2-api-tools to avoid a bug.

You can find out the current state of the options using ec2-describe-instance-attribute, which only takes one option at a time:

ec2-describe-instance-attribute --disable-api-termination INSTANCEID
ec2-describe-instance-attribute --instance-initiated-shutdown-behavior INSTANCEID
ec2-describe-instance-attribute --block-device-mapping INSTANCEID

Unfortunately, the block-device-mapping output does not currently show the state of the delete-on-termination flag, but thanks to Andrew Lusk for pointing out in a comment below that it is available through the API. Here’s a hack which extracts the information from the debug output:

ec2-describe-instance-attribute -v --block-device-mapping INSTANCEID | 
  perl -0777ne 'print "$1\t$2\t$3\n" while 
  m%<deviceName>(.*?)<.*?<volumeId>(.*?)<.*?<deleteOnTermination>(.*?)<%sg'

While we’re on the topic of EC2 safety, I should mention the well known best practice of separating development and production systems by using a different AWS account for each. Amazon lets you create and use as many accounts as you’d like even with the same credit card as long as you use a unique email address for each. Now that you can share EBS snapshots between accounts, this practice is even more useful.
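
For example, granting a second account permission to create volumes from one of your snapshots looks roughly like this with the EC2 API tools (the snapshot id and account number are placeholders):

ec2-modify-snapshot-attribute snap-xxxxxxxx \
  --create-volume-permission --add 111122223333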

What additional safety precautions do you take with your EC2 instances and data?

[Update 2011-09-13: Corrected syntax for modifying block-device-mapping on running instance (only one ":").]

Building EBS Boot AMIs Using Canonical's Downloadable EC2 Images

In the last article, I described how to use the vmbuilder software to build an EBS boot AMI from scratch for running Ubuntu on EC2 with a persistent root disk.

In the ec2ubuntu Google group, Scott Moser pointed out that users can take advantage of the Ubuntu images for EC2 that Canonical has already built with vmbuilder. This can simplify and speed up the process of building EBS boot AMIs for the rest of us.

Let’s walk through the steps, creating an EBS boot AMI for Ubuntu 9.10 Karmic.

  1. Run an instance of the Ubuntu 9.10 Karmic AMI, either 32-bit or 64-bit depending on which architecture AMI you wish to build. Make a note of the resulting instance id:

     # 32-bit
     instanceid=$(ec2-run-instances   \
       --key YOURKEYPAIR              \
       --availability-zone us-east-1a \
       ami-1515f67c |
       egrep ^INSTANCE | cut -f2)
     echo "instanceid=$instanceid"
    
     # 64-bit
     instanceid=$(ec2-run-instances   \
       --key YOURKEYPAIR              \
       --availability-zone us-east-1a \
       --instance-type m1.large       \
       ami-ab15f6c2 |
       egrep ^INSTANCE | cut -f2)
     echo "instanceid=$instanceid"
    

    Wait for the instance to move to the “running” state, then note the public hostname:

     while host=$(ec2-describe-instances "$instanceid" | 
       egrep ^INSTANCE | cut -f4) && test -z $host; do echo -n .; sleep 1; done
     echo host=$host
    

    Copy your X.509 certificate and private key to the instance. Use the correct locations for your credential files:

     rsync                            \
       --rsh="ssh -i YOURKEYPAIR.pem" \
       --rsync-path="sudo rsync"      \
       ~/.ec2/{cert,pk}-*.pem         \
       ubuntu@$host:/mnt/
    

    Connect to the instance:

     ssh -i YOURKEYPAIR.pem ubuntu@$host
    
  2. Install EC2 API tools from the Ubuntu on EC2 ec2-tools PPA because they are more up to date than the ones in Karmic, letting us register EBS boot AMIs:

     export DEBIAN_FRONTEND=noninteractive
     echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
       sudo tee /etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list &&
     sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873 &&
     sudo apt-get update &&
     sudo -E apt-get dist-upgrade -y &&
     sudo -E apt-get install -y ec2-api-tools
    
  3. Set up some parameters:

     codename=karmic
     release=9.10
     tag=server
     if [ $(uname -m) = 'x86_64' ]; then
       arch=x86_64
       arch2=amd64
       ebsopts="--kernel=aki-fd15f694 --ramdisk=ari-c515f6ac"
       ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"
     else
       arch=i386
       arch2=i386
       ebsopts="--kernel=aki-5f15f636 --ramdisk=ari-0915f660"
       ebsopts="$ebsopts --block-device-mapping /dev/sda2=ephemeral0"
     fi
    
  4. Download and unpack the latest released Ubuntu server image file. This contains the output of vmbuilder as run by Canonical.

     imagesource=http://uec-images.ubuntu.com/releases/$codename/release/unpacked/ubuntu-$release-$tag-uec-$arch2.img.tar.gz
     image=/mnt/$codename-$tag-uec-$arch2.img
     imagedir=/mnt/$codename-uec-$arch2
     wget -O- $imagesource |
       sudo tar xzf - -C /mnt
     sudo mkdir -p $imagedir
     sudo mount -o loop $image $imagedir
    
  5. [OPTIONAL] At this point $imagedir contains a mounted filesystem with the complete Ubuntu image as released by Canonical. You can skip this step if you just want an EBS boot AMI which is an exact copy of the released S3 based Ubuntu AMI from Canonical, or you can make any updates, installations, and customizations you’d like to have in your resulting AMI.

    In this example, we’ll perform similar steps as the previous tutorial and update the software packages to the latest releases from Ubuntu. Remember that the released EC2 image could be months old.

     # Allow network access from chroot environment
     sudo cp /etc/resolv.conf $imagedir/etc/
     # Fix what I consider to be a bug in vmbuilder
     sudo rm -f $imagedir/etc/hostname
     # Add multiverse
     sudo perl -pi -e 's%(universe)$%$1 multiverse%' \
       $imagedir/etc/ec2-init/templates/sources.list.tmpl
     # Add Alestic PPA for runurl package (handy in user-data scripts)
     echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu karmic main" |
       sudo tee $imagedir/etc/apt/sources.list.d/alestic-ppa.list
     sudo chroot $imagedir \
       apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571
     # Add ubuntu-on-ec2/ec2-tools PPA for updated ec2-ami-tools
     echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
       sudo tee $imagedir/etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list
     sudo chroot $imagedir \
       apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873
     # Upgrade the system and install packages
     sudo chroot $imagedir mount -t proc none /proc
     sudo chroot $imagedir mount -t devpts none /dev/pts
     sudo tee $imagedir/usr/sbin/policy-rc.d <<EOF
     #!/bin/sh
     exit 101
     EOF
     sudo chmod 755 $imagedir/usr/sbin/policy-rc.d
     export DEBIAN_FRONTEND=noninteractive
     sudo chroot $imagedir apt-get update &&
     sudo -E chroot $imagedir apt-get dist-upgrade -y &&
     sudo -E chroot $imagedir apt-get install -y runurl ec2-ami-tools
     sudo chroot $imagedir umount /proc
     sudo chroot $imagedir umount /dev/pts
     sudo rm -f $imagedir/usr/sbin/policy-rc.d
    

    Again, the above step is completely optional and can be skipped to create the EBS boot AMI that Canonical would have published.

  6. Copy the image files to a new EBS volume, snapshot it, and register the snapshot as an EBS boot AMI. Make a note of the resulting AMI id:

     size=15 # root disk in GB
     now=$(date +%Y%m%d-%H%M)
     prefix=ubuntu-$release-$codename-$tag-$arch-$now
     description="Ubuntu $release $codename $tag $arch $now"
     export EC2_CERT=$(echo /mnt/cert-*.pem)
     export EC2_PRIVATE_KEY=$(echo /mnt/pk-*.pem)
     volumeid=$(ec2-create-volume --size $size --availability-zone us-east-1a |
       cut -f2)
     instanceid=$(wget -qO- http://instance-data/latest/meta-data/instance-id)
     ec2-attach-volume --device /dev/sdi --instance "$instanceid" "$volumeid"
     while [ ! -e /dev/sdi ]; do echo -n .; sleep 1; done
     sudo mkfs.ext3 -F /dev/sdi
     ebsimage=$imagedir-ebs
     sudo mkdir $ebsimage
     sudo mount /dev/sdi $ebsimage
     sudo tar -cSf - -C $imagedir . | sudo tar xvf - -C $ebsimage
     sudo umount $ebsimage
     ec2-detach-volume "$volumeid"
     snapshotid=$(ec2-create-snapshot "$volumeid" | cut -f2)
     ec2-delete-volume "$volumeid"
     while ec2-describe-snapshots "$snapshotid" | grep -q pending
       do echo -n .; sleep 1; done
     ec2-register                   \
       --architecture $arch         \
       --name "$prefix"             \
       --description "$description" \
       $ebsopts                     \
       --snapshot "$snapshotid"
    
  7. Depending on what you want to keep from the above process, there are various things that you might want to clean up.

    If you no longer want to use an EBS boot AMI:

     ec2-deregister $amiid
     ec2-delete-snapshot $snapshotid
    

    When you’re done with the original instance:

     ec2-terminate-instances $instanceid
    

In this example, I set /mnt to the first ephemeral store on the instance even on EBS boot AMIs. This more closely matches the default on the S3 based AMIs, but means that /mnt will not be persistent across a stop/start of an EBS boot instance. If Canonical starts publishing EBS boot AMIs, they may or may not choose to make the same choice.

Community feedback, bug reports, and enhancements for these instructions are welcomed.

[Update 2010-01-14: Wrapped upgrade/installs inside of /usr/sbin/policy-rc.d setting to avoid starting daemons in chroot environment.]

[Update 2010-01-22: New location for downloadable Ubuntu images.]

[Update 2010-03-26: Path tweak, thanks to paul.]