June 2009 Archives

NOTE: This is an article from 2009, back before EBS boot instances were available on Amazon EC2. I recommend you use EBS boot instances which make it trivial to create new AMIs (single command/API call). Please stop reading this article now and convert to EBS boot AMIs!

When you start up an instance (server) on Amazon EC2, you need to pick the image or AMI (Amazon Machine Image) to run. This determines the Linux distribution and version as well as the initial software installed and how it is configured.

There are a number of public images to choose from with EC2, including the Ubuntu and Debian images published on http://alestic.com, but sometimes it is appropriate to create your own private or public images. There are two primary ways to create an image for EC2:

  1. Create an EC2 image from scratch. This process lets you control every detail of what goes into the image and is the easiest way to automate image creation.

  2. Rebundle a running EC2 instance into a new image. This approach is the topic of the rest of this article.

After you rebundle a running instance to create a new image, you can then run new EC2 instances of that image. Each instance starts off looking exactly like the original instance as far as the files on the disk go (with a few exceptions).

This guide is primarily written in the context of running Ubuntu on EC2, but the concepts should apply with little change to Debian and other Linux distributions.

To use this rebundling approach, you start by running an instance of an image that (1) is as close as possible to the image you want to create, and (2) is published by a source you trust. You then proceed to install software and configure that instance so that it contains exactly what you want to be available on new instances right down to the startup scripts.

The next step is to bundle the instance’s disk image into a new AMI, but before we get to that, it is important to understand a few things about security.


If you are creating a new EC2 image, you need to be very careful what pieces of information you inadvertently leave on the image, especially if you have the goal of publishing it as a public AMI. Anybody who runs an instance of that AMI will have access to the files you included in the bundle, and there is no way to modify an AMI after it has been created (though you can delete it).

For example, you don’t want to leave your AWS certificate or private key on the disk. You’ll even want to clear out the shell history file in case you had typed secret information in commands or in setting environment variables.

You also want to consider the security concerns from the perspective of the people who run the new image. For example, you don’t want to leave any passwords active on accounts. You should also make sure you don’t include your public ssh key in authorized_keys files. Leaving a back door into other people’s servers is in poor taste even if you have no intention of ever using it.

Here are some sample commands, but only you can decide if this wipes out too much or what other files you need to exclude depending on how you set up and used the instance you are bundling:

sudo rm -f /root/.*hist* $HOME/.*hist*
sudo rm -f /var/log/*.gz
sudo find /var/log -name mysql -prune -o -type f -print |
  while read i; do sudo cp /dev/null "$i"; done

Whole directories can be excluded from the image using the --exclude option of the ec2-bundle-vol command (see below).


Now we’re ready to bundle the actual EC2 image (AMI). To start, you need to copy your certificate and key to the instance ephemeral storage. Adjust the sample command to use the appropriate keypair file for authentication and the appropriate location of your certificate and private key files. If you are not running a modern Ubuntu image, then change remoteuser to “root”.


rsync   --rsh="ssh -i KEYPAIR.pem"   --rsync-path="sudo rsync"   PATHTOKEYS/{cert,pk}-*.pem   $remoteuser@$remotehost:/mnt/

Set up some environment variables for convenience in the following commands. A single S3 bucket can be used for multiple AMIs. The manifest prefix should be descriptive, especially if you plan to publish the AMI publicly, as it is the only piece of documentation many users will see when they look through AMI lists. At a minimum, I recommend including the Linux distribution (e.g., “ubuntu”), the architecture (e.g., “i386” or “32”), and the date (e.g., “20090621”), as well as some tag that indicates the special nature of the image (e.g., “desktop” or “lamp”).
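The variable block itself isn’t shown above; here is a sketch with hypothetical values that follow those naming recommendations (pick your own bucket name and prefix):

```shell
# Hypothetical values -- use your own S3 bucket and a descriptive prefix
bucket=my-ec2-images
prefix=ubuntu-9.04-jaunty-i386-lamp-20090621
```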


On the EC2 instance itself, you also set up some environment variables to help the bundle and upload commands. You can find these values in your EC2 account.

export AWS_USER_ID=<your-value>
export AWS_ACCESS_KEY_ID=<your-value>
export AWS_SECRET_ACCESS_KEY=<your-value>

if [ $(uname -m) = 'x86_64' ]; then
  arch=x86_64
else
  arch=i386
fi

Bundle the files on the current instance into a copy of the image under /mnt:

sudo -E ec2-bundle-vol   -r $arch   -d /mnt   -p $prefix   -u $AWS_USER_ID   -k /mnt/pk-*.pem   -c /mnt/cert-*.pem   -s 10240   -e /mnt,/root/.ssh,/home/ubuntu/.ssh

Upload the bundle to a bucket on S3:

ec2-upload-bundle    -b $bucket    -m /mnt/$prefix.manifest.xml    -a $AWS_ACCESS_KEY_ID    -s $AWS_SECRET_ACCESS_KEY

Now that the AMI files have been uploaded to S3, you register the image as a new AMI. This is done back on your local system (with the API tools installed):

ec2-register   --name "$bucket/$prefix"   $bucket/$prefix.manifest.xml

The output of this command is the new AMI id which is used to run new instances of that image.

It is important to use the same account access information for the ec2-bundle-vol and ec2-register commands even though they are run on different systems. If you don’t, you’ll get an error indicating you don’t have the rights to register the image.

Public Images

By default, the new EC2 image is private, which means it can only be seen and run by the user who created it. You can share access with another individual account or with the public.

To let another EC2 user run the image without giving access to the world:

ec2-modify-image-attribute -l -a <other-user-id> <ami-id>

To let all other EC2 users run instances of your image:

ec2-modify-image-attribute -l -a all <ami-id>


AWS will charge you standard S3 charges for the stored AMI files, which come out to $0.15 per GB per month. Note, however, that the bundling process uses sparse files and compression, so the final storage size is generally very small and your resulting cost may only be pennies per month.

The AMI owner incurs no charge when users run the image in new instances. The users who run the AMI are responsible for the standard hourly instance charges.


Before removing any public image, please consider the impact this might have on people who depend on that image to run their business. Once you publish an AMI, there is no way to tell how many users are regularly creating instances of that AMI and expecting it to stay available. There is also no way to communicate with these users to let them know that the image is going away.

If you decide you want to remove an image anyway, here are the steps to take.

Deregister the AMI

ec2-deregister ami-XXX

Delete the AMI bundle in S3:

ec2-delete-bundle   --access-key $AWS_ACCESS_KEY_ID   --secret-key $AWS_SECRET_ACCESS_KEY   --bucket $bucket   --prefix $prefix

[Update 2009-09-12: Security tweak for running under non-root.] [Update 2010-02-01: Update to use latest API/AMI tools and work for Ubuntu 9.10 Karmic.]

Ubuntu Karmic Koala Alpha is being developed and will be released as Ubuntu 9.10 in October. If you want to play around with Karmic Alpha on Amazon EC2, I have published new AMIs in the US and EU regions for 32- and 64-bit:


A Karmic desktop image for EC2 is also available if you wish to monitor progress in that area.

Warning! Karmic is an unstable alpha developer version and is not intended for use in anything resembling a production environment.

Please note that we are still defaulting to Amazon’s 2.6.21fc8 kernel which, though functional and stable, is getting older with each new release of Ubuntu. One effect of this is that AppArmor will not be enabled, though this should not affect the functionality of any software.


Amazon EC2 currently has a limit of 1,000 GB (1 TB) for EBS volumes (Elastic Block Store). It is possible to create file systems larger than this limit using RAID 0 across multiple EBS volumes. Using RAID 0 can also improve the performance of the file system, reducing total I/O wait, as demonstrated in a number of published EBS performance tests.

The following instructions walk through one way to set up RAID 0 across multiple EBS volumes. Note that 32-bit instances limit the maximum size of a file system, but on 64-bit instances file systems can get unreasonably large. This test was run with 40 EBS volumes of 1,000 GB each for a total of 40,000 GB (40 TB) in the resulting file system.

Actual command line output showing the size of the RAID:

# df /vol
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             41942906368      1312 41942905056   1% /vol

# df -h /vol
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               40T  1.3M   40T   1% /vol

These commands can run in less than 10 minutes and this could probably be reduced further by parallelizing the creation and attaching of the EBS volumes.

Note that the default limit is 20 EBS volumes per EC2 account. You can request an increase from Amazon if you need more.

Caution: 40 TB of EBS storage on EC2 will cost $4,000 per month plus usage charges.
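The arithmetic behind that figure, assuming an EBS storage rate of $0.10 per GB-month (the rate implied by $4,000 for 40,000 GB; check current AWS pricing):

```shell
# 40 volumes x 1,000 GB each, at the implied rate of $0.10 per GB-month
total_gb=$(( 40 * 1000 ))
monthly_usd=$(( total_gb * 10 / 100 ))
echo "$total_gb GB costs \$$monthly_usd per month"
```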


Start a 64-bit instance (say, Ubuntu 8.04 Hardy from http://alestic.com). Use your own KEYPAIR:

ec2-run-instances   --key KEYPAIR   --instance-type c1.xlarge   --availability-zone us-east-1a   ami-0772946e

Configurable parameters (set on both local host and on EC2 instance):
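The parameter block itself is not shown; a sketch of the values the following commands expect (the instance id is a placeholder you get from your own ec2-run-instances output):

```shell
volumes=40             # number of EBS volumes in the RAID 0 array
size=1000              # GB per volume (1,000 GB is the EBS maximum)
mountpoint=/vol        # where the file system will be mounted
instanceid=i-XXXXXXXX  # local host only: placeholder for your instance id
```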


On the local host (with EC2 API tools installed)…

Create and attach EBS volumes:

devices=$(perl -e 'for$i("h".."k"){for$j("",1..15){print"/dev/sd$i$j\n"}}'|
           head -$volumes)
i=1
for device in $devices; do
  volumeid=$(ec2-create-volume -z us-east-1a --size $size | cut -f2)
  echo "$i: created  $volumeid"
  ec2-attach-volume -d $device -i $instanceid $volumeid
  volumeids="$volumeids $volumeid"
  let i=i+1
done
echo "volumeids='$volumeids'"

On the EC2 instance (after setting parameters as above)…

Install software:

sudo apt-get update &&
sudo apt-get install -y mdadm xfsprogs

Set up the RAID 0 device:

devices=$(perl -e 'for$i("h".."k"){for$j("",1..15){print"/dev/sd$i$j\n"}}'|
           head -$volumes)

yes | sudo mdadm   --create /dev/md0   --level 0   --metadata=1.1   --chunk 256   --raid-devices $volumes   $devices

echo DEVICE $devices       | sudo tee    /etc/mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

Create the file system (pick your preferred file system type)

sudo mkfs.xfs /dev/md0


echo "/dev/md0 $mountpoint xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir $mountpoint
sudo mount $mountpoint

Check it out:

df -h $mountpoint

When you’re done with it and want to destroy the data and stop paying for storage, tear it down:

sudo umount $mountpoint
sudo mdadm --stop /dev/md0

Terminate the instance:

sudo shutdown -h now

On the local host (with EC2 API tools installed)…

Detach and delete volumes:

for volumeid in $volumeids; do
  ec2-detach-volume $volumeid
done

for volumeid in $volumeids; do
  ec2-delete-volume $volumeid
done


This article was originally posted on the EC2 Ubuntu group.

Thanks to M. David Peterson for the basic mdadm instructions:

[Update 2012-01-21: Added --chunk 256 based on community-recognized best practices.]

New updates have been released for the Ubuntu and Debian AMIs (EC2 images) published on:


The following improvements are included in this release:

  • Ubuntu 9.04 Jaunty now uses an Ubuntu mirror inside of EC2 hosted by RightScale. This dramatically improves the performance of updates and upgrades. Hardy and Intrepid were already using the mirrors inside EC2.

  • The Hardy, Intrepid, and Jaunty images have been enhanced to add failover for Ubuntu archive mirror hosts across availability zones (data centers). This change lets an Ubuntu instance perform package updates and upgrades even if one or two of the EC2 availability zones are completely unavailable.

  • The denyhosts package is now installed on desktop images for improved security. The Amazon abuse team has identified the Ubuntu desktop images as a source of compromised systems. The cause for this is believed to be insecure passwords set by users, since the desktop images have PasswordAuthentication enabled by default so that the NX client can connect. The denyhosts package blocks ssh attacks by adding remote systems to /etc/hosts.deny if they keep failing password logins.

    The published Ubuntu and Debian server images continue to have PasswordAuthentication turned off by default for improved security. If you choose to turn this on, I recommend installing a package like denyhosts and using software like the following to generate secure passwords:

    sudo apt-get install pwgen
    pwgen -s 10 1
  • The EC2 AMI tools have been upgraded to version 1.3-31780.

  • All software packages have been updated to versions current as of 2009-06-14.

Community support for Ubuntu on EC2 is available in this group:


Community support for Debian on EC2 is available in this group:


The 32-bit Debian squeeze images and the 32-bit Debian etch desktop image have not been updated yet due to problems with initial package installation. Images will be released when these issues are resolved.

The following enhancements have been made to the ec2ubuntu-build-ami software which is used to build Ubuntu and Debian images for EC2.

  • New --kernel and --ramdisk options have been added to specify AKI and ARI. If you specify a different kernel, you should also specify kernel modules with --package or install them with the --script option.

  • Support has been removed for Ubuntu Edgy, Feisty, and Gutsy. These releases have reached their end of life. To improve the clarity of the code, this software no longer supports building these images.

  • There has been a typo fix for $originaldir for folks who were using the --script option.

  • There has been a typo fix for /dev/ptmx though it apparently had no effect given how these images are built.

Thanks to Stephen Parkes and Paul Dowman for submitting patches.


Reposting a response I wrote to a user on Amazon’s EC2 forum who is having a hard time finding good engineers with AWS experience:

If there aren’t enough talented engineers who already know AWS, consider hiring talented engineers who can learn AWS.

You might find that there are existing AWS experts who aren’t looking for a full time job, but who are willing to be consulting resources to help bring your talent quickly up to speed with the ins and outs of building systems appropriately on AWS and to help answer questions and solve problems as they arise.

You should be aware that given the current growth of AWS, your engineers will be in high demand once they have AWS experience, so treat them well :)

Please do encourage your new talent to be active in the community too. It not only helps others, but it also significantly improves their own skills and expertise. I learned a lot of what I know about AWS by trying to solve other people’s problems.

The Twitter wires are aflame with cute quotes on how lightning from a “cloud” took down Amazon’s EC2 “cloud” service. Snarky snippets sell well on Twitter with no research or understanding of the facts behind the issues involved.

Since “the press” is now asking for my opinion, I figured I’d jot down a quick overview of my thoughts on this non-event which has been blown out of proportion. Sorry the press, we’re all the press now (for better or for worse) but you’re welcome to extract quotes with proper attribution :)

I don’t consider lightning taking out some racks of EC2 servers to be an “outage” even though this took down some customers’ running instances. EC2 and the rest of AWS were completely functional. If one or more EC2 instances fail for internal or external reasons, any customer who has built a reasonably elastic architecture on EC2 should be able, automatically or even manually, to fire up new servers and fail over with very little downtime, if any.

This was a “failure” or an “error” or a “fault”, not an outage. Architectures built on top of AWS should expect and plan for failures; that’s simply the way the service was designed. AWS provides dramatic resources for detecting and dealing with big and small failures and for building highly redundant, fault tolerant, distributed systems at a global level—instead of at an individual API call or EC2 instance level.

At a normal ISP, if your server goes down, it is a serious problem. You have to wait for the ISP to work to bring it up or drive over to the data center and work on it yourself. With EC2, servers are fairly disposable. When an EC2 server goes down (which is still rare) you have at your fingertips thousands of other servers in a half dozen data centers in multiple countries.

A well designed architecture built on top of EC2 keeps important information (databases, log files, etc) in easy to manage persistent and redundant data stores which can be snapshotted, duplicated, detached, and attached to new servers. EC2 provides advanced data center capabilities few companies can build on their own.

Yes, it can take some time and effort to learn this new way of working with on-demand, self-service, pay-as-you-go hardware infrastructure and sometimes the lessons are learned the hard way, but you’ll be better off in the end.

Elastic IP

Amazon EC2 supports Elastic IP Addresses to implement the effect of having a static IP address for public servers running on EC2. You can point the Elastic IP at any of your EC2 instances, changing the active instance at any time, without changing the IP address seen by the public outside of EC2.

This is a valuable feature for things like web and email servers, especially if you need to replace a failing server or upgrade or downgrade the hardware capabilities of the server, but read on for an insiders’ secret way to use Elastic IP addresses for non-public servers.

Internal Servers

Not all servers should be publicly accessible. For example, you may have an internal EC2 instance which hosts your database server accessed by other application instances inside EC2. You want to architect your installation so that you can replace the database server (instance failure, resizing, etc) but you want to make it easy to get all your application servers to start using the new instance.

There are a number of design approaches which people have used to accomplish this, including:

  1. Hard code the internal IP address into the applications and modify it whenever the internal server changes to a new instance (ugh and ouch).

  2. Run your own DNS server (or use an external DNS service) and change the IP address of the internal hostname to the new internal IP address (extra work and potentially extra failover time waiting for DNS propagation).

  3. Store the internal IP address in something like SimpleDB and change it when you want to point to a new EC2 instance (extra work, and requires extra coding for clients to keep checking the SimpleDB mapping).

The following approach is the one I use and is the topic of the rest of this article:

  1. Assign an Elastic IP to the internal instance and use the external Elastic IP DNS name. To switch servers, simply re-assign the Elastic IP to a new EC2 instance.

This last option uses a little-known feature of the Elastic IP Address system as implemented by Amazon EC2:

When an EC2 instance queries the external DNS name of an Elastic IP, the EC2 DNS server returns the internal IP address of the instance to which the Elastic IP address is currently assigned.

You may need to read that a couple times to grasp the implications as it is non-obvious that an “external” name will return an “internal” address.

Setting Up

You can create an Elastic IP address in a number of ways, including the EC2 Console or the EC2 API command line tools. For example:

$ ec2-allocate-address 

The address returned at this point is the external Elastic IP address. You don’t want to use this external IP address directly for internal server access since you would be charged for network traffic.

The next step is to assign the Elastic IP address to an EC2 instance (which is going to be your internal server):

$ ec2-associate-address -i i-07612d6e 75.101.137.243
ADDRESS  75.101.137.243  i-07612d6e

Once the Elastic IP has been assigned to an instance, you can describe that instance to find the external DNS name (which will include the external Elastic IP address in it):

$ ec2-describe-instances i-07612d6e | egrep ^INSTANCE | cut -f4

This is the permanent external DNS name for that Elastic IP address no matter how many times you change the instance to which it is assigned. If you query this DNS name from outside of EC2, it will resolve to the external IP address as shown above:

$ dig +short ec2-75-101-137-243.compute-1.amazonaws.com

However, if you query this DNS name from inside an EC2 instance, it will resolve to the internal IP address for the instance to which it is currently assigned:

$ dig +short ec2-75-101-137-243.compute-1.amazonaws.com

You can now use this external DNS name in your applications on EC2 instances to communicate with the server over the internal EC2 network and you won’t be charged for the network traffic as long as you’re in the same EC2 availability zone.

Changing Servers

If you ever need to move the service to a new EC2 instance, simply reassign the Elastic IP address to the new EC2 instance:

$ ec2-associate-address -i i-3b783452 75.101.137.243
ADDRESS  75.101.137.243  i-3b783452

and the original external DNS name will immediately resolve to the internal IP address of the new instance:

$ dig +short ec2-75-101-137-243.compute-1.amazonaws.com

Existing connections will fail and new connections to the external DNS name will automatically be opened on the new instance, using either the public IP address or the private IP address depending on where the client is when requesting DNS resolution.


It is not entirely intuitive to have your application use names like ec2-75-101-137-243.compute-1.amazonaws.com but you can make it clearer by creating a permanent entry in your DNS which points to that name with a CNAME alias. For example, using bind:

db.example.com.    CNAME    ec2-75-101-137-243.compute-1.amazonaws.com.

You can then use db.example.com to refer to the server internally and still not have to update your DNS when you change instances.

Further Notes

Even though you are using an Elastic IP address, you don’t need (and often don’t want) to allow external users to be able to access your internal servers. For example, it is just asking for trouble to expose a MySQL server to the Internet. Keep the security groups tight so that the internal servers and services can only be accessed from your other EC2 instances.

Open TCP connections to the original server will not survive when the Elastic IP address is assigned to a new EC2 instance. Some applications and clients will automatically attempt to re-open a failed connection, getting through to the new server on the new internal IP address, but other applications may need to be kicked or signaled so they attempt a new connection to the server.

When using this approach, you need one Elastic IP address for each internal server which needs to be addressed. AWS accounts default to a limit of 5 Elastic IP addresses, but you can request an increased limit.

How do you solve the problem of connecting internal EC2 servers to each other?

Update 2009-07-20: Correct example host name.
Update 2012-03-06: Here’s the original forum post from Amazon that revealed this trick: Elastic internal IP address
Update 2012-04-02: Use different internal IP address for new instance example.

UPDATE-3: As of 2009-06-16 02:35a, Canonical has restored the Ubuntu mirror for EC2 in the US region. It looks like everything is operating normally now.

UPDATE-2: Canonical has restored the Ubuntu mirror for EC2 in the EU region.

UPDATE-1: The DNS names for the Canonical Ubuntu mirrors on EC2 have been temporarily switched to point to the Ubuntu mirrors outside of EC2. This is a great idea that gets things working again until the EC2 mirrors can be brought back up. If you really want to use mirrors inside EC2 for performance or (minor) cost considerations, you could still switch to the RightScale mirrors.

As I write this, the Ubuntu archive mirrors on EC2 run by Canonical are currently unavailable in both the US and European regions. If you are running the Ubuntu images for EC2 published by Canonical, this prevents you from being able to apt-get update or apt-get upgrade.

The Canonical IS team is currently on the job investigating and correcting the issue, but if you need a quick fix in the mean time, you can run the following command on the instance to switch to a Canonical Ubuntu mirror outside of EC2 (standard EC2 network charges apply):

sudo perl -pi.orig -e "s/$oldarchive/$newarchive/" /etc/apt/sources.list

This command saves a copy of the original file in /etc/apt/sources.list.orig so that you can copy it back when the outage is over.

Alternatively, you could switch to the Ubuntu mirror in EC2 run by RightScale:

sudo perl -pi.orig -e "s/$oldarchive/$newarchive/" /etc/apt/sources.list

Note that RightScale does not mirror the source packages, so you might want to comment out the deb-src lines:

sudo perl -pi -e 's/^(deb-src)/#$1/' /etc/apt/sources.list

The Ubuntu images for EC2 that I publish on http://alestic.com use the RightScale Ubuntu mirrors by default and are not affected by the current outage.

Persistent storage on Amazon EC2 is accomplished through the use of Elastic Block Store (EBS) volumes. EBS is basically a storage area network (SAN) and can be thought of as an on-demand, virtual, redundant hard drive plugged in to the server with super-powers like snapshot/restore.

An EBS volume can be detached from one EC2 instance and attached to another. You can create a snapshot of an EBS volume and create new volumes from the snapshot to attach to other instances. Though this flexibility provides some useful abilities, it also presents some challenges.

In particular, the files stored on the EBS volume will be owned by specific numeric UIDs (users) and GIDs (groups). When you fire up and configure a new instance, the UIDs and GIDs on the EBS volume may not exactly match the numeric ids of the users and groups on the new instance, depending on how you set it up.

For example, when you install the MySQL software, the package will generally create a new “mysql” user with the next available UID. If you don’t create the various users in exactly the same order on new instances, you may end up with your database files owned by the “postfix” user instead of the “mysql” user. It’s happened to me and I’m not the only one.

There is a discussion about this topic on the ec2ubuntu Google Group and it has also been raised on Canonical’s EC2 beta mailing list.

Here are some of the different approaches to avoiding or fixing this problem:

  1. Bundle your own AMIs and always run instances of the same AMI when attaching EBS volumes with files. This works if you already have to bundle your AMIs for other reasons, but I often recommend against AMI rebundling because of the efforts involved, lack of reproducibility, and maintenance problems when the base image gets updated or has bugs fixed.

  2. Automate the creation of users and installation of packages in exactly the same order every time. This is likely to give you the same UID/GID values for each user, but it starts to get messy if you end up with an order mixing human users and software package users.

  3. Create all users/groups with hardcoded UIDs/GIDs before installing software packages. If you automate the creation of users and groups you can force the “mysql” and “postfix” users to have a specific UID value. Then you install the MySQL and Postfix packages and the software will use the users which already exist on the system. We ended up following this approach with our EC2 servers at CampusExplorer.com.

  4. Correct the ownership of files after mounting the EBS volume. This feels a bit messy to me, but it might be the only option in some cases. I must admit that I’ve done this manually a number of times, but only after finding problems like MySQL not starting because the files aren’t owned by the correct user. For example, say you needed to change files currently owned by “postfix” to be correctly owned by “mysql”:

    sudo find /vol -user postfix -print0 | xargs -0 sudo chown mysql

    If you are changing ownership of files after mounting the EBS volume, make sure you do it in an order which does not lose information. For example, if you have to swap “postfix” and “mysql” users, you’ll need to use a temporary third UID as a placeholder.

  5. On the ec2ubuntu Google group it was suggested that a central authority might be a way to solve the problem. I’ve never used this approach on Linux and am not sure how much work it would be setting up a reliable service like this on EC2.
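The “temporary third UID” caveat in approach 4 above can be sketched as a small helper. The function name, the UID values (105 for postfix, 106 for mysql, 60000 as an unused placeholder), and the mount point are all hypothetical; check /etc/passwd on your instances for the real numeric ids, and run this as root:

```shell
# Swap file ownership between two numeric UIDs by parking one set on an
# unused placeholder UID, so the two sets of files are never merged.
swap_uids() {  # usage: swap_uids <dir> <uid_a> <uid_b> <placeholder_uid>
  dir=$1; a=$2; b=$3; tmp=$4
  find "$dir" -user "$a" -print0 | xargs -0 -r chown "$tmp"
  find "$dir" -user "$b" -print0 | xargs -0 -r chown "$a"
  find "$dir" -user "$tmp" -print0 | xargs -0 -r chown "$b"
}
# Example (as root): swap_uids /vol 105 106 60000
```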

No matter what approach you use, it might be a good idea to add in some checks after you mount an EBS volume to make sure that the files are owned by the appropriate users. For example, you might verify that the mysql directory is owned by the mysql user.
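Such a check can be as small as comparing the owner reported by stat. The path and expected user here are examples, and `stat -c %U` is the GNU coreutils form found on Linux:

```shell
# Warn if a mounted path is not owned by the expected user.
check_owner() {  # usage: check_owner <path> <expected_user>
  [ "$(stat -c %U "$1" 2>/dev/null)" = "$2" ]
}
# Example after mounting an EBS volume:
check_owner /vol/mysql mysql || echo "WARNING: unexpected owner on /vol/mysql" >&2
```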

Solving this problem is something that I have only begun to work on, so I would appreciate any comments, pointers, and solutions that you may have.

Dmitriy Samovskiy discovered that the startup time of an EC2 instance (not the latest boot time) is hidden in the “Last-Modified” header of the EC2 meta-data response. You can only query this from the instance itself, but this should perform better than querying the EC2 API, especially if you tend to use Amazon’s Java command line tools.

For example:

HEAD http://169.254.169.254/latest/meta-data/ |
  egrep ^Last-Modified: | cut -f2- -d' '

Dmitriy has published a short bash script to calculate the instance run time using this trick:


As he points out, this is not documented by AWS, so be careful assuming it will always behave this way.

user-data Scripts

The Ubuntu and Debian EC2 images published on http://alestic.com allow you to send in a startup script using the EC2 user-data parameter when you run a new instance. This functionality is useful for automating the installation and configuration of software on EC2 instances.

The basic rule followed by the image is:

If the instance user-data starts with the two characters #! then the instance runs it as the root user on the first boot.

The “user-data script” is run late in the startup process, so you can assume that networking and other system services are functional.

If you start an EC2 instance with any user-data which does not start with #! the image simply ignores it and allows your own software to access and use the data as it sees fit.
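The detection the image performs is nothing more than “does the user-data begin with the two characters #!”. A sketch of that logic, as a function with a hypothetical name (this is not the image’s actual boot code):

```shell
# Run the given user-data file as a script only if it starts with "#!";
# otherwise ignore it, as the image does.
maybe_run_userdata() {
  if [ "$(head -c 2 "$1" 2>/dev/null)" = '#!' ]; then
    chmod +x "$1"
    "$1"          # the image runs this as root on first boot
  else
    echo "user-data is not a script; ignoring"
  fi
}
```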

This same user-data startup script functionality has been copied in the Ubuntu images published by Canonical, and your existing user-data script should be portable across images with little change. Read a comparison of the Alestic and Canonical EC2 images.


Here is a sample user-data script which sets up an Ubuntu LAMP server on a new EC2 instance:

#!/bin/bash
set -e -x
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get upgrade -y
tasksel install lamp-server
echo "Please remember to set the MySQL root password!"

Save this to a file named, say, install-lamp and then pass it to a new EC2 instance, say, Ubuntu 9.04 Jaunty:

ec2-run-instances --key KEYPAIR --user-data-file install-lamp ami-bf5eb9d6

Please see http://alestic.com for the latest AMI ids for Ubuntu and Debian.

Note: This simplistic user-data script is for demonstration purposes only. Though it does set up a fully functional LAMP server which may be as good as some public LAMP AMIs, it does not take into account important design issues like database persistence. Read Running MySQL on Amazon EC2 with Elastic Block Store.


Since you are passing code to the new EC2 instance, there is a small chance that you may have made a mistake in writing the software. Well, maybe not you, but somebody else out there might not be perfect, so I have to write this for them.

The stdout and stderr of your user-data script are written to /var/log/syslog, and you can review this log for any success and failure messages. It contains both what you echo directly in the script and the output of programs you run.

Tip: If you add set -x at the top of a bash script, then it will output every command executed. If you add set -e to the script, then the user-data script will exit on the first command which does not succeed. These help you quickly identify where problems might have started.
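A quick demonstration of both flags (a hypothetical demo you can run anywhere, not EC2-specific):

```shell
#!/bin/bash
# Demonstrate set -e -x: -x traces each command to stderr before it runs,
# and -e stops the script at the first failing command.
out=$(bash -c 'set -e -x
echo step-1
false
echo never-reached' 2>&1 || true)
echo "$out"
```

The trace shows `+ echo step-1` and `+ false`, but nothing after the failing command, which is exactly what makes a broken user-data script easy to localize in /var/log/syslog.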


Amazon EC2 limits the size of user-data to 16KB. If your startup instructions are larger than this limit, you can write a user-data script which downloads the full program(s) from somewhere else like S3 and runs them.
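A sketch of that download-and-run pattern (the bucket and object names below are made up for illustration):

```shell
#!/bin/bash
# Hypothetical user-data script that stays well under the 16KB limit by
# fetching the real setup program from elsewhere and running it.
set -e

fetch_and_run() {
  # Download a program to a temporary file, mark it executable, run it.
  tmp=$(mktemp)
  curl -s -o "$tmp" "$1"
  chmod +x "$tmp"
  "$tmp"
}

# On a real instance (bucket and object name are assumptions):
#   fetch_and_run http://my-setup-bucket.s3.amazonaws.com/setup.sh
```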

Though a shell is a handy tool for writing scripts to install and configure software, the user-data script can be written in any language which supports the shebang (#!) mechanism for running programs. This includes bash, Perl, Python, Ruby, tcl, awk, sed, vim, make, or any other language you can find pre-installed on the image.

If you want to use another language, a user-data script written in bash could install the language, install the program, and then run it.
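For instance, a hypothetical sketch of that bootstrap pattern (on a 2009 Ubuntu image the package and interpreter would likely be `python` rather than `python3`):

```shell
#!/bin/bash
# Hypothetical sketch: a bash user-data script that installs another
# language and then runs a program embedded in a here-document.
set -e
export DEBIAN_FRONTEND=noninteractive

# On a fresh instance, install the interpreter first:
#   apt-get update && apt-get install -y python3

python3 - <<'EOF'
print("configured by an embedded Python program")
EOF
```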


Setting up a new EC2 instance often requires installing private information like EC2 keys and certificates (e.g., to make AWS API calls). You should be aware that if you pass secrets in the user-data parameter, the complete input is available to any user or process running on the instance.

There is no way to change the instance user-data after instance startup, so anybody who has access to the instance can simply request http://169.254.169.254/latest/user-data and read the complete contents.

Depending on what software you install on your instance, even Internet users may be able to exploit holes to get at your user-data. For example, if your web server lets users specify a URL to upload a file, they might be able to enter the above URL and then read the contents.


Though user-data scripts are my favorite method to set up EC2 instances, they are not always the appropriate approach. Alternatives include:

  1. Manually ssh in to the instance and enter commands to install and configure software.

  2. Automatically ssh in to the instance with automated commands to install and configure software.

  3. Install and configure software using (1) or (2) and then rebundle the instance to create a new AMI. Use the new image when running instances.

  4. Build your own EC2 images from scratch.

The ssh options have the benefit of not putting any private information into the user-data accessible from the instance. They have the disadvantage of needing to monitor new instances waiting for the ssh server to accept connections; this complicates the startup process compared to user-data scripts.

The rebundled AMI approach and building your own AMI approach are useful when the installation and configuration of your required software take a very long time or can’t be done with automated processes (less common than you might think). A big drawback of creating your own AMIs is maintaining them, keeping up with security patches and other enhancements and fixes which might be applied by the base image maintainers.


Note to AMI authors: If you wish to add to your EC2 images the same ability to run user-data scripts, feel free to include the following code and make it run on image startup:



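The original ec2-run-user-data code is not reproduced here; a minimal, hypothetical sketch of the hook it implements (on first boot, fetch the user-data and execute it when it starts with #!) might look like:

```shell
#!/bin/bash
# Hypothetical sketch of a user-data startup hook; this is NOT the
# original ec2-run-user-data script, only an illustration of the idea.

is_script() {
  # True when the file begins with the two characters "#!"
  [ "$(head -c 2 "$1")" = '#!' ]
}

run_user_data() {
  payload=$(mktemp)
  curl -s --max-time 5 -o "$payload" \
    http://169.254.169.254/latest/user-data || return 0
  if is_script "$payload"; then
    chmod +x "$payload"
    "$payload"   # stdout/stderr reach /var/log/syslog via the init system
  fi
}

# A real image would call run_user_data once from an init script on first boot.
```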
Thanks to RightScale for the original idea of EC2 images with user-data startup hooks. RightScale has advanced startup plugins which include scripts, software packages, and attachments, all of which integrate with the RightScale service.

Thanks to Kim Scheibel and Jorge Oliveira who submitted code used in the original ec2-run-user-data script.

What do you use EC2 user-data for?
