October 2010 Archives

Update: Since this article was written, Amazon has released the ability to copy EBS boot AMIs between regions using the web console, command line, and API. You may still find information of use in this article, but Amazon has solved some of the harder parts for you.

Using Amazon EC2, you created an EBS boot AMI and it’s working fine, but now you want to run instances of that AMI in a different EC2 region. Since AMIs are region specific, you need a copy of the image in each region where instances are required.

This article presents one method you can use to copy an EBS boot AMI from one EC2 region to another.

Setup

Define the region from which we are copying the EBS boot AMI (source) and the region to which we are copying (target). Define the EBS boot AMI that we are copying from the source region.

We also need to determine which ids to use in the target region for the AKI (kernel image) and ARI (ramdisk image). These must correspond to the AKI and ARI in the source region or the new AMI may not work correctly. This is probably the trickiest step of the process and one which is not trivial to automate for the general case.

source_region=us-west-1  # replace with your regions
target_region=eu-west-1
source_ami=[AMI_ID_TO_COPY]
target_aki=[AKI_ID]
target_ari=[ARI_ID]

To make things easier, we’ll upload our own ssh public key to both regions. We could also do this with ssh keys generated by EC2, but that is slightly more complex, as EC2 generates unique keys for each region.

ssh_key_file=$HOME/.ssh/id_rsa
tmp_keypair=copy-ami-keypair-$$
ec2-import-keypair --region $source_region --public-key-file $ssh_key_file.pub $tmp_keypair
ec2-import-keypair --region $target_region --public-key-file $ssh_key_file.pub $tmp_keypair

Find the Ubuntu 10.04 LTS Lucid AMI in each of our regions of interest using the REST API provided by Ubuntu. Pick up some required information about the EBS boot AMI we are going to copy.

instance_type=c1.medium
source_run_ami=$(wget -q -O- http://uec-images.ubuntu.com/query/lucid/server/released.current.txt |
  egrep "server.release.*ebs.i386.$source_region" | cut -f8)
target_run_ami=$(wget -q -O- http://uec-images.ubuntu.com/query/lucid/server/released.current.txt |
  egrep "server.release.*ebs.i386.$target_region" | cut -f8)
architecture=$(ec2-describe-images --region $source_region $source_ami | egrep ^IMAGE | cut -f8)
ami_name=$(ec2-describe-images --region $source_region $source_ami | egrep ^IMAGE | cut -f3 | cut -f2 -d/)
source_snapshot=$(ec2-describe-images --region $source_region $source_ami | egrep ^BLOCKDEVICEMAPPING | cut -f4)
ami_size=$(ec2-describe-snapshots --region $source_region $source_snapshot | egrep ^SNAPSHOT | cut -f8)

Image Copy

Start an instance in each region. Have EC2 create a new volume from the AMI to copy and attach it to the source instance. Have EC2 create a new, blank volume and attach it to the target instance.

dev=/dev/sdi
xvdev=/dev/sdi # On modern Ubuntu, you will need to use: xvdev=/dev/xvdi
mount=/image

source_instance=$(ec2-run-instances \
  --region $source_region \
  --instance-type $instance_type \
  --key $tmp_keypair \
  --block-device-mapping $dev=$source_snapshot::true \
  $source_run_ami |
  egrep ^INSTANCE | cut -f2)

target_instance=$(ec2-run-instances \
  --region $target_region \
  --instance-type $instance_type \
  --key $tmp_keypair \
  --block-device-mapping $dev=:$ami_size:true \
  $target_run_ami |
  egrep ^INSTANCE | cut -f2)

while ! ec2-describe-instances --region $source_region $source_instance | grep -q running; do sleep 1; done
while ! ec2-describe-instances --region $target_region $target_instance | grep -q running; do sleep 1; done

source_ip=$(ec2-describe-instances --region $source_region $source_instance | egrep "^INSTANCE" | cut -f17)
target_ip=$(ec2-describe-instances --region $target_region $target_instance | egrep "^INSTANCE" | cut -f17)
target_volume=$(ec2-describe-instances --region $target_region $target_instance | egrep "^BLOCKDEVICE.$dev" | cut -f3)

Copy the file system from the EBS volume in the source region to the EBS volume in the target region.

ssh -i $ssh_key_file ubuntu@$source_ip "sudo mkdir -m 000 $mount && sudo mount $xvdev $mount"

ssh -i $ssh_key_file ubuntu@$target_ip "sudo mkfs.ext3 -F -L cloudimg-rootfs $xvdev &&
 sudo mkdir -m 000 $mount && sudo mount $xvdev $mount"

ssh -A -i $ssh_key_file ubuntu@$source_ip \
  "sudo -E rsync \
     -PazSHAX \
     --rsh='ssh -o \"StrictHostKeyChecking no\"' \
     --rsync-path 'sudo rsync' \
     $mount/ \
     ubuntu@$target_ip:$mount/"

ssh -i $ssh_key_file ubuntu@$target_ip "sudo umount $mount"

The cloudimg-rootfs file system label is required for Ubuntu to boot correctly on EC2; it can be left off for other distributions.
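If you want to double-check the label on the target volume before creating the AMI, you can read it back with e2label (a quick verification sketch, not part of the original procedure):

ssh -i $ssh_key_file ubuntu@$target_ip "sudo e2label $xvdev"   # should print: cloudimg-rootfs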

AMI Creation

Snapshot the target EBS volume and register it as a new AMI in the target region. If the source AMI included parameters like block device mappings for ephemeral storage, then add these options to the ec2-register command.

target_snapshot=$(ec2-create-snapshot --region $target_region $target_volume | egrep ^SNAPSHOT | cut -f2)

target_ami=$(ec2-register \
  --region $target_region \
  --snapshot $target_snapshot \
  --architecture $architecture \
  --name "$ami_name" \
  --kernel $target_aki \
  --ramdisk $target_ari |
  cut -f2)

echo "Make a note of the new AMI id in $target_region: $target_ami"

Make a note of the new AMI id.
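For illustration, if the source AMI had mapped ephemeral storage, the ec2-register call above could add a block device mapping along these lines (the /dev/sdb=ephemeral0 mapping is a hypothetical example, not taken from the source AMI):

# Hypothetical variant: also map the first ephemeral disk to /dev/sdb
ec2-register \
  --region $target_region \
  --snapshot $target_snapshot \
  --architecture $architecture \
  --name "$ami_name" \
  --kernel $target_aki \
  --ramdisk $target_ari \
  --block-device-mapping /dev/sdb=ephemeral0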

Cleanup

Terminate the EC2 instances that were used to copy the AMI. Since we let EC2 create the EBS volumes on instance run, EC2 will automatically delete those volumes when the instances terminate. Delete the temporary keypairs we used to access the instances.

ec2-terminate-instances --region $source_region $source_instance
ec2-terminate-instances --region $target_region $target_instance

ec2-delete-keypair --region $source_region $tmp_keypair
ec2-delete-keypair --region $target_region $tmp_keypair


OPTIONAL: If you don’t want to keep the AMI you created, you can remove it with commands like:

ec2-deregister --region $target_region $target_ami
ec2-delete-snapshot --region $target_region $target_snapshot

Other People’s AMIs

In order to follow this procedure you need to have read access to the snapshot associated with the source AMI, which generally means it must be an EBS boot AMI that you created and registered.
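A quick way to see whether the snapshot is accessible to your account (a rough check, assuming you already extracted $source_snapshot as in the setup above):

# If the snapshot is yours or has been shared with you, this prints a
# SNAPSHOT line; otherwise it typically returns an error.
ec2-describe-snapshots --region $source_region $source_snapshot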

If you want to copy a public EBS boot AMI that somebody else created and you don’t have read access to the EBS snapshot for that AMI (the common case) then you can’t create a volume directly from the snapshot.

However, you should be able to play a little trick where you run an instance of that AMI and immediately stop the instance. Detach the EBS root volume from that instance, and attach it to another instance to perform the copy as above.

The new AMI might not be exactly the same if it had a chance to start the actual boot processes, but it should be pretty close.
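For illustration, the run-and-stop trick might look something like the following sketch. The public AMI id is a placeholder, the /dev/sdj device name is illustrative, and the other variables come from the setup and instances earlier in this article:

# Run an instance of the public AMI, then stop it as soon as it is running.
public_ami=[PUBLIC_AMI_ID_TO_COPY]
trick_instance=$(ec2-run-instances \
  --region $source_region \
  --instance-type $instance_type \
  --key $tmp_keypair \
  $public_ami |
  egrep ^INSTANCE | cut -f2)
while ! ec2-describe-instances --region $source_region $trick_instance | grep -q running; do sleep 1; done
ec2-stop-instances --region $source_region $trick_instance
while ! ec2-describe-instances --region $source_region $trick_instance | grep -q stopped; do sleep 1; done

# Find the stopped instance's root EBS volume, detach it, and attach it to
# the source instance used earlier, then mount it and rsync as above.
trick_volume=$(ec2-describe-instances --region $source_region $trick_instance |
  egrep ^BLOCKDEVICE | cut -f3)
ec2-detach-volume --region $source_region $trick_volume
ec2-attach-volume --region $source_region $trick_volume \
  --instance $source_instance --device /dev/sdj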

Software: migrate-ebs-image

Lincoln Stein has written a program named migrate-ebs-image.pl which migrates an EBS boot AMI from one region to another using the above approach. You can read about it here:

http://search.cpan.org/~lds/VM-EC2/bin/migrate-ebs-image.pl

Cost

The AWS fees for copying an EBS boot AMI following the instructions in this article will include a couple hours of instance time, a small amount for the temporary EBS volumes, and some EBS I/O request charges. You will also be charged a nominal amount each month for the S3 storage of the EBS snapshot for the new AMI in the target region.

[Update 2010-11-01: Added uec-rootfs label for Ubuntu 10.10 thanks to Scott Moser]
[Update 2011-11-30: Updated to use /dev/xvdi for modern Ubuntu.]
[Update 2012-06-25: Replaced uec-rootfs with cloudimg-rootfs for Ubuntu 11.04 and up]
[Update 2012-07-30: Added note about Lincoln Stein’s migrate-ebs-image software]

Ubuntu 9.04 Jaunty End Of Life


Ubuntu 9.04 Jaunty has reached EOL (End Of Life). It is no longer supported by Ubuntu with security updates and patches. You have known this day was coming for 1.5 years, as all non-LTS Ubuntu releases are supported for only 18 months.

I have no plans to delete the Ubuntu 9.04 Jaunty AMIs for EC2 published under the Alestic name in the foreseeable future, but I request, recommend, and urge you to please stop using them and upgrade to an officially supported, active, kernel-consistent release of Ubuntu on EC2 like 10.04 LTS Lucid or 10.10 Maverick.

I have removed Jaunty from the list of public Ubuntu AMIs at the top of Alestic.com and you can see that we are getting that much closer to having all the main, active Ubuntu AMIs created by and officially supported by Canonical (and not me). Yay!

If you are running on EC2 with Jaunty, I believe the cleanest way to upgrade to Lucid or Maverick would be to start a new instance (preferably EBS boot) with the latest AMI from Canonical, apply your software installation and configuration changes, and copy your data over from the old instance. Since Jaunty was never officially supported by Canonical on EC2, it was using non-standard kernels and init software, so an in-place upgrade is not likely to go smoothly.
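For the data copy itself, a plain rsync over ssh is usually enough. A hedged example, run on the new instance (the host name, key file, and path are placeholders for your own setup):

# Pull data from the old Jaunty instance onto the new instance.
rsync -aPSH -e "ssh -i $HOME/.ssh/EC2KEYPAIR.pem" \
  ubuntu@OLD-JAUNTY-INSTANCE:/path/to/data/ \
  /path/to/data/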

Don’t terminate your old instances until you’re sure you have everything functioning on the new ones. EC2 is fantastic for use-cases like this where you can double your hardware temporarily for low cost.

Farewell Jaunty, you served us well.

Scott Moser and I were wondering if stopping and starting an EBS boot instance on EC2 would begin a new hour’s worth of charges or if AWS would not increase your costs if the stop/start were done a few minutes apart in the same hour.

For some reason, I had assumed that it would start a new hour of fees, possibly because of my experience with the somewhat unrelated case of terminating old instances and starting new ones. However, we decided it would be easy to test, so here are the results.

I tested with an Ubuntu 10.10 Maverick 32-bit server EBS boot AMI on the m1.small instance type in the ap-southeast-1 (Singapore) region. The AMI should have no effect on charges, so these results should apply to any OS you run on EC2.

I used an AWS account that did not have any EC2 instance fees in the Singapore region that month (Scott’s idea) so that this activity would be easy to see as the only charges on that account.

$ ec2-run-instances --region ap-southeast-1 --key KEYPAIR ami-6a136d38
RESERVATION r-48370d1a  063491364108    default
INSTANCE    i-908782c2  ami-6a136d38            pending KEYPAIR 0   m1.small    2010-10-14T16:38:17+0000    ap-southeast-1a aki-13d5aa41    monitoring-disabled                 ebs         paravirtual 
$ ec2-stop-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  running stopping

$ ec2-start-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  stopped pending
$ ec2-stop-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  running stopping

$ ec2-start-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  stopped pending
$ ec2-stop-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  running stopping

$ ec2-start-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  stopped pending
$ ec2-terminate-instances --region ap-southeast-1 i-908782c2
INSTANCE    i-908782c2  running shutting-down

You can see that I had four begin/end sessions where the instance was running; all of these took place within a single hour of real time. Before stopping, I waited each time for the instance to move to running, and I waited each time for the instance to move to stopped before starting it again.

I then waited for a while (hours) for the results of this activity to show up on the AWS account activity page.

[Image: AWS account activity snapshot]

If Amazon considered all of this activity to be a single instance running within a single real-time hour, then the charge would have been $0.095 (m1.small in the ap-southeast-1 region). However, Amazon considers each of these run sessions to start a new hour of charges, so I was charged for 4 hours of instance time, totaling $0.38.

In general, this additional charge is not a big deal. After all, for the smaller instance types we are only talking pennies or dimes for an hour of instance time. However, if you set up some sort of system where you are constantly stopping and starting instances within a single hour of wall clock time, then the effects of this policy could multiply your costs. You might want to investigate ways to keep your instances running more continuously and only stopping them if the anticipated down time is more than an hour.

Update: After I ran this test, I found that Shlomo Swidler had already answered this question on the EC2 forum, but it’s still fun to prove it.

Amazon recently launched the ability to upload your own ssh public key to EC2 so that it can be passed to new instances when they are launched. Prior to this you always had to use an ssh keypair that was generated by Amazon.

The benefits of using your own ssh key include:

  • Amazon never sees the private part of the ssh key (though they promise they do not save a copy after you download it, and we all trust them on this)

  • The private part of the ssh key is never transmitted over the network (with EC2-generated keys it is, though always over an encrypted connection, and we mostly trust this)

  • You can now upload the same public ssh key to all EC2 regions, so you no longer have to keep track of a separate ssh key for each region.

  • You can use your default personal ssh key with brand new EC2 instances, so you no longer have to remember to specify options like -i EC2KEYPAIR in every ssh, scp, rsync command.

If you haven’t yet created an ssh key for your local system, it can be done with the command:

ssh-keygen

You can accept the default file locations, and I recommend using a secure passphrase to keep the key safe.
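If you do use a passphrase, an ssh agent saves retyping it on every connection. A minimal example (many desktop environments already run an agent for you):

eval "$(ssh-agent -s)"   # start an agent if one is not already running
ssh-add ~/.ssh/id_rsa    # cache the decrypted key for this session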

Here are some sample commands that will upload your personal ssh public key from its default location on Ubuntu to all existing regions, giving it an EC2 keypair name of your current username. Adjust to suit your preferences:

keypair=$USER  # or some name that is meaningful to you
publickeyfile=$HOME/.ssh/id_rsa.pub
regions=$(ec2-describe-regions | cut -f2)

for region in $regions; do
  echo $region
  ec2-import-keypair --region $region --public-key-file $publickeyfile $keypair
done

When you start new instances, you can now specify this new keypair name and EC2 will provide the previously uploaded public ssh key to the instance, allowing you to ssh in. For example:

ec2-run-instances --key $USER ami-508c7839
[...]
ec2-describe-instances i-88eb15e5
[...]
ssh ubuntu@ec2-184-73-107-172.compute-1.amazonaws.com

Don’t forget to terminate the instance if you started one to test this.

[Update]

Based on a Twitter question, I tested uploading a DSA public ssh key (instead of RSA) and got this error from Amazon:

Client.InvalidKeyPair.Format: Invalid DER encoded key material

I don’t see why DSA would not work, since it’s just a blurb of text being stored by EC2 and passed to the instance to add to $HOME/.ssh/authorized_keys, but there you have it.
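If your existing key happens to be DSA, one workaround (a hedged sketch; the file name and region are illustrative) is to generate a separate RSA key just for EC2 and upload that instead:

ssh-keygen -t rsa -f ~/.ssh/id_rsa_ec2                                      # new RSA key, kept separate from the DSA key
ec2-import-keypair --region us-east-1 --public-key-file ~/.ssh/id_rsa_ec2.pub $USER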

Does anybody really need me to tell them that you can now run a copy of the newly released Ubuntu 10.10 Maverick on Amazon EC2 with official AMIs published by Canonical?

Or, by now, perhaps you have come to expect—like I have—that the smoothly oiled machine will naturally pump out Ubuntu AMIs for EC2 on the same pre-scheduled date that the larger Ubuntu machine churns out yet another smooth launch of yet another clean Ubuntu release.

The bigger question, I guess, might be:

Should I upgrade to Ubuntu 10.10 on EC2?

The first release after an LTS (Ubuntu 10.04 Lucid) is always a tough choice for me.

If I’m on a desktop, I like to upgrade to the latest a month or so after each semi-annual release. However, upgrading servers every six months tends to get tiresome, so I generally stick with the LTS until a newer release contains a software version that I need to use, or until the next LTS comes out two years later.

There have been a few minor problems with Ubuntu 10.04 Lucid on EC2, but the important ones for me have either been fixed with the release of updated AMIs or have simple workarounds. Other than that, I am pleased with Lucid on EC2 and don’t see an urgent need to upgrade beyond this LTS just yet.

I’d love to hear what you think.

ec2-consistent-snapshot version 0.35 has been released on the Alestic PPA. This software is a wrapper around the EBS create-snapshot API call and can be used to help ensure that the file system and any MySQL database on the EBS volume are in a consistent state, suitable for restoring at a later time.
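For context, a typical invocation looks something like this (the volume id, mount point, and the assumption that a MySQL database lives on an XFS volume are illustrative, not part of this announcement):

ec2-consistent-snapshot \
  --mysql \
  --xfs-filesystem /vol \
  --description "nightly backup" \
  vol-VVVV1111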

The most important change in this release is a fix for a defect that has been nagging many folks for months. In rare situations, the create-snapshot API call itself took longer than 10 seconds to return from the EC2 web service at Amazon. The software did not trap the alarm correctly and exited without unfreezing the XFS file system, which forced us to add an awkward unfreeze command to all cron jobs.

Thanks to Mike Lawlor who tracked down and provided a patch for this bothersome bug.

Thanks also to Brian Smith, who provided a patch implementing a new --mysql-socket option that a few users had been wishing for.

And thanks to Kenny Gryp, who provided a patch to clean up an error message when both the --help and --xfs-filesystem options were specified.

To get the latest version of ec2-consistent-snapshot installed, you should be able to upgrade from the Alestic PPA with:

sudo add-apt-repository ppa:alestic &&
sudo apt-get update &&
sudo apt-get install ec2-consistent-snapshot

New bugs in this software can be reported in launchpad at:

https://bugs.launchpad.net/ec2-consistent-snapshot
