Update: Since this article was written, Amazon has released the ability to copy EBS boot AMIs between regions using the web console, command line, and API. You may still find information of use in this article, but Amazon has solved some of the harder parts for you.
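For reference, the modern one-step equivalent uses the `aws ec2 copy-image` command. The sketch below only builds and prints the command so it stands alone; the AMI id and name are placeholders, and you would remove the `echo` and supply real credentials to run it:

```shell
# Sketch of the modern one-step copy with the aws CLI.
# ami-12345678 and my-copied-ami are placeholders; substitute your own.
source_region=us-west-1
target_region=eu-west-1
source_ami=ami-12345678

# Build the command and echo it; drop the echo to actually run it.
cmd="aws ec2 copy-image --source-region $source_region --source-image-id $source_ami --region $target_region --name my-copied-ami"
echo "$cmd"
```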
Using Amazon EC2, you created an EBS boot AMI and it’s working fine, but now you want to run instances of that AMI in a different EC2 region. Since AMIs are region specific, you need a copy of the image in each region where instances are required.
This article presents one method you can use to copy an EBS boot AMI from one EC2 region to another.
Define the region from which we are copying the EBS boot AMI (source) and the region to which we are copying (target). Define the EBS boot AMI that we are copying from the source region.
We also need to determine which ids to use in the target region for the AKI (kernel image) and ARI (ramdisk image). These must correspond to the AKI and ARI in the source region or the new AMI may not work correctly. This is probably the trickiest step of the process and one which is not trivial to automate for the general case.
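One way to find the matching kernel image is by its manifest name: the same public pv-grub kernel is published in every region under the same manifest file name but a different AKI id. The sketch below runs that matching logic against canned `IMAGE` lines standing in for `ec2-describe-images -o amazon --region ...` output; all ids and manifest paths here are examples, and real field positions may vary with your tool version:

```shell
# Match a source-region AKI to its target-region twin by manifest name.
# The IMAGE lines are canned stand-ins for output like:
#   ec2-describe-images -o amazon --region $source_region
#   ec2-describe-images -o amazon --region $target_region
# (ids and paths are examples, not guaranteed current)
source_kernels=$(printf 'IMAGE\taki-9ba0f1de\tec2-public-images/pv-grub-hd0_1.02-x86_64.gz.manifest.xml')
target_kernels=$(printf 'IMAGE\taki-62695816\tec2-public-images-eu/pv-grub-hd0_1.02-x86_64.gz.manifest.xml')

source_aki=aki-9ba0f1de   # the kernel the source AMI was registered with

# The manifest file name is region independent; the aki id is not.
manifest=$(echo "$source_kernels" | grep "$source_aki" | cut -f3 | cut -d/ -f2)
target_aki=$(echo "$target_kernels" | grep "$manifest" | cut -f2)
echo "$target_aki"
```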
source_region=us-west-1 # replace with your regions
target_region=eu-west-1
source_ami=[AMI_ID_TO_COPY]
target_aki=[AKI_ID]
target_ari=[ARI_ID]
To make things easier, we’ll upload our own ssh public key to both regions. We could also do this with ssh keys generated by EC2, but that is slightly more complex, as EC2 generates unique keys for each region.
ssh_key_file=$HOME/.ssh/id_rsa
tmp_keypair=copy-ami-keypair-$$
ec2-import-keypair --region $source_region \
  --public-key-file $ssh_key_file.pub $tmp_keypair
ec2-import-keypair --region $target_region \
  --public-key-file $ssh_key_file.pub $tmp_keypair
Find the Ubuntu 10.04 LTS Lucid AMI in each of our regions of interest using the REST API provided by Ubuntu. Pick up some required information about the EBS boot AMI we are going to copy.
instance_type=c1.medium
source_run_ami=$(wget -q -O- \
  http://uec-images.ubuntu.com/query/lucid/server/released.current.txt |
  egrep "server.release.*ebs.i386.$source_region" | cut -f8)
target_run_ami=$(wget -q -O- \
  http://uec-images.ubuntu.com/query/lucid/server/released.current.txt |
  egrep "server.release.*ebs.i386.$target_region" | cut -f8)
architecture=$(ec2-describe-images --region $source_region \
  $source_ami | egrep ^IMAGE | cut -f8)
ami_name=$(ec2-describe-images --region $source_region \
  $source_ami | egrep ^IMAGE | cut -f3 | cut -f2 -d/)
source_snapshot=$(ec2-describe-images --region $source_region \
  $source_ami | egrep ^BLOCKDEVICEMAPPING | cut -f4)
ami_size=$(ec2-describe-snapshots --region $source_region \
  $source_snapshot | egrep ^SNAPSHOT | cut -f8)
Start an instance in each region. Have EC2 create a new volume from the AMI to copy and attach it to the source instance. Have EC2 create a new, blank volume and attach it to the target instance.
dev=/dev/sdi
xvdev=/dev/sdi # On modern Ubuntu, you will need to use: xvdev=/dev/xvdi
mount=/image
source_instance=$(ec2-run-instances \
  --region $source_region \
  --instance-type $instance_type \
  --key $tmp_keypair \
  --block-device-mapping $dev=$source_snapshot::true \
  $source_run_ami | egrep ^INSTANCE | cut -f2)
target_instance=$(ec2-run-instances \
  --region $target_region \
  --instance-type $instance_type \
  --key $tmp_keypair \
  --block-device-mapping $dev=:$ami_size:true \
  $target_run_ami | egrep ^INSTANCE | cut -f2)
while ! ec2-describe-instances --region $source_region \
  $source_instance | grep -q running; do sleep 1; done
while ! ec2-describe-instances --region $target_region \
  $target_instance | grep -q running; do sleep 1; done
source_ip=$(ec2-describe-instances --region $source_region \
  $source_instance | egrep "^INSTANCE" | cut -f17)
target_ip=$(ec2-describe-instances --region $target_region \
  $target_instance | egrep "^INSTANCE" | cut -f17)
target_volume=$(ec2-describe-instances --region $target_region \
  $target_instance | egrep "^BLOCKDEVICE.$dev" | cut -f3)
Copy the file system from the EBS volume in the source region to the EBS volume in the target region.
ssh -i $ssh_key_file ubuntu@$source_ip \
  "sudo mkdir -m 000 $mount && sudo mount $xvdev $mount"
ssh -i $ssh_key_file ubuntu@$target_ip \
  "sudo mkfs.ext3 -F -L cloudimg-rootfs $xvdev &&
   sudo mkdir -m 000 $mount && sudo mount $xvdev $mount"
ssh -A -i $ssh_key_file ubuntu@$source_ip \
  "sudo -E rsync \
    -PazSHAX \
    --rsh='ssh -o \"StrictHostKeyChecking no\"' \
    --rsync-path 'sudo rsync' \
    $mount/ \
    ubuntu@$target_ip:$mount/"
ssh -i $ssh_key_file ubuntu@$target_ip \
  "sudo umount $mount"
The cloudimg-rootfs file system label is required for Ubuntu images to boot correctly on EC2. It can be left off for other distributions.
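You can check or set this label without touching a real EBS volume; mkfs.ext3 and e2label both accept a plain file, so a throwaway image is enough to demonstrate (the file name and size here are arbitrary):

```shell
# Demonstrate the cloudimg-rootfs label on a throwaway image file.
# No root or EBS volume needed: mkfs.ext3 -F accepts a regular file.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext3 -q -F -L cloudimg-rootfs "$img"
e2label "$img"        # prints the label: cloudimg-rootfs
rm -f "$img"
```

If a volume was created without the label, `e2label $device cloudimg-rootfs` sets it after the fact.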
Snapshot the target EBS volume and register it as a new AMI in the target region. If the source AMI included parameters like block device mappings for ephemeral storage, then add these options to the ec2-register command.
target_snapshot=$(ec2-create-snapshot --region $target_region \
  $target_volume | egrep ^SNAPSHOT | cut -f2)
target_ami=$(ec2-register \
  --region $target_region \
  --snapshot $target_snapshot \
  --architecture $architecture \
  --name "$ami_name" \
  --kernel $target_aki \
  --ramdisk $target_ari | cut -f2)
echo "Make a note of the new AMI id in $target_region: $target_ami"
Make a note of the new AMI id.
Terminate the EC2 instances that were used to copy the AMI. Since we let EC2 create the EBS volumes on instance run, EC2 will automatically delete those volumes when the instances terminate. Delete the temporary keypairs we used to access the instances.

ec2-terminate-instances --region $source_region $source_instance
ec2-terminate-instances --region $target_region $target_instance
ec2-delete-keypair --region $source_region $tmp_keypair
ec2-delete-keypair --region $target_region $tmp_keypair
OPTIONAL: If you don’t want to keep the AMI you created, you can remove it with commands like:
ec2-deregister --region $target_region $target_ami
ec2-delete-snapshot --region $target_region $target_snapshot
Other People’s AMIs
In order to follow this procedure you need to have read access to the snapshot associated with the source AMI, which generally means it must be an EBS boot AMI that you created and registered.
If you want to copy a public EBS boot AMI that somebody else created and you don’t have read access to the EBS snapshot for that AMI (the common case) then you can’t create a volume directly from the snapshot.
However, you should be able to play a little trick where you run an instance of that AMI and immediately stop the instance. Detach the EBS root volume from that instance, and attach it to another instance to perform the copy as above.
The new AMI might not be exactly the same as the original, since the instance may have started the actual boot process before it was stopped, but it should be pretty close.
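The trick described above can be sketched as the following command sequence. All instance and volume ids are placeholders, and the `run` wrapper only prints each command so the sketch stands alone; redefine it as shown in the comment to execute for real:

```shell
# Sketch: get at the root volume of a public EBS boot AMI whose
# snapshot you cannot read. All ids below are placeholders.
run() { echo "$@"; }    # to execute for real: run() { "$@"; }
region=us-west-1
public_ami=ami-0abcdef0

run ec2-run-instances --region $region $public_ami        # boot it...
run ec2-stop-instances --region $region i-11111111        # ...stop at once
run ec2-detach-volume --region $region vol-22222222       # root volume
run ec2-attach-volume --region $region vol-22222222 \
    -i i-33333333 -d /dev/sdi                             # worker instance
```

With the volume attached to your own worker instance, the rsync copy proceeds as in the main walkthrough.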
Lincoln Stein has written a program named migrate-ebs-image.pl which migrates an EBS boot AMI from one region to another using the above approach.
The AWS fees for copying an EBS boot AMI following the instructions in this article will include a couple hours of instance time, a small amount for the temporary EBS volumes, and some EBS I/O request charges. You will also be charged a nominal amount each month for the S3 storage of the EBS snapshot for the new AMI in the target region.
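As a rough worked example of that fee structure (every rate below is an illustrative placeholder, not current AWS pricing):

```shell
# Back-of-envelope cost estimate; all rates are made-up examples.
hours=2            # instance hours per region
instances=2        # one instance in each region
rate_cents=17      # example hourly instance rate, in cents
size_gb=10         # example AMI size
snap_cents_gb=10   # example monthly snapshot storage rate, cents/GB

copy_cents=$(( hours * instances * rate_cents ))
monthly_cents=$(( size_gb * snap_cents_gb ))
printf 'one-time copy: ~$%d.%02d\n' \
  $(( copy_cents / 100 )) $(( copy_cents % 100 ))
printf 'snapshot storage: ~$%d.%02d/month\n' \
  $(( monthly_cents / 100 )) $(( monthly_cents % 100 ))
```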
[Update 2010-11-01: Added uec-rootfs label for Ubuntu 10.10 thanks to Scott Moser]
[Update 2011-11-30: Updated to use /dev/xvdi for modern Ubuntu.]
[Update 2012-06-25: Replaced uec-rootfs with cloudimg-rootfs for Ubuntu 11.04 and up]
[Update 2012-07-30: Added note about Lincoln Stein’s migrate-ebs-image software]