Ubuntu 10.04 Lucid Released for Amazon EC2

Ubuntu 10.04 Lucid was released on Amazon EC2 right on schedule today along with the rest of the normal Ubuntu image download channels.

Congrats to all the Ubuntu folks who were involved in putting this together! Though I’ve enjoyed contributing to the cause over the years, I’m happy to say that I had very little involvement with this release on EC2, leaving me more time to focus on my startup and family. Though many individuals are involved, I’d like to acknowledge the hard work of Scott Moser who has taken the point for publishing official Ubuntu EC2 images these last couple releases.

This is also the first Ubuntu release on EC2 that includes officially supported EBS boot AMIs, taking yet another task off my plate and providing a trusted source for this useful image type.

I’ve listed all of the current Ubuntu and Debian AMI ids in the table at the top of Alestic.com. Simply click on the EC2 region where you want to run your instances to find the AMI ids.

If you’re running an older release of Ubuntu on EC2, see my recommended upgrade paths.

Upgrading Servers to Ubuntu 10.04 Lucid on Amazon EC2

Ubuntu 10.04 Lucid is expected to be released this Thursday (April 29, 2010). This is the first LTS (long term support) release since Ubuntu 8.04 Hardy, which has itself been a stable and reliable server platform for years both inside and outside of Amazon EC2.

If you’re starting a new server project on EC2, Ubuntu 10.04 Lucid would be a great release to start with. It will be supported for five years with security patches and major bug fixes.

If you are already running a different Ubuntu release on EC2, you should consider planning an upgrade to Lucid. Here are my thoughts for each release you might currently be running on EC2:

  • Ubuntu 8.04 Hardy should continue to be fairly stable for a while, though there does not appear to be a lot of EC2-specific development being applied to the AMIs, so you may want to upgrade to Lucid LTS at your leisure. You will probably want to start new instances, though with Scott’s comment below, it might be possible to upgrade in-place on EBS boot from Hardy to Lucid. If you happen to still be running the Alestic Hardy AMIs, you should at least consider upgrading to the Hardy AMIs published by Canonical and listed at the top of Alestic.com.

  • Ubuntu 8.10 Intrepid has reached its end of life and should no longer be used on EC2. If you are still running Intrepid, you should immediately upgrade to Karmic or Lucid by starting new instances.

  • Ubuntu 9.04 Jaunty on EC2 is still running on an old Fedora 8 kernel and is not supported by Ubuntu. I’ve been recommending an upgrade from Jaunty for a while. I continue to recommend upgrading to Karmic (if you want something that has been tested in production for a while) or Lucid (once you’re comfortable with the release).

  • Ubuntu 9.10 Karmic still works great on EC2, though there are a couple bothersome bugs (like hostname reset on reboot) that should be fixed by upgrading to Lucid. If you are running an EBS boot Karmic instance, then it is possible to upgrade in place, but it’s also fine to simply start new instances.

  • Ubuntu 10.04 Lucid pre-release instances should be upgraded to the released version. If you are running EBS boot Lucid, then it is possible to upgrade in place even if there is a newer kernel and ramdisk.

Scott Moser of Canonical has laid out the steps necessary to upgrade an EC2 instance in place without having to start a new EC2 instance. This only works for instances that are currently running an EBS boot pre-release Lucid AMI or an EBS boot 9.10 Karmic AMI. All other servers will need to be upgraded by starting new Lucid instances to replace the old instances. Fortunately, EC2 makes that easy.

In fact, EC2 makes starting new replacement instances so convenient that I would recommend following that procedure in most cases even if you think you can upgrade a server in place. If you encounter any trouble getting your service to run on Lucid, running a new instance makes it straightforward to fall back to the old server. And, once you are happy with the new server, make it your production system, terminate the old one, and don’t look back.

Welcome to the new world of cloud computing where you can instantly acquire and discard server hardware for mere dollars without worrying about large investments, complicated provisioning, and long term support costs.

I will add the new Lucid AMI ids to the table at the top of Alestic.com as soon as I get word that they are ready.

Identifying When a New EBS Volume Has Completed Initialization From an EBS Snapshot

On Amazon EC2, you can create a new EBS volume from an EBS snapshot using a command like

aws ec2 create-volume \
  --availability-zone us-east-1a \
  --snapshot-id snap-d53484bc

This returns almost immediately with a volume id which you can attach and mount on an EC2 instance. The EBS documentation describes the magic behind the immediate availability of the data on the volume:

“New volumes created from existing Amazon S3 snapshots load lazily in the background. This means that once a volume is created from a snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to your Amazon EBS volume before your attached instance can start accessing the volume and all of its data. If your instance accesses a piece of data which hasn’t yet been loaded, the volume will immediately download the requested data from Amazon S3, and then will continue loading the rest of the volume’s data in the background.”

This is a cool feature which allows you to start using the volume quickly without waiting for the blocks to be completely populated. In practice, however, there is a period of time where you experience high iowait when your application accesses disk blocks that need to be loaded. This manifests as anywhere from mild to extreme slowness for minutes to hours after the creation of the EBS volume.

For some applications (mine) the initial slowness is acceptable, knowing that it will eventually pick up and perform with the normal EBS access speed once all blocks have been populated. As Clint Popetz pointed out on the ec2ubuntu group, other applications might need to know when the EBS volume has been populated and is ready for high performance usage.

Though there is no API status to poll or trigger to notify you when all the blocks on the EBS volume have been populated, it occurred to me that there was a method which could be used to guarantee that you are waiting until the EBS volume is ready: simply request all of the blocks from the volume and throw them away.

Here’s a simple command which reads all of the blocks on an EBS volume attached to device /dev/xvdX or /dev/sdX (substitute your device name) and does nothing with them:

sudo dd if=/dev/xvdX of=/dev/null bs=10M

OR

sudo dd if=/dev/sdX of=/dev/null bs=10M

By the time this command is done, you know that all of the data blocks have been copied from the EBS snapshot in S3 to the EBS volume. At this point you should be able to start using the EBS volume in your application without the high iowait you would have experienced if you started earlier. Early reports from Clint are that this method works in practice.
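Device naming varies across AMIs and kernels, so the root device may appear as /dev/sda1, /dev/xvda1, or similar. For the common case where the snapshot-based volume is the root device, a small sketch like the following (assuming only standard Linux df and awk; the dd step is left as a comment) can detect the device name before you read it:

```shell
# Sketch (assumes standard Linux tools): detect the root device name,
# which may be /dev/sda1, /dev/xvda1, etc. depending on the AMI and kernel.
root_device=$(df / | awk 'NR==2 {print $1}')
echo "root device: $root_device"

# Then read every block to force population from the snapshot, e.g.:
#   sudo dd if=$root_device of=/dev/null bs=10M
```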

[Update 2012-04-02: We now have to support device names like sda1 and xvda1]

[Update 2019-08-07: New aws-cli command format]

New releases of Ubuntu and Debian Images for Amazon EC2 (20100319)

Note: I do not recommend that new users start with these AMIs. These AMIs run with older versions of Amazon’s Fedora 8 kernel which have some incompatibilities with Ubuntu and Debian (e.g., XFS is broken). My strong recommendation is for all users to convert to one of the Ubuntu 9.10 Karmic or Ubuntu 8.04 Hardy AMIs which run with a modern, compatible Ubuntu kernel.

I have released updates of the following legacy Ubuntu and Debian S3-based AMIs published in the “alestic” buckets:

  • Ubuntu: 8.04 Hardy, 8.10 Intrepid, 9.04 Jaunty
  • Debian: 4.0 Etch, 5.0 Lenny

I welcome testing and feedback from folks who are already using the older versions of these AMIs.

In addition to upgraded software packages, the following enhancements are in this release:

  • Wait for meta-data before deciding whether to generate ssh host key (Thanks to Dmitry for catching)

  • Patch from Tom White to support compressed user-data scripts

  • Allow user-data script to remove the “been run” file so that it is run on every boot instead of just the first

  • Upgrade EC2 AMI tools to 1.3-34544

  • Debian: Add Alestic PPA to apt sources

  • Debian: Install runurl from Alestic PPA
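For the compressed user-data support mentioned above, the idea is to gzip the startup script before passing it at instance launch; the AMI’s startup code decompresses it before running it. A minimal sketch (file names and script content are examples):

```shell
# Create a sample user-data script (example content).
cat > startup-script.sh <<'EOF'
#!/bin/bash
echo hello from user-data
EOF

# Compress it; the instance's startup code decompresses it before running.
gzip -c startup-script.sh > startup-script.sh.gz

# Launch with the compressed file, e.g.:
#   ec2-run-instances --user-data-file startup-script.sh.gz ...
```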

I would like to reiterate that these AMIs are not recommended for anybody except folks who are already using older versions of these AMIs, and I would encourage you to upgrade to an Ubuntu 9.10 Karmic AMI or the upcoming Ubuntu 10.04 Lucid AMI which will be released in April. The base Ubuntu image building responsibilities have been transferred to Canonical and I don’t have any plans to release new AMIs in the older Ubuntu or Debian series beyond what I am announcing here.

I have not yet posted the AMI ids on https://alestic.com pending testing and feedback, but here are the ids for those who need to run them:

us-east-1

ami-1116f978 Ubuntu 9.04 Jaunty   server  32-bit
ami-e116f988 Ubuntu 9.04 Jaunty   server  64-bit
ami-cd16f9a4 Ubuntu 9.04 Jaunty   desktop 32-bit
ami-c316f9aa Ubuntu 9.04 Jaunty   desktop 64-bit

ami-1316f97a Ubuntu 8.10 Intrepid server  32-bit
ami-e316f98a Ubuntu 8.10 Intrepid server  64-bit
ami-cb16f9a2 Ubuntu 8.10 Intrepid desktop 32-bit
ami-c116f9a8 Ubuntu 8.10 Intrepid desktop 64-bit

ami-e916f980 Ubuntu 8.04 Hardy    server  32-bit
ami-e716f98e Ubuntu 8.04 Hardy    server  64-bit
ami-1716f97e Ubuntu 8.04 Hardy    desktop 32-bit
ami-e516f98c Ubuntu 8.04 Hardy    desktop 64-bit

ami-eb16f982 Ubuntu 6.06 Dapper   server  32-bit
ami-f916f990 Ubuntu 6.06 Dapper   server  64-bit

ami-ed16f984 Debian 5.0 Lenny     server  32-bit
ami-fb16f992 Debian 5.0 Lenny     server  64-bit
ami-cf16f9a6 Debian 5.0 Lenny     desktop 32-bit
ami-c516f9ac Debian 5.0 Lenny     desktop 64-bit

ami-ef16f986 Debian 4.0 Etch      server  32-bit
ami-fd16f994 Debian 4.0 Etch      server  64-bit

us-west-1

ami-197a2b5c Ubuntu 9.04 Jaunty   server  32-bit
ami-237a2b66 Ubuntu 9.04 Jaunty   server  64-bit
ami-357a2b70 Ubuntu 9.04 Jaunty   desktop 32-bit
ami-c97a2b8c Ubuntu 9.04 Jaunty   desktop 64-bit

ami-257a2b60 Ubuntu 8.10 Intrepid server  32-bit
ami-2d7a2b68 Ubuntu 8.10 Intrepid server  64-bit
ami-377a2b72 Ubuntu 8.10 Intrepid desktop 32-bit
ami-d57a2b90 Ubuntu 8.10 Intrepid desktop 64-bit

ami-277a2b62 Ubuntu 8.04 Hardy    server  32-bit
ami-2f7a2b6a Ubuntu 8.04 Hardy    server  64-bit
ami-cf7a2b8a Ubuntu 8.04 Hardy    desktop 32-bit
ami-d77a2b92 Ubuntu 8.04 Hardy    desktop 64-bit

ami-217a2b64 Ubuntu 6.06 Dapper   server  32-bit
ami-e77a2ba2 Ubuntu 6.06 Dapper   server  64-bit

ami-d37a2b96 Debian 5.0 Lenny     server  32-bit
ami-df7a2b9a Debian 5.0 Lenny     server  64-bit
ami-e37a2ba6 Debian 5.0 Lenny     desktop 32-bit
ami-db7a2b9e Debian 5.0 Lenny     desktop 64-bit

ami-d17a2b94 Debian 4.0 Etch      server  32-bit
ami-dd7a2b98 Debian 4.0 Etch      server  64-bit

eu-west-1

ami-a798b3d3 Ubuntu 9.04 Jaunty   server  32-bit
ami-af98b3db Ubuntu 9.04 Jaunty   server  64-bit
ami-9798b3e3 Ubuntu 9.04 Jaunty   desktop 32-bit
ami-9d98b3e9 Ubuntu 9.04 Jaunty   desktop 64-bit

ami-a198b3d5 Ubuntu 8.10 Intrepid server  32-bit
ami-a998b3dd Ubuntu 8.10 Intrepid server  64-bit
ami-9198b3e5 Ubuntu 8.10 Intrepid desktop 32-bit
ami-9998b3ed Ubuntu 8.10 Intrepid desktop 64-bit

ami-a398b3d7 Ubuntu 8.04 Hardy    server  32-bit
ami-ab98b3df Ubuntu 8.04 Hardy    server  64-bit
ami-9398b3e7 Ubuntu 8.04 Hardy    desktop 32-bit
ami-8798b3f3 Ubuntu 8.04 Hardy    desktop 64-bit

ami-ad98b3d9 Ubuntu 6.06 Dapper   server  32-bit
ami-9598b3e1 Ubuntu 6.06 Dapper   server  64-bit

ami-8398b3f7 Debian 5.0 Lenny     server  32-bit
ami-8f98b3fb Debian 5.0 Lenny     server  64-bit
ami-8998b3fd Debian 5.0 Lenny     desktop 32-bit
ami-8b98b3ff Debian 5.0 Lenny     desktop 64-bit

ami-8198b3f5 Debian 4.0 Etch      server  32-bit
ami-8d98b3f9 Debian 4.0 Etch      server  64-bit

SCALE 8x Talk Notes: EC2 Beginners Workshop

At SCALE 8x (Southern California Linux Expo, Feb 2010) I did a walkthrough demonstrating how to use the AWS console to run, connect to, and terminate your first Ubuntu server on Amazon EC2.

Though they do not include my talking points or the Q&A discussion during and following the session, you can download my slides and workshop handout:

http://cdn.anvilon.com/20100220-scale/scale8x-ec2.pdf

http://cdn.anvilon.com/20100220-scale/scale8x-steps.pdf

Notes:

  • These slides were not created for use outside of the workshop, so they are not complete in themselves. I am making them available for the workshop participants who may wish to refer to them to refresh their memory on what was discussed.

  • The instructions talk about removing the private ssh key at the end. This would not be done in a normal secure environment.

  • This demo used a temporary Ubuntu AMI which had MediaWiki installed. This AMI is not designed for use in production systems as it includes a known MediaWiki admin password.

  • This AMI is subject to being deleted at any time without notice. Did I mention you should not use it in a production system?

The BitSource Interview of Eric Hammond (SCALE, EC2)

Matthew Sacks, of The BitSource, made the mistake of asking me some questions about Amazon EC2, so I rambled on far too long and the results are posted on the SCALE blog:

SCALE: Eric Hammond on Deploying Linux on EC2

In addition to some high level concepts, I point out that Ubuntu is, at the moment, the best choice for a Linux distro on EC2 if you want up-to-date images with modern, consistent kernels.

Resizing the Root Disk on a Running EBS Boot EC2 Instance

In a previous article I described how to run an EBS boot AMI with a larger root disk size than the default. That’s fine if you know the size you want before running the instance, but what if you have an EC2 instance already running and you need to increase the size of its root disk without running a different instance?

As long as you are OK with a little downtime on the EC2 instance (a few minutes), it is possible to swap out the root EBS volume with a larger copy without needing to start a new instance.
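At a high level, the volume swap can be sketched as follows. This is an outline only, not the full procedure: all ids and the size are placeholders, and each command is echoed through a run wrapper rather than executed, so you can read the flow safely and remove the wrapper to perform the steps for real.

```shell
# Outline of swapping the root EBS volume for a larger copy.
# Ids and the size are placeholders; "run" echoes instead of executing.
# (In practice, wait for each step to complete before starting the next.)
run() { echo "+ $*"; }

run aws ec2 stop-instances --instance-ids i-1234567890abcdef0
run aws ec2 create-snapshot --volume-id vol-11111111
run aws ec2 create-volume --availability-zone us-east-1a \
  --snapshot-id snap-22222222 --size 20
run aws ec2 detach-volume --volume-id vol-11111111
run aws ec2 attach-volume --volume-id vol-33333333 \
  --instance-id i-1234567890abcdef0 --device /dev/sda1
run aws ec2 start-instances --instance-ids i-1234567890abcdef0

# Back on the instance, grow the partition and file system, e.g.:
#   sudo growpart /dev/xvda 1 && sudo resize2fs /dev/xvda1
```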

Let’s walk through the steps on a sample Ubuntu 16.04 LTS Xenial HVM instance. I tested this with ami-40d28157 but check for the latest AMI ids.

On the instance we check the initial size of the root file system (8 GB):

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  913M  6.5G  13% /

New Ubuntu 8.04.3 Hardy AMIs for Amazon EC2

Scott Moser (Canonical) built and released new Ubuntu 8.04.3 LTS Hardy images and AMIs for Amazon EC2. I also published new EBS boot AMIs using the same images. I’ve listed all of the AMI ids on https://alestic.com (pick your region at the top).

These AMIs should work better in the us-west-1 region (apt sources.list) and have updated software packages so upgrades on new instances should be faster.

Southern California Linux Expo - February 19-21, 2010 at the Westin LAX

SCaLE 8x

The 8th Southern California Linux Expo (aka SCaLE 8x) is a community-organized, non-profit event. Those words and the incredibly cheap price might lead you to believe that it is not worth going to, but if this is your first time you’ll be amazed by the size, scope, and professionalism of the event with nearly a hundred exhibits and dozens of informative talks.

Even though you’re not paying hundreds of dollars for the conference fee, it’s still worth traveling to if you’re not in Los Angeles. If you are in LA, then you have no excuse.

Just like last year at SCaLE, I will be leading another “Try-It Lab” where we’ll help folks get started with using Amazon EC2 and Ubuntu Linux. More information about preparation will be posted on the SCaLE blog, so be sure to review it before attending if you’re interested in a hands-on, guided workshop experience with EC2. The lab seats “sold out” quickly last year, so make sure you get in early.

Deal for readers of Alestic.com: When you register for SCaLE, use the code “ERIC” for 50% off of the listed price. If you sign up today, that gives you a full access pass for a ridiculously low $35. Prices may go up as the weekend gets closer.

[Update 2010-02-16: Link to preparation instructions on SCALE blog]

Public EBS Boot AMIs for Ubuntu on Amazon EC2

If you’ve been following along, you probably know that I have been recommending that folks using EC2 switch to the official Ubuntu AMIs published by Canonical (Hardy or Karmic). I have been building and publishing Ubuntu AMIs since 2007 (including Dapper, Edgy, Feisty, Gutsy, Hardy, Intrepid, Karmic), but the last year my focus on this project has been to transition these responsibilities to Canonical who have more time and resources to support the initiative.

I’m happy to say that I’ve finally followed my own advice. For my personal Amazon EC2 servers (including for the Alestic.com web site) I am using Ubuntu 9.10 Karmic images published for EC2 by Canonical.

While I was making the transition, I also switched to EBS boot AMIs. However, since it sounds like Canonical is not planning to publish EBS boot AMIs until Lucid, I decided to continue in service to the community and make available EBS boot AMIs for running Ubuntu on EC2.

I have published EBS boot AMIs for Ubuntu 9.10 Karmic and Ubuntu 8.04 Hardy, both 32- and 64-bit architectures, in all current EC2 regions, for a total of a dozen new AMIs.

I chose to use the exact Ubuntu images which Canonical built for running Ubuntu on EC2. This means that these EBS boot AMIs work exactly the same as the official Canonical AMIs including ssh to the ubuntu user. Again, even though I’m publishing the EBS boot AMIs for Karmic and Hardy, the contents of the image were built by Canonical.

The EBS boot AMIs are listed on Alestic.com. I have restructured the table to better feature Canonical AMIs, and now you need to pick an EC2 region to see the IDs.

Give the EBS boot AMIs a spin and let me know if you run into any issues.

How to Report Bugs with Ubuntu on Amazon EC2: ubuntu-bug

The official Ubuntu AMIs published by Canonical for EC2 starting in October have proven to be solid and production worthy. However, you may still on occasion run into an issue which deserves to be brought to the attention of the Ubuntu server team developing these AMIs and the software which enables Ubuntu integration with EC2.

The easiest, most efficient, and most complete way to report problems with Ubuntu on EC2 is to use the ubuntu-bug tool which comes pre-installed on all Ubuntu systems.

The ubuntu-bug command requires a single argument which is one of:

  1. the name of an Ubuntu software package experiencing a problem,

  2. the path to a program related to the problem,

  3. the process id of the program experiencing the problem, or

  4. the path of a crash file.

When reporting EC2 startup issues with an Ubuntu instance, the involved package is generally ec2-init so the command to run would be:

ubuntu-bug ec2-init

This command should be run on the EC2 instance that is experiencing the problem. The ubuntu-bug command will collect relevant information about the instance and file it with the bug report to assist in tracking down and correcting the issue.

If the instance with the problem is no longer running or accessible, try to run another instance of the same AMI to report the bug. This will help submit the correct AMI information with the bug report.

If ubuntu-bug reports “This is not a genuine Ubuntu package” you might have to first run

sudo apt-get update

and then try again.

Unfortunately, ubuntu-bug is an interactive program which does not accept command line options to set choices, so you will need to respond to a couple prompts and then copy and paste a URL it provides to you. First, it asks:

What would you like to do? Your options are:
  S: Send report (1.5 KiB)
  V: View report
  K: Keep report file for sending later or copying to somewhere else
  C: Cancel
Please choose (S/V/K/C): S

Respond by hitting the “S” key because you really do want to report a problem.

ubuntu-bug then displays a URL and asks if you would like to launch a browser.

Choices:
  1: Launch a browser now
  C: Cancel
Please choose (1/C): C

Respond by hitting the “C” key as ubuntu-bug running on the EC2 instance can’t launch the web browser on your local system and you probably don’t want to use a terminal based browser.

Make a note of the URL displayed in:

*** To continue, you must visit the following URL:
  https://bugs.launchpad.net/ubuntu/+source/ec2-init/+filebug/LONGSTRINGHERE?

Copy the URL and paste it into your web browser. You will continue reporting the problem through your browser and the system information will be attached after you submit.

If this is the first time you have used Launchpad.net, you will be prompted to create an account. Use a valid email address as you will need to confirm it.

Launchpad will prompt you to enter a “Summary” which should be a short description of the bug. If it is not a duplicate of one of the bugs already entered, click “No I need to report a new bug” and enter the “Further Information”. Include as much information as possible relevant to the issue. If a developer can reproduce the bug using this description, then it will be addressed more easily.

For general information on submitting bugs in Ubuntu, please see:

https://help.ubuntu.com/community/ReportingBugs

You can also see a current list of open ec2-images bugs.

If you are reporting Ubuntu on EC2 bugs directly using Launchpad without ubuntu-bug (not recommended) make sure you include the AMI id and tag the bug with “ec2-images”.

Note that ubuntu-bug is not a mechanism for general support questions. One place to get help with running Ubuntu on EC2 is from the community in the ec2ubuntu Google group, and there’s always the general Amazon EC2 forum. You can occasionally get live help with Ubuntu on EC2 on the #ubuntu-server IRC channel on irc.freenode.net.

Three Ways to Protect EC2 Instances from Accidental Termination and Loss of Data

Here are a few little-publicized benefits that were launched with Amazon EC2’s new EBS boot instances: You can lock them from being accidentally terminated; you can prevent termination even when you try to shutdown the server from inside the instance; and you can automatically save your data storage when they are terminated.

In discussions with other EC2 users, I’ve found that a feeling of near panic is common when you go to terminate an instance: you check very carefully to make sure that you are deleting the right instance and not an active production system. Slightly less common but even worse is the feeling of dread when you realize you just casually terminated the wrong EC2 instance.

It is always recommended to set up your AWS architecture so that you are able to restart production systems on EC2 easily, as they could, in theory, be lost at any point due to hardware failure or other actions. However, new features released with the EBS boot instances help reduce the risks associated with human error.

The following examples will demonstrate with the EC2 API command line tools ec2-run-instances, ec2-modify-instance-attribute, and ec2-terminate-instances. Since AWS is Amazon’s “web service”, all of these features are also available through the API and should become available (if they aren’t already) in other tools, packages, and interfaces using the web service API.

  1. Shutdown Behavior

First, let’s look at what happens when you run a command like the following in an EC2 instance:

sudo shutdown -h now
# or, equivalently and much easier to type:
sudo halt

Using the legacy S3-based AMIs, either of the above terminates the instance and you lose all local and ephemeral storage (boot disk and /mnt) forever. Hope you remembered to save the important stuff elsewhere!

A shutdown from within an EBS boot instance, by default, will initiate a “stop” instead of a “terminate”. This means that your instance is not in a running state and not getting charged, but the EBS volumes still exist and you can “start” the same instance id again later, losing nothing.

You can explicitly set or change this with the --instance-initiated-shutdown-behavior option in ec2-run-instances. For example:

ec2-run-instances \
  --instance-initiated-shutdown-behavior stop \
  [...]

This is the first safety precaution and, as mentioned, should already be built in, though it doesn’t hurt to document your intentions with an explicit option.

  2. Delete on Termination

Though EBS volumes created and attached to an instance at instantiation are preserved through a “stop”/“start” cycle, by default they are destroyed and lost when an EC2 instance is terminated. This behavior can be changed with a delete-on-termination boolean value buried in the documentation for the --block-device-mapping option of ec2-run-instances.

Here is an example that says “Don’t delete the root EBS volume when this instance is terminated”:

ec2-run-instances \
  --block-device-mapping /dev/sda1=::false \
  [...]

If you are associating other EBS snapshots with the instance at run time, you can also specify that those created EBS volumes should be preserved past the lifetime of the instance:

  --block-device-mapping /dev/sdh=SNAPSHOTID::false

When you use these options, you’ll need to manually clean up the EBS volume(s) if you no longer want to pay for the storage costs after an instance is gone.

Note: EBS volumes attached to an instance after it is already running are, by default, left alone on termination (i.e., not deleted). The default rules are: If the EBS volume is created by the creation of the instance, then the termination of the instance deletes the volumes. If you created the volume explicitly, then you must delete it explicitly.

  3. Disable Termination

This is my favorite new safety feature. Years ago, I asked Amazon for the ability to lock an EC2 instance from being accidentally terminated, and with the launch of EBS boot instances, this is now possible. Using ec2-run-instances, the key option is:

ec2-run-instances \
  --disable-api-termination \
  [...]

Now, if you try to terminate the instance, you will get an error like:

Client.OperationNotPermitted: The instance 'i-XXXXXXXX' may not be terminated.
Modify its 'disableApiTermination' instance attribute and try again.

Yay!

To unlock the instance and allow termination through the API, use a command like:

ec2-modify-instance-attribute \
  --disable-api-termination false \
  INSTANCEID

Then end it all with:

ec2-terminate-instances INSTANCEID

Oh, wait! Did you make sure you unlocked and terminated the right instance?! :)

Note: disableApiTermination is also available when you run S3-based AMIs today, but since the instance can still be terminated from inside (shutdown/halt) I am moving towards EBS-based instances for all-around safety.

Put It Together

Here’s a command line I just used to start up an EC2 instance of an Ubuntu 9.10 Karmic EBS boot AMI which I intend to use as a long-term production server with strong uptime and data safety requirements. I wanted to add all the protection available:

ec2-run-instances \
  --key $keypair \
  --availability-zone $availabilityzone \
  --user-data-file $startupscript \
  --block-device-mapping /dev/sda1=::false \
  --block-device-mapping /dev/sdh=$snapshotid::false \
  --instance-initiated-shutdown-behavior stop \
  --disable-api-termination \
  $amiid

With regular EBS snapshots of the volumes, copies to off site backups, and documented/automated restart processes, I feel pretty safe.

Runtime Modification

If you have a valuable running EC2 instance, but forgot to specify the above options to protect it when you started it, or you are now ready to turn a test system into a production system, you can still lock things after the fact using the ec2-modify-instance-attribute command (or equivalent API call).

For example:

ec2-modify-instance-attribute --disable-api-termination true INSTANCEID
ec2-modify-instance-attribute --instance-initiated-shutdown-behavior stop INSTANCEID
ec2-modify-instance-attribute --block-device-mapping /dev/sda1=:false INSTANCEID
ec2-modify-instance-attribute --block-device-mapping /dev/sdh=:false INSTANCEID

Notes:

  • Only one type of option can be specified with each invocation.

  • The --disable-api-termination option has no argument value when used in ec2-run-instances, but it takes a value of true or false in ec2-modify-instance-attribute.

  • You don’t specify the snapshot id when changing the delete-on-terminate state of an attached EBS volume.

  • You may change the delete-on-terminate to “true” for an EBS volume you created and attached after the instance was running. By default it will not be deleted since you created it.

  • The above --block-device-mapping option requires recent ec2-api-tools to avoid a bug.

You can find out the current state of the options using ec2-describe-instance-attribute, which only takes one option at a time:

ec2-describe-instance-attribute --disable-api-termination INSTANCEID
ec2-describe-instance-attribute --instance-initiated-shutdown-behavior INSTANCEID
ec2-describe-instance-attribute --block-device-mapping INSTANCEID

Unfortunately, the block-device-mapping output does not currently show the delete-on-termination value, but thanks to Andrew Lusk for pointing out in a comment below that it is available through the API. Here’s a hack which extracts the information from the debug output:

ec2-describe-instance-attribute -v --block-device-mapping INSTANCEID | 
  perl -0777ne 'print "$1\t$2\t$3\n" while 
  m%<deviceName>(.*?)<.*?<volumeId>(.*?)<.*?<deleteOnTermination>(.*?)<%sg'

While we’re on the topic of EC2 safety, I should mention the well known best practice of separating development and production systems by using a different AWS account for each. Amazon lets you create and use as many accounts as you’d like even with the same credit card as long as you use a unique email address for each. Now that you can share EBS snapshots between accounts, this practice is even more useful.

What additional safety precautions do you take with your EC2 instances and data?

[Update 2011-09-13: Corrected syntax for modifying block-device-mapping on running instance (only one “:”)]

Building EBS Boot AMIs Using Canonical's Downloadable EC2 Images

In the last article, I described how to use the vmbuilder software to build an EBS boot AMI from scratch for running Ubuntu on EC2 with a persistent root disk.

In the ec2ubuntu Google group, Scott Moser pointed out that users can take advantage of the Ubuntu images for EC2 that Canonical has already built with vmbuilder. This can simplify and speed up the process of building EBS boot AMIs for the rest of us.

Let’s walk through the steps, creating an EBS boot AMI for Ubuntu 9.10 Karmic.

  1. Run an instance of the Ubuntu 9.10 Karmic AMI, either 32-bit or 64-bit depending on which architecture AMI you wish to build. Make a note of the resulting instance id:

     # 32-bit
     instanceid=$(ec2-run-instances   \
       --key YOURKEYPAIR              \
       --availability-zone us-east-1a \
       ami-1515f67c |
       egrep ^INSTANCE | cut -f2)
     echo "instanceid=$instanceid"
    
     # 64-bit
     instanceid=$(ec2-run-instances   \
       --key YOURKEYPAIR              \
       --availability-zone us-east-1a \
       --instance-type m1.large       \
       ami-ab15f6c2 |
       egrep ^INSTANCE | cut -f2)
     echo "instanceid=$instanceid"
    

    Wait for the instance to move to the “running” state, then note the public hostname:

     while host=$(ec2-describe-instances "$instanceid" |
       egrep ^INSTANCE | cut -f4) && test -z "$host"; do echo -n .; sleep 1; done
     echo host=$host
    

    Copy your X.509 certificate and private key to the instance. Use the correct locations for your credential files:

     rsync                            \
       --rsh="ssh -i YOURKEYPAIR.pem" \
       --rsync-path="sudo rsync"      \
       ~/.ec2/{cert,pk}-*.pem         \
       ubuntu@$host:/mnt/
    

    Connect to the instance:

     ssh -i YOURKEYPAIR.pem ubuntu@$host
    
  2. Install EC2 API tools from the Ubuntu on EC2 ec2-tools PPA because they are more up to date than the ones in Karmic, letting us register EBS boot AMIs:

     export DEBIAN_FRONTEND=noninteractive
     echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
       sudo tee /etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list &&
     sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873 &&
     sudo apt-get update &&
     sudo -E apt-get dist-upgrade -y &&
     sudo -E apt-get install -y ec2-api-tools
    
  3. Set up some parameters:

     codename=karmic
     release=9.10
     tag=server
     if [ $(uname -m) = 'x86_64' ]; then
       arch=x86_64
       arch2=amd64
       ebsopts="--kernel=aki-fd15f694 --ramdisk=ari-c515f6ac"
       ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"
     else
       arch=i386
       arch2=i386
       ebsopts="--kernel=aki-5f15f636 --ramdisk=ari-0915f660"
       ebsopts="$ebsopts --block-device-mapping /dev/sda2=ephemeral0"
     fi
    
  4. Download and unpack the latest released Ubuntu server image file. This contains the output of vmbuilder as run by Canonical.

     imagesource=http://uec-images.ubuntu.com/releases/$codename/release/unpacked/ubuntu-$release-$tag-uec-$arch2.img.tar.gz
     image=/mnt/$codename-$tag-uec-$arch2.img
     imagedir=/mnt/$codename-uec-$arch2
     wget -O- $imagesource |
       sudo tar xzf - -C /mnt
     sudo mkdir -p $imagedir
     sudo mount -o loop $image $imagedir
    
  5. [OPTIONAL] At this point $imagedir contains a mounted filesystem with the complete Ubuntu image as released by Canonical. You can skip this step if you just want an EBS boot AMI that is an exact copy of the released S3 based Ubuntu AMI from Canonical, or you can make any updates, installations, and customizations you’d like to have in your resulting AMI.

    In this example, we’ll perform steps similar to those in the previous tutorial, updating the software packages to the latest releases from Ubuntu. Remember that the released EC2 image could be months old.

     # Allow network access from chroot environment
     sudo cp /etc/resolv.conf $imagedir/etc/
     # Fix what I consider to be a bug in vmbuilder
     sudo rm -f $imagedir/etc/hostname
     # Add multiverse
     sudo perl -pi -e 's%(universe)$%$1 multiverse%' \
       $imagedir/etc/ec2-init/templates/sources.list.tmpl
     # Add Alestic PPA for runurl package (handy in user-data scripts)
     echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu karmic main" |
       sudo tee $imagedir/etc/apt/sources.list.d/alestic-ppa.list
     sudo chroot $imagedir \
       apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571
     # Add ubuntu-on-ec2/ec2-tools PPA for updated ec2-ami-tools
     echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
       sudo tee $imagedir/etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list
     sudo chroot $imagedir \
       apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873
     # Upgrade the system and install packages
     sudo chroot $imagedir mount -t proc none /proc
     sudo chroot $imagedir mount -t devpts none /dev/pts
     sudo tee $imagedir/usr/sbin/policy-rc.d >/dev/null <<EOF
     #!/bin/sh
     exit 101
     EOF
     sudo chmod 755 $imagedir/usr/sbin/policy-rc.d
     export DEBIAN_FRONTEND=noninteractive
     sudo chroot $imagedir apt-get update &&
     sudo -E chroot $imagedir apt-get dist-upgrade -y &&
     sudo -E chroot $imagedir apt-get install -y runurl ec2-ami-tools
     sudo chroot $imagedir umount /proc
     sudo chroot $imagedir umount /dev/pts
     sudo rm -f $imagedir/usr/sbin/policy-rc.d
    

    Again, the above step is completely optional and can be skipped to create the EBS boot AMI that Canonical would have published.

  6. Copy the image files to a new EBS volume, snapshot it, and register the snapshot as an EBS boot AMI. Make a note of the resulting AMI id:

     size=15 # root disk in GB
     now=$(date +%Y%m%d-%H%M)
     prefix=ubuntu-$release-$codename-$tag-$arch-$now
     description="Ubuntu $release $codename $tag $arch $now"
     export EC2_CERT=$(echo /mnt/cert-*.pem)
     export EC2_PRIVATE_KEY=$(echo /mnt/pk-*.pem)
     volumeid=$(ec2-create-volume --size $size --availability-zone us-east-1a |
       cut -f2)
     instanceid=$(wget -qO- http://instance-data/latest/meta-data/instance-id)
     ec2-attach-volume --device /dev/sdi --instance "$instanceid" "$volumeid"
     while [ ! -e /dev/sdi ]; do echo -n .; sleep 1; done
     sudo mkfs.ext3 -F /dev/sdi
     ebsimage=$imagedir-ebs
     sudo mkdir $ebsimage
     sudo mount /dev/sdi $ebsimage
     sudo tar -cSf - -C $imagedir . | sudo tar xvf - -C $ebsimage
     sudo umount $ebsimage
     ec2-detach-volume "$volumeid"
     snapshotid=$(ec2-create-snapshot "$volumeid" | cut -f2)
     ec2-delete-volume "$volumeid"
     while ec2-describe-snapshots "$snapshotid" | grep -q pending
       do echo -n .; sleep 1; done
     ec2-register                   \
       --architecture $arch         \
       --name "$prefix"             \
       --description "$description" \
       $ebsopts                     \
       --snapshot "$snapshotid"
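If you want to double-check that the kernel, ramdisk, and block device mapping were recorded as intended, you can describe the new AMI. This is a sketch with a placeholder AMI id; the command is echoed for review rather than executed:

```shell
#!/bin/bash
# Placeholder: substitute the AMI id output by ec2-register above.
amiid=ami-00000000

# Echo the command for review; drop the echo to query the EC2 API.
checkcmd="ec2-describe-images $amiid"
echo "$checkcmd"
# The BLOCKDEVICEMAPPING lines in its output should show the root
# snapshot on /dev/sda1 and ephemeral0 on the /mnt device.
```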
    
  7. Depending on what you want to keep from the above process, there are various things that you might want to clean up.

    If you no longer want to use an EBS boot AMI:

     ec2-deregister $amiid
     ec2-delete-snapshot $snapshotid
    

    When you’re done with the original instance:

     ec2-terminate-instances $instanceid
    

In this example, I set /mnt to the first ephemeral store on the instance even on EBS boot AMIs. This more closely matches the default on the S3 based AMIs, but means that /mnt will not be persistent across a stop/start of an EBS boot instance. If Canonical starts publishing EBS boot AMIs, they may or may not make the same choice.
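One way to see how /mnt ends up backed on a running instance is to ask the instance metadata service for the block device mapping. This is a sketch; the fetch helper is factored out because the metadata service is only reachable from inside an EC2 instance:

```shell
#!/bin/sh
# Helper to read a key from the EC2 instance metadata service.
# Only works from within a running instance.
fetch() {
  wget -qO- "http://instance-data/latest/meta-data/$1"
}

# List each block device mapping entry and the device it maps to.
list_mappings() {
  for mapping in $(fetch block-device-mapping/); do
    printf '%s -> %s\n' "$mapping" "$(fetch "block-device-mapping/$mapping")"
  done
}

# On an instance built as above, the list_mappings output would
# include a line like: ephemeral0 -> sdb
```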

Community feedback, bug reports, and enhancements for these instructions are welcomed.

[Update 2010-01-14: Wrapped upgrade/installs inside of /usr/sbin/policy-rc.d setting to avoid starting daemons in chroot environment.]

[Update 2010-01-22: New location for downloadable Ubuntu images.]

[Update 2010-03-26: Path tweak, thanks to paul.]

Building EBS Boot and S3 Based AMIs for EC2 with Ubuntu vmbuilder

Here’s my current recipe for how to build an Ubuntu 9.10 Karmic AMI, either the new EBS boot or the standard S3 based, using the Ubuntu vmbuilder software. The Ubuntu vmbuilder utility replaces ec2ubuntu-build-ami for building EC2 images and it can build images for a number of other virtual machine formats as well.

There is a lot of room for simplification and scripting in the following instructions, but I figured I’d publish what is working now so others can take advantage of the research to date. Happy New Year!

Some sections are marked [For EBS boot AMI] or [For S3 based AMI] and should only be followed when you are building that type of AMI. The rest of the sections apply to either type. It is possible to follow all instructions to build both types of AMIs at the same time.

  1. Run an instance of Ubuntu 9.10 Karmic AMI, either 32-bit or 64-bit depending on which architecture AMI you wish to build. I prefer the c1.* instance types to speed up the builds, but you can get by cheaper with the m1.* instance types. Make a note of the resulting instance id:

     # 32-bit
     instanceid=$(ec2-run-instances   \
       --key YOURKEYPAIR              \
       --availability-zone us-east-1a \
       --instance-type c1.medium      \
       ami-1515f67c |
       egrep ^INSTANCE | cut -f2)
     echo "instanceid=$instanceid"
    
     # 64-bit
     instanceid=$(ec2-run-instances   \
       --key YOURKEYPAIR              \
       --availability-zone us-east-1a \
       --instance-type c1.xlarge      \
       ami-ab15f6c2 |
       egrep ^INSTANCE | cut -f2)
     echo "instanceid=$instanceid"
    

    Wait for the instance to move to the “running” state, then note the public hostname:

     while host=$(ec2-describe-instances "$instanceid" |
       egrep ^INSTANCE | cut -f4) && test -z "$host"; do echo -n .; sleep 1; done
     echo host=$host
    
  2. Copy your X.509 certificate and private key to the instance. Use the correct locations for your credential files:

     rsync                            \
       --rsh="ssh -i YOURKEYPAIR.pem" \
       --rsync-path="sudo rsync"      \
       ~/.ec2/{cert,pk}-*.pem         \
       ubuntu@$host:/mnt/
    
  3. Connect to the instance:

     ssh -i YOURKEYPAIR.pem ubuntu@$host
    
  4. Install the image building software. We install the python-vm-builder package from Karmic, but we’re going to be using the latest vmbuilder from the development branch in Launchpad because it has good bug fixes. We also use the EC2 API tools from the Ubuntu on EC2 ec2-tools PPA because they are more up to date than the ones in Karmic, letting us register EBS boot AMIs:

     export DEBIAN_FRONTEND=noninteractive
     echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
       sudo tee /etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list &&
     sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873 &&
     sudo apt-get update &&
     sudo -E apt-get upgrade -y &&
     sudo -E apt-get install -y \
       python-vm-builder ec2-ami-tools ec2-api-tools bzr &&
     bzr branch lp:vmbuilder
    

    You can ignore the “Launchpad ID” warning from bzr.

  5. Fill in your AWS credentials:

     export AWS_USER_ID=...
     export AWS_ACCESS_KEY_ID=...
     export AWS_SECRET_ACCESS_KEY=...
     export EC2_CERT=$(echo /mnt/cert-*.pem)
     export EC2_PRIVATE_KEY=$(echo /mnt/pk-*.pem)
    

    Set up parameters and create files to be used by the build process. The bucket value is only required for S3 based AMIs:

     bucket=...
     codename=karmic
     release=9.10
     tag=server
     if [ $(uname -m) = 'x86_64' ]; then
       arch=x86_64
       arch2=amd64
       pkgopts="--addpkg=libc6-i386"
       kernelopts="--ec2-kernel=aki-fd15f694 --ec2-ramdisk=ari-c515f6ac"
       ebsopts="--kernel=aki-fd15f694 --ramdisk=ari-c515f6ac"
       ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"
     else
       arch=i386
       arch2=i386
       pkgopts=
       kernelopts="--ec2-kernel=aki-5f15f636 --ec2-ramdisk=ari-0915f660"
       ebsopts="--kernel=aki-5f15f636 --ramdisk=ari-0915f660"
       ebsopts="$ebsopts --block-device-mapping /dev/sda2=ephemeral0"
     fi
     cat > part-i386.txt <<EOM
     root 10240 a1
     /mnt 1 a2
     swap 1024 a3
     EOM
     cat > part-x86_64.txt <<EOM
     root 10240 a1
     /mnt 1 b
     EOM
    
  6. Create a script to perform local customizations to the image before it is bundled. This is passed to vmbuilder below using the --execscript option:

     cat > setup-server <<'EOM'
     #!/bin/bash -ex
     imagedir=$1
     # fix what I consider to be bugs in vmbuilder
     perl -pi -e "s%^127.0.1.1.*\n%%" $imagedir/etc/hosts
     rm -f $imagedir/etc/hostname
     # Use multiverse
     perl -pi -e 's%(universe)$%$1 multiverse%' \
       $imagedir/etc/ec2-init/templates/sources.list.tmpl
     # Add Alestic PPA for runurl package (handy in user-data scripts)
     echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu karmic main" |
       tee $imagedir/etc/apt/sources.list.d/alestic-ppa.list
     chroot $imagedir \
       apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571
     # Add ubuntu-on-ec2/ec2-tools PPA for updated ec2-ami-tools
     echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
       tee $imagedir/etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list
     chroot $imagedir \
       apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873
     # Install packages
     chroot $imagedir apt-get update
     chroot $imagedir apt-get install -y runurl
     chroot $imagedir apt-get install -y ec2-ami-tools
     EOM
     chmod 755 setup-server
    
  7. Build the image:

     now=$(date +%Y%m%d-%H%M)
     dest=/mnt/dest-$codename-$now
     prefix=ubuntu-$release-$codename-$arch-$tag-$now
     description="Ubuntu $release $codename $arch $tag $now"
     sudo vmbuilder/vmbuilder xen ubuntu       \
       --suite=$codename                       \
       --arch=$arch2                           \
       --dest=$dest                            \
       --tmp=/mnt                              \
       --ec2                                   \
       --ec2-version="$description"            \
       --manifest=$prefix.manifest             \
       --lock-user                             \
       --part=part-$arch.txt                   \
       $kernelopts                             \
       $pkgopts                                \
       --execscript ./setup-server             \
       --debug
    

    [For S3 based AMI] include the following options in the vmbuilder command above. This does not preclude you from also building an EBS boot AMI with the same image. Make a note of the resulting AMI id output by vmbuilder:

       --ec2-bundle                            \
       --ec2-upload                            \
       --ec2-register                          \
       --ec2-bucket=$bucket                    \
       --ec2-prefix=$prefix                    \
       --ec2-user=$AWS_USER_ID                 \
       --ec2-cert=$EC2_CERT                    \
       --ec2-key=$EC2_PRIVATE_KEY              \
       --ec2-access-key=$AWS_ACCESS_KEY_ID     \
       --ec2-secret-key=$AWS_SECRET_ACCESS_KEY \
    
  8. [For EBS boot AMI] Copy the image files to a new EBS volume, snapshot it, and register the snapshot as an EBS boot AMI. Make a note of the resulting AMI id:

     size=15 # root disk in GB
     volumeid=$(ec2-create-volume --size $size --availability-zone us-east-1a |
       cut -f2)
     instanceid=$(wget -qO- http://instance-data/latest/meta-data/instance-id)
     ec2-attach-volume --device /dev/sdi --instance "$instanceid" "$volumeid"
     while [ ! -e /dev/sdi ]; do echo -n .; sleep 1; done
     sudo mkfs.ext3 -F /dev/sdi
     ebsimage=$dest/ebs
     sudo mkdir $ebsimage
     sudo mount /dev/sdi $ebsimage
     imageroot=$dest/root
     sudo mkdir $imageroot
     sudo mount -oloop $dest/root.img $imageroot
     sudo tar -cSf - -C $imageroot . | sudo tar xvf - -C $ebsimage
     sudo umount $imageroot $ebsimage
     ec2-detach-volume "$volumeid"
     snapshotid=$(ec2-create-snapshot "$volumeid" | cut -f2)
     ec2-delete-volume "$volumeid"
     while ec2-describe-snapshots "$snapshotid" | grep -q pending
       do echo -n .; sleep 1; done
     ec2-register                   \
       --architecture $arch         \
       --name "$prefix"             \
       --description "$description" \
       $ebsopts                     \
       --snapshot "$snapshotid"
    
  9. Depending on what you want to keep from the above process, there are various things that you might want to clean up.

    If you no longer want to use an S3 based AMI:

     ec2-deregister $amiid
     ec2-delete-bundle                     \
       --access-key $AWS_ACCESS_KEY_ID     \
       --secret-key $AWS_SECRET_ACCESS_KEY \
       --bucket $bucket                    \
       --prefix $prefix
    

    If you no longer want to use an EBS boot AMI:

     ec2-deregister $amiid
     ec2-delete-snapshot $snapshotid
    

    When you’re done with the original instance:

     ec2-terminate-instances $instanceid
    

In the above instructions I stray a bit from the defaults. For example, I add the runurl package from the Alestic PPA so that it is available for use in user-data scripts on first boot. I enable multiverse for easy access to more software, and I install ec2-ami-tools which works better for me than the current euca2ools.

I also set /mnt to the first ephemeral store on the instance even on EBS boot AMIs. This more closely matches the default on the S3 based AMIs, but means that /mnt will not be persistent across a stop/start of an EBS boot instance.

Explore and set options as you see fit for your applications. Go wild with the --execscript feature (similar to the ec2ubuntu-build-ami --script option) to customize your image.
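For example, here is a sketch of how the runurl package can be used from a user-data script at launch time. The URL and AMI id are placeholders for a setup script you host yourself and the AMI you built; the launch command is echoed for review rather than executed:

```shell
#!/bin/bash
# Write a user-data script that uses runurl to fetch and run a
# setup script you host (placeholder URL).
cat > user-data.sh <<'EOM'
#!/bin/bash -ex
# runurl downloads the given URL and runs it as a script
runurl example.com/install/my-app
EOM

# Echo the launch command for review; drop the echo to launch.
echo ec2-run-instances \
  --key YOURKEYPAIR \
  --user-data-file user-data.sh \
  ami-00000000
```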

The following vmbuilder options do not currently work with creating EC2 images: --mirror, --components, --ppa. I have submitted bug 502490 to track this.

As with much of my work here, I’m simply explaining how to use software that others have spent a lot of energy building. In this case a lot of thanks go to the Ubuntu server team for developing vmbuilder, the EC2 plugin, the ec2-init startup software, and the code which builds the official Ubuntu AMIs; especially Søren Hansen, Scott Moser, and Chuck Short. I also appreciate the folks who reviewed early copies of these instructions and provided feedback including Scott Moser, Art Zemon, Trifon Trifonov, Vaibhav Puranik, and Chris.

Community feedback, bug reports, and enhancements for these instructions are welcomed.

Call for testers (building EBS boot AMIs with Ubuntu vmbuilder)

I’m polishing up an article about how to build images from scratch with Ubuntu vmbuilder, both for S3 based AMIs and for EBS boot AMIs. Since nobody is paying any attention this new year’s weekend (except for you seven and you know who you are) I figured I’d wait until Monday or so to publish the article.

However, if you have nothing better to do this long weekend and you’d like a preview copy of the article, drop me a note with your email address. All I ask for in return is that you review (and perhaps even test) the instructions and send me feedback where things aren’t clear or don’t work.