New Options In ec2-expire-snapshots v0.11

The ec2-expire-snapshots program can be used to expire EBS snapshots in Amazon EC2 on a regular schedule that you define. It can be used as a companion to ec2-consistent-snapshot or independently.

There have been two recent submissions to the code from the community that provide new command line options in the latest version (v0.11) of ec2-expire-snapshots.

  1. Wayne Robinson discovered that EC2 sometimes limits the rate at which you can delete snapshots, and submitted code for a new --delete-delay option that tells ec2-expire-snapshots to pause for N seconds between each EBS snapshot deletion.

  2. Anthony Tonns uses EC2’s new feature to copy EBS snapshots from one region to another for redundancy, and found that Amazon does not associate snapshots from the same EBS volume in the source region with the same source volume in the target region. Anthony came up with the idea of putting the source volume id in a tag and submitted code for a new --volume-id-in-tag option that lets you specify the tag name.

Thanks also to varunwy for submitting a patch a while back to clean up some dependencies in the package installation.

On Ubuntu, you can install ec2-expire-snapshots from the Alestic PPA using:
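
Something like the following should work (it mirrors the Alestic PPA setup shown later on this page for ec2-consistent-snapshot; the exact package name is an assumption):

sudo add-apt-repository ppa:alestic/ppa
sudo apt-get update
sudo apt-get install ec2-expire-snapshots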

Convert Running EC2 Instance to EBS-Optimized Instance with Provisioned IOPS EBS Volumes

Amazon just announced two related features for getting super-fast, consistent performance with EBS volumes: (1) Provisioned IOPS EBS volumes, and (2) EBS-Optimized Instances.

Starting new instances and EBS volumes with these features is fine, but what if you already have some running instances you’d like to upgrade for faster and more consistent disk performance?

Given the two AWS features, there are two separate changes that need to be made to take full advantage:

  1. Convert the standard EBS volume(s) into new Provisioned IOPS EBS volume(s).

  2. Convert the standard EC2 instance into an EBS-Optimized instance.

This article demonstrates how to take an existing EBS boot instance that is already running and convert it to use both of these two EBS performance features. Note that there will be some increased costs; please study Amazon’s published pricing before attempting.
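
For orientation, here is a rough sketch of the shape of the conversion using aws-cli, not the full walkthrough; the instance id, volume id, availability zone, IOPS value, and the /dev/sda1 root device name are all placeholders to adjust:

instance=i-xxxxxxxx
oldvol=vol-xxxxxxxx   # current standard root EBS volume

# Stop the instance so the volume can be swapped and the attribute changed.
aws ec2 stop-instances --instance-ids $instance

# Snapshot the standard volume, then create a Provisioned IOPS copy from it.
snap=$(aws ec2 create-snapshot --volume-id $oldvol \
  --query SnapshotId --output text)
# (wait for the snapshot to complete)
newvol=$(aws ec2 create-volume --availability-zone us-east-1a \
  --snapshot-id $snap --volume-type io1 --iops 1000 \
  --query VolumeId --output text)

# Swap the volumes on the stopped instance.
aws ec2 detach-volume --volume-id $oldvol
aws ec2 attach-volume --volume-id $newvol \
  --instance-id $instance --device /dev/sda1

# Turn on EBS optimization (supported only on certain instance types),
# then start the instance back up.
aws ec2 modify-instance-attribute --instance-id $instance --ebs-optimized
aws ec2 start-instances --instance-ids $instance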

ec2-consistent-snapshot on GitHub and v0.43 Released

The source for ec2-consistent-snapshot has historically been available here:

ec2-consistent-snapshot on Launchpad.net using Bazaar

For your convenience, it is now also available here:

ec2-consistent-snapshot on GitHub using Git

You are welcome to fork ec2-consistent-snapshot under the liberal terms of the Apache License, Version 2.0.

I welcome patch submissions, especially if:

You Should Use EBS Boot Instances on Amazon EC2

EBS boot vs. instance-store

If you are just getting started with Amazon EC2, then use EBS boot instances and stop reading this article. Forget that you ever heard about instance-store and accept my apology that I just mentioned it. Once you are completely comfortable with using EBS boot instances on EC2, you may (or may not) want to come back here and read why you made a good decision.

EC2 experts may find that there are specific cases, few and far between, where instance-store might make sense, but they don’t attempt to use instance-store without understanding and accounting for all the serious drawbacks and dangers that go with making this choice. For example, experts using instance-store don’t mind losing all of the data on the instance as they have designed the system so that the data is stored elsewhere and so that a new instance can easily and automatically be rebuilt from scratch.

One of the challenges for beginners is that many of the benefits of EBS boot don’t necessarily seem like something they will need right away. Then, further down the road, they get into situations where they realize they would have been much better off with EBS boot in the first place, and they may find it takes some work to make the transition.

Big benefits of EBS boot instances

Here are some of the reasons I use and recommend EBS boot instances. None of these benefits are available with instance-store, so even a single one of these can be an overriding factor for choosing EBS boot.

Creating Public AMIs Securely for EC2

Last week, Amazon published a tutorial about best practices for creating public AMIs for use on EC2:

How To Share and Use Public AMIs in A Secure Manner

Though the general principles put forth in the tutorial are good, some of the specifics about how to accomplish those principles are flawed. (Comments here relate to the article update from June 7, 2011 3:45 AM GMT.)

The primary message of the article is that you should not publish private information on a public AMI. Excellent advice!

Unfortunately, the article seems to recommend, or at least to assume, that you are building the public AMI by taking a snapshot of a running instance. Though this method seems like an easy way to build an AMI and is fine for private AMIs, it is a dangerous approach for public AMIs because of how difficult it is to identify private information and to clear it from a running system in such a way that it does not leak into the public AMI.

Fixing Files on the Root EBS Volume of an EC2 Instance

You can examine and edit files on the root EBS volume of an EC2 instance even if you are in what you might consider a disastrous situation, like:

  • You lost your ssh key or forgot your password

  • You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it

  • Your long-running instance is hung for some reason, cannot be contacted, and fails to boot properly

  • You need to recover files off of the instance but cannot get to it

On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.

A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.

The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.

In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:
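
In outline, the approach looks something like this (a rough sketch with placeholder ids; the ec2-* commands are the same ones used in the hardware-replacement walkthrough later on this page):

# BROKENINSTANCE, HELPERINSTANCE, and VOLUMEID are placeholders.

# 1. Stop (do not terminate) the broken instance and detach its root volume.
ec2-stop-instances BROKENINSTANCE
ec2-detach-volume VOLUMEID

# 2. Attach the volume to a healthy instance in the same availability zone
#    on a spare device, and mount it there.
ec2-attach-volume -i HELPERINSTANCE -d /dev/sdf VOLUMEID
# On the helper instance (the device may show up as /dev/sdf or /dev/xvdf):
sudo mkdir -p /broken
sudo mount /dev/xvdf /broken

# 3. Fix the files (ssh keys, /etc/sudoers, ...), then unmount and detach.
sudo umount /broken
ec2-detach-volume VOLUMEID

# 4. Attach the volume back to the original instance as its root device
#    and start it up again.
ec2-attach-volume -i BROKENINSTANCE -d /dev/sda1 VOLUMEID
ec2-start-instances BROKENINSTANCE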

New Release of ec2-consistent-snapshot and Screencast by Ahmed Kamal

ec2-consistent-snapshot is a tool that uses the Amazon EC2 API to initiate a snapshot of an EBS volume with some additional work to help ensure that an XFS file system and/or MySQL database are in a consistent state on that snapshot.

Ahmed Kamal pointed out to me yesterday that we can save lots of trouble installing ec2-consistent-snapshot by adding a dependency on the new libnet-amazon-ec2-perl package in Ubuntu instead of forcing people to install the Net::Amazon::EC2 Perl package through CPAN (not easy for the uninitiated).

I released a new version of ec2-consistent-snapshot which has this new dependency and updated documentation. Installing this software on Ubuntu 10.04 Lucid, 10.10 Maverick, and the upcoming Natty release is now as easy as:
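
Presumably something like the following (the same PPA steps appear later in the IAM article on this page):

sudo add-apt-repository ppa:alestic/ppa
sudo apt-get update
sudo apt-get install ec2-consistent-snapshot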

A Simpler Way To Replace Instance Hardware on EC2

A while back I wrote an article describing a way to move the root EBS volume from one running instance to another. I pitched this as a way to replace the hardware for your instance in the event of failures.

Since then, I have come to the realization that there is a much simpler method to move your instance to new hardware, and I have been using this new method for months when I run into issues that I suspect might be attributed to underlying hardware issues.

This method is so simple, that I am almost embarrassed about having written the previous article, but I’ll point out below at least one benefit that still exists with the more complicated approach.

I now use this process as the second step, after a simple reboot, when I am experiencing odd problems like not being able to connect to a long-running EC2 instance. (The zeroth step is to start running and setting up a replacement instance in the event that steps one and two do not produce the desired results.)

Here goes…
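
The short version, as a sketch using the same ec2-* tools as the rest of this page ($region and $instanceid are placeholders): stop the EBS boot instance and start it again. A stopped instance generally comes back up on different underlying hardware, keeping the same instance id and EBS volumes.

ec2-stop-instances --region $region $instanceid
# wait for the instance to reach the "stopped" state, then:
ec2-start-instances --region $region $instanceid

Expect a short outage while the instance stops and boots, note that the public IP address changes across a stop/start, and re-associate any Elastic IP address if needed.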

ec2-consistent-snapshot: New release 0.35

ec2-consistent-snapshot version 0.35 has been released on the Alestic PPA. This software is a wrapper around the EBS create-snapshot API call and can be used to help ensure that the file system and any MySQL database on the EBS volume are in a consistent state, suitable for restoring at a later time.

The most important change in this release is a fix for a defect that has been nagging many folks for months. In rare situations, the create-snapshot API call itself took longer than 10 seconds to return from the EC2 web service at Amazon. The software did not trap the alarm correctly and exited without unfreezing the XFS file system, which forced us to add an awkward unfreeze command to all cron jobs.

Improving Security on EC2 With AWS Identity and Access Management (IAM)

A few hours ago, Amazon launched a public preview of AWS Identity and Access Management (IAM) which is a powerful feature if you have a number of developers who need to access and to manage resources for an AWS account. A unique IAM user can be created for each developer and specific permissions can be doled out as needed.

You can also create IAM users for system functions, dramatically increasing the security of your AWS account in the event a server is compromised. That benefit is the focus of this article using an example frequently cited by EC2 users: Automating EBS snapshots on a local EC2 instance without putting the keys to your AWS kingdom on the file system.

Before the release of AWS IAM, if you wanted to create EBS snapshots in a local cron job on an EC2 instance, you needed to put the master AWS credentials in the file system on that instance. If those AWS credentials were compromised, the attacker could perform all sorts of havoc with resources in your AWS account and charges to your credit card.

With the launch of AWS IAM, we can create a system IAM user with its own AWS keys and all it is allowed to do is… create EBS snapshots! These keys are placed on the instance and used in the snapshot cron job. Now, an attacker can do very little damage with those keys if they are compromised, and we all feel much safer.

The AWS IAM documentation is required reading and a great reference. This article is only intended to serve as a practical introduction to one simple application of IAM.

These instructions assume you are running Ubuntu 10.04 (Lucid) on both your local system and on Amazon EC2. Adjust as appropriate for other distributions and releases.

IAM Installation

Ubuntu does not yet have an official software package for AWS IAM, so we need to download the IAM command line toolkit from Amazon. This can be done on any machine including your local desktop. The IAM command line tools require Java so we need to make sure that is installed as well.

Eventually, you’ll want to install this software somewhere more permanent, but for this demo, we’ll just use it from a subdirectory.

sudo apt-get install openjdk-6-jre unzip
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
wget http://awsiammedia.s3.amazonaws.com/public/tools/cli/latest/IAMCli.zip
unzip IAMCli.zip
export AWS_IAM_HOME=$(echo $(pwd)/IAMCli-*)
export PATH=$PATH:$AWS_IAM_HOME/bin

The AWS IAM tools require you to save your AWS account’s main access key id and AWS secret access key in yet another file format. Create this AWS credential file as, say, $HOME/.aws-credentials-master.txt in the following format (replacing the values with your own credentials):

AWSAccessKeyId=YOURACCESSKEYIDHERE
AWSSecretKey=YOURSECRETKEYHERE

Note: The above is the sample content of a file you are creating, and not shell commands to run.

Protect the above file and set an environment variable to tell IAM where to find it:

export AWS_CREDENTIAL_FILE=$HOME/.aws-credentials-master.txt
chmod 600 $AWS_CREDENTIAL_FILE

We can now use the iam-* command line tools to create and manage AWS IAM users, groups, and policies.

Create IAM User

How you manage your users and groups is sure to be a personal preference that is fine-tuned over time, but for the purposes of this demo, I’ll propose that we put non-human users into a new group named “system” for tracking purposes.

iam-groupcreate -g system

Create the snapshotter system user, saving the keys to a file:

user=snapshotter
iam-usercreate -u $user -g system -k |
  tee $HOME/.aws-keys-$user.txt
chmod 600 $HOME/.aws-keys-$user.txt

You will want to have this snapshotter keys file on the EC2 instance, so copy it there:

rsync -Paz $HOME/.aws-keys-$user.txt REMOTEUSER@REMOTESYSTEM:

Allow IAM user snapshotter to create EBS snapshots of any EBS volume:

iam-useraddpolicy \
  -p allow-create-snapshot \
  -e Allow \
  -u $user \
  -a ec2:CreateSnapshot \
  -r '*'

There are a lot of preparatory and other commands in this article, but take a second to focus on the fact that the core, functional steps are simply the iam-usercreate and iam-useraddpolicy commands above. Two commands and you have a new AWS IAM user with restricted access to your AWS account.

Create EBS Snapshot

For the purposes of this demo, we’ll assume you’re using the ec2-consistent-snapshot tool to create EBS snapshots with a consistent file system and perhaps a consistent MySQL database. (If you’re not using this tool, then you could have simply used ec2-create-snapshot from any computer without having to go through the trouble of creating a new IAM user.)

Make sure you have the latest ec2-consistent-snapshot software installed on the EC2 instance:

sudo add-apt-repository ppa:alestic/ppa
sudo apt-get install ec2-consistent-snapshot

Create the snapshot on the EC2 instance. Adjust options to fit your local EBS volume mount points and MySQL database setup.

sudo ec2-consistent-snapshot \
  --aws-credentials-file $HOME/.aws-keys-snapshotter.txt \
  --xfs-filesystem /YOURMOUNTPOINT \
  YOURVOLUMEID
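
For unattended snapshots, the same command can go into a cron job. A hypothetical /etc/cron.d entry (the schedule, paths, and ids below are examples only) might look like:

# /etc/cron.d/ebs-snapshot (hypothetical example)
0 2 * * * root ec2-consistent-snapshot --aws-credentials-file /home/ubuntu/.aws-keys-snapshotter.txt --xfs-filesystem /YOURMOUNTPOINT YOURVOLUMEID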

Follow similar steps to create users and set policies for other system activities you perform on your EC2 instances. IAM can control access to many different AWS resource types, API calls, and specific resources, and it offers even more fine-tuned control parameters, including time-based restrictions.

The release of AWS Identity and Access Management alleviates one of the biggest concerns security-conscious folks used to have when they started using AWS with a single key that gave complete access and control over all resources. Now the control is entirely in your hands.

Cleanup

If you have followed the steps in this demo and you wish to undo most of what was done, here are some steps for reference.

Delete the IAM user and the IAM group:

iam-userdel -u $user -r
iam-groupdel -g system

Wipe the credentials and keys files and remove the downloaded and unzipped IAM command line toolkit:

sudo apt-get install wipe
wipe  $HOME/.aws-credentials-master.txt \
      $HOME/.aws-keys-$user.txt
rm    IAMCli.zip
rm -r $AWS_IAM_HOME

Make sure to wipe the snapshotter key file on the remote EC2 instance as well.
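
For example, on the EC2 instance where the key file was copied (the same wipe approach as above):

sudo apt-get install wipe
wipe $HOME/.aws-keys-snapshotter.txt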

Support

If you’re looking for help with AWS IAM, there is a new AWS IAM forum dedicated to the topic.

[Update 2010-11-19: Fix path where new zip file is expanded]

Move a Running EBS Boot Instance to New Hardware on Amazon EC2

NOTE: Though this method works and there is useful information in this article about things you can do with EBS boot instances, there is a simpler way to move an instance to new hardware.

Amazon EC2 has been experiencing some power issues in a portion of one of their many data centers. Even though the relative percentage of people affected might be small, when you have as many customers as AWS does, a small fraction can still be a large absolute number of customers who are affected.

Naturally, some customers will be upset about not having access to their systems, but in the time it takes to write a complaint, you might be able to move your server to new hardware within EC2 and go on with your business.

First, let’s assume that you are running an EBS boot instance. If you didn’t think they were the way to go before reading this article, I expect to convince you with this one example (and there are a number of other benefits).

Setup

For this demo, I’m going to start an instance. In your situation this instance represents the currently running server that you are depending on and which has valuable software, configuration, and perhaps data on its EBS volume(s). I’m also going to drop a note on the EBS root disk on my running instance so that I know it is the one I wanted to preserve. Again, this is just setting things up for the demo:

# SKIP THIS ENTIRE SECTION IF YOU ALREADY HAVE AN INSTANCE RUNNING
keypair=YOURKEYPAIR
sshkey=.ssh/$keypair.pem # or wherever you keep it
region=us-west-1 # pick your region
zone=${region}a # pick your availability zone
type=m1.small # pick your size
amiid=ami-cb97c68e # Ubuntu 10.04 Lucid, 32-bit, EBS boot in that region
oldinstanceid=$(ec2-run-instances \
  --key $keypair \
  --region $region \
  --availability-zone $zone \
  --instance-type $type \
  $amiid |
  egrep ^INSTANCE | cut -f2)
echo "instanceid=$oldinstanceid"

while host=$(ec2-describe-instances --region $region "$oldinstanceid" | 
  egrep ^INSTANCE | cut -f4) && test -z $host; do echo -n .; sleep 1; done
echo host=$host

echo "save the volume" | ssh -i $sshkey ubuntu@$host tee README.txt

Moving to a New Instance

Now, we pretend that the above created instance has failed in some way and we can no longer access it. Here’s how we get our server running on a new instance:

  1. If you can, stop the old instance. This increases your chance of keeping the file system consistent on the EBS volume. If the instance has really failed and this step does not work, then skip it.

     ec2-stop-instances --region $region $oldinstanceid
    
  2. Run a new instance with the same startup parameters as your old instance.

     newinstanceid=$(ec2-run-instances \
       --key $keypair \
       --region $region \
       --availability-zone $zone \
       --instance-type $type \
       $amiid |
       egrep ^INSTANCE | cut -f2)
     echo "newinstanceid=$newinstanceid"
    
  3. Wait until the new instance is running and then stop (not terminate) the new instance and detach the EBS boot volume from it. Delete the volume as it has nothing of importance, having just been created.

     ec2-stop-instances --region $region $newinstanceid
     newebsroot=$(ec2-describe-instances --region $region $newinstanceid |
       grep ^BLOCKDEVICE | grep /dev/sda1 | cut -f3)
     ec2-detach-volume --force --region $region $newebsroot
     ec2-delete-volume --region $region $newebsroot
    
  4. Detach the old (valuable) EBS root volume from the old (broken) instance.

     oldebsroot=$(ec2-describe-instances --region $region $oldinstanceid |
       grep ^BLOCKDEVICE | grep /dev/sda1 | cut -f3)
     ec2-detach-volume --force --region $region $oldebsroot
    
  5. Attach the old (valuable) EBS root volume to the new (stopped) instance.

     ec2-attach-volume \
       --region $region \
       -d /dev/sda1 \
       -i $newinstanceid \
       $oldebsroot
    

    If you had multiple EBS volumes attached to the old instance, you would move each one over in a similar manner.

  6. Restart the new instance which is now going to boot with the original volume.

     ec2-start-instances --region $region $newinstanceid
    

Voila! You have moved your server from an old, perhaps broken instance to new (or at least different) hardware keeping the same file system, and it took only a few minutes! If you’d like, ssh to the new instance and make sure that your valuable information is still there.

If you had an Elastic IP address associated with the old instance, you would move it to the new instance.
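
For example, with the EC2 API tools (substitute your Elastic IP address):

ec2-associate-address --region $region -i $newinstanceid YOUR.ELASTIC.IP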

Cleanup

You may terminate the old instance if you are comfortable that you won’t need it any more. If you were following this demo as an exercise, you should also terminate the new instance. Since you manually attached the old volume to the new instance yourself, it will not be deleted automatically when the instance is terminated. You can modify the instance attributes to change the delete-on-termination flag for the volume or simply delete it manually.

# BEWARE! Don't copy these blindly, but think about what you should do
ec2-terminate-instances --region $region $oldinstanceid
ec2-terminate-instances --region $region $newinstanceid
ec2-delete-volume --region $region $oldebsroot

Tips

The above process can also be used when your instance is running fine, but you want to move to a different instance type (size) of the same architecture. For example, you could move from m1.small up to c1.medium, or from m2.4xlarge down to c1.xlarge. Update: I wasn’t thinking clearly when I wrote that last sentence. It is possible to change the instance type much more easily: simply stop the instance, use ec2-modify-instance-attribute, and start it up again. Read more about this method.
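
In brief, that simpler type-change method looks something like this (a sketch; the instance must be an EBS boot instance, and the target type, like the placeholder below, must be of the same architecture):

ec2-stop-instances --region $region $instanceid
ec2-modify-instance-attribute --region $region --instance-type c1.medium $instanceid
ec2-start-instances --region $region $instanceid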

You can also resize the root disk of a running EC2 instance using the same basic principle of swapping out an EBS root volume on a running instance.

[Update 2011-02-07: Point out simpler method to move to new hardware.]
[Update 2012-01-08: Link to article on changing instance type.]

Identifying When a New EBS Volume Has Completed Initialization From an EBS Snapshot

On Amazon EC2, you can create a new EBS volume from an EBS snapshot using a command like

aws ec2 create-volume \
  --availability-zone us-east-1a \
  --snapshot-id snap-d53484bc

This returns almost immediately with a volume id which you can attach and mount on an EC2 instance. The EBS documentation describes the magic behind the immediate availability of the data on the volume:

“New volumes created from existing Amazon S3 snapshots load lazily in the background. This means that once a volume is created from a snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to your Amazon EBS volume before your attached instance can start accessing the volume and all of its data. If your instance accesses a piece of data which hasn’t yet been loaded, the volume will immediately download the requested data from Amazon S3, and then will continue loading the rest of the volume’s data in the background.”

This is a cool feature which allows you to start using the volume quickly without waiting for the blocks to be completely populated. In practice, however, there is a period of time where you experience high iowait when your application accesses disk blocks that need to be loaded. This manifests as anything from mild to extreme slowness for minutes to hours after the creation of the EBS volume.

For some applications (mine) the initial slowness is acceptable, knowing that it will eventually pick up and perform with the normal EBS access speed once all blocks have been populated. As Clint Popetz pointed out on the ec2ubuntu group, other applications might need to know when the EBS volume has been populated and is ready for high performance usage.

Though there is no API status to poll or trigger to notify you when all the blocks on the EBS volume have been populated, it occurred to me that there was a method which could be used to guarantee that you are waiting until the EBS volume is ready: simply request all of the blocks from the volume and throw them away.

Here’s a simple command which reads all of the blocks on an EBS volume attached to device /dev/xvdX or /dev/sdX (substitute your device name) and does nothing with them:

sudo dd if=/dev/xvdX of=/dev/null bs=10M

OR

sudo dd if=/dev/sdX of=/dev/null bs=10M

By the time this command is done, you know that all of the data blocks have been copied from the EBS snapshot in S3 to the EBS volume. At this point you should be able to start using the EBS volume in your application without the high iowait you would have experienced if you started earlier. Early reports from Clint are that this method works in practice.

[Update 2012-04-02: We now have to support device names like sda1 and xvda1]

[Update 2019-08-07: New aws-cli command format]

Resizing the Root Disk on a Running EBS Boot EC2 Instance

In a previous article I described how to run an EBS boot AMI with a larger root disk size than the default. That’s fine if you know the size you want before running the instance, but what if you have an EC2 instance already running and you need to increase the size of its root disk without running a different instance?

As long as you are ok with a little downtime on the EC2 instance (a few minutes), it is possible to swap out the root EBS volume with a larger copy, without needing to start a new instance.

Let’s walk through the steps on a sample Ubuntu 16.04 LTS Xenial HVM instance. I tested this with ami-40d28157 but check for the latest AMI ids.

On the instance we check the initial size of the root file system (8 GB):

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  913M  6.5G  13% /
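
From here, the swap follows the same volume-replacement pattern used elsewhere on this page. A hedged aws-cli sketch (the instance id, volume id, size, availability zone, and /dev/sda1 root device name are placeholders) looks roughly like:

instance=i-xxxxxxxxxxxxxxxxx
oldvol=vol-xxxxxxxx          # current 8 GB root volume
az=us-east-1a                # must match the instance's availability zone

# Stop the instance, snapshot the root volume, and create a larger
# volume from that snapshot.
aws ec2 stop-instances --instance-ids $instance
snap=$(aws ec2 create-snapshot --volume-id $oldvol \
  --query SnapshotId --output text)
# (wait for the snapshot to complete)
newvol=$(aws ec2 create-volume --availability-zone $az \
  --snapshot-id $snap --size 20 --query VolumeId --output text)

# Swap the root volumes and start the instance again.
aws ec2 detach-volume --volume-id $oldvol
aws ec2 attach-volume --volume-id $newvol \
  --instance-id $instance --device /dev/sda1
aws ec2 start-instances --instance-ids $instance

# On recent Ubuntu releases, cloud-init grows the root partition and file
# system on first boot; if not, grow them manually (e.g., growpart + resize2fs).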

Three Ways to Protect EC2 Instances from Accidental Termination and Loss of Data

Here are a few little-publicized benefits that were launched with Amazon EC2’s new EBS boot instances: you can lock them against accidental termination; you can prevent termination even when you try to shut down the server from inside the instance; and you can automatically preserve your data storage when they are terminated.

In discussions with other EC2 users, I’ve found that a feeling of near panic is common when you go to terminate an instance and check very carefully to make sure that you are deleting the right one and not an active production system. Slightly less common but even worse is the feeling of dread when you realize you just casually terminated the wrong EC2 instance.

It is always recommended to set up your AWS architecture so that you are able to restart production systems on EC2 easily, as they could, in theory, be lost at any point due to hardware failure or other actions. However, new features released with the EBS boot instances help reduce the risks associated with human error.

The following examples will demonstrate using the EC2 API command line tools ec2-run-instances, ec2-modify-instance-attribute, and ec2-terminate-instances. Since AWS is Amazon’s “web service,” all of these features are also available through the API and should become available (if they aren’t already) in other tools, packages, and interfaces using the web service API.

  1. Shutdown Behavior

First, let’s look at what happens when you run a command like the following in an EC2 instance:

sudo shutdown -h now
# or, equivalently and much easier to type:
sudo halt

Using the legacy S3 based AMIs, either of the above terminates the instance and you lose all local and ephemeral storage (boot disk and /mnt) forever. Hope you remembered to save the important stuff elsewhere!

A shutdown from within an EBS boot instance, by default, will initiate a “stop” instead of a “terminate”. This means that your instance is not in a running state and not getting charged, but the EBS volumes still exist and you can “start” the same instance id again later, losing nothing.

You can explicitly set or change this with the --instance-initiated-shutdown-behavior option in ec2-run-instances. For example:

ec2-run-instances \
  --instance-initiated-shutdown-behavior stop \
  [...]

This is the first safety precaution and, as mentioned, should already be built in, though it doesn’t hurt to document your intentions with an explicit option.

  2. Delete on Termination

Though EBS volumes created and attached to an instance at instantiation are preserved through a “stop”/“start” cycle, by default they are destroyed and lost when an EC2 instance is terminated. This behavior can be changed with a delete-on-termination boolean value buried in the documentation for the --block-device-mapping option of ec2-run-instances.

Here is an example that says “Don’t delete the root EBS volume when this instance is terminated”:

ec2-run-instances \
  --block-device-mapping /dev/sda1=::false \
  [...]

If you are associating other EBS snapshots with the instance at run time, you can also specify that those created EBS volumes should be preserved past the lifetime of the instance:

  --block-device-mapping /dev/sdh=SNAPSHOTID::false

When you use these options, you’ll need to manually clean up the EBS volume(s) if you no longer want to pay for the storage costs after an instance is gone.

Note: EBS volumes attached to an instance after it is already running are, by default, left alone on termination (i.e., not deleted). The default rules are: if the EBS volume was created as part of creating the instance, then terminating the instance deletes the volume; if you created the volume explicitly, then you must delete it explicitly.

  3. Disable Termination

This is my favorite new safety feature. Years ago, I asked Amazon for the ability to lock an EC2 instance from being accidentally terminated, and with the launch of EBS boot instances, this is now possible. Using ec2-run-instances, the key option is:

ec2-run-instances \
  --disable-api-termination \
  [...]

Now, if you try to terminate the instance, you will get an error like:

Client.OperationNotPermitted: The instance 'i-XXXXXXXX' may not be terminated.
Modify its 'disableApiTermination' instance attribute and try again.

Yay!

To unlock the instance and allow termination through the API, use a command like:

ec2-modify-instance-attribute \
  --disable-api-termination false \
  INSTANCEID

Then end it all with:

ec2-terminate-instances INSTANCEID

Oh, wait! Did you make sure you unlocked and terminated the right instance?! :)

Note: disableApiTermination is also available when you run S3 based AMIs today, but since the instance can still be terminated from inside (shutdown/halt), I am moving towards EBS based instances for all-around security.

Put It Together

Here’s a command line I just used to start up an EC2 instance of an Ubuntu 9.10 Karmic EBS boot AMI which I intend to use as a long-term production server with strong uptime and data safety requirements. I wanted to add all the protection available:

ec2-run-instances \
  --key $keypair \
  --availability-zone $availabilityzone \
  --user-data-file $startupscript \
  --block-device-mapping /dev/sda1=::false \
  --block-device-mapping /dev/sdh=$snapshotid::false \
  --instance-initiated-shutdown-behavior stop \
  --disable-api-termination \
  $amiid

With regular EBS snapshots of the volumes, copies to off-site backups, and documented/automated restart processes, I feel pretty safe.

Runtime Modification

If you have a valuable running EC2 instance, but forgot to specify the above options to protect it when you started it, or you are now ready to turn a test system into a production system, you can still lock things after the fact using the ec2-modify-instance-attribute command (or equivalent API call).

For example:

ec2-modify-instance-attribute --disable-api-termination true INSTANCEID
ec2-modify-instance-attribute --instance-initiated-shutdown-behavior stop INSTANCEID
ec2-modify-instance-attribute --block-device-mapping /dev/sda1=:false INSTANCEID
ec2-modify-instance-attribute --block-device-mapping /dev/sdh=:false INSTANCEID

Notes:

  • Only one type of option can be specified with each invocation.

  • The --disable-api-termination option has no argument value when used in ec2-run-instances, but it takes a value of true or false in ec2-modify-instance-attribute.

  • You don’t specify the snapshot id when changing the delete-on-terminate state of an attached EBS volume.

  • You may change the delete-on-terminate to “true” for an EBS volume you created and attached after the instance was running. By default it will not be deleted since you created it.

  • The above --block-device-mapping option requires recent ec2-api-tools to avoid a bug.

You can find out the current state of the options using ec2-describe-instance-attribute, which only takes one option at a time:

ec2-describe-instance-attribute --disable-api-termination INSTANCEID
ec2-describe-instance-attribute --instance-initiated-shutdown-behavior INSTANCEID
ec2-describe-instance-attribute --block-device-mapping INSTANCEID

Unfortunately, the block-device-mapping output does not currently show the delete-on-termination value, but thanks to Andrew Lusk for pointing out in a comment below that it is available through the API. Here’s a hack which extracts the information from the debug output:

ec2-describe-instance-attribute -v --block-device-mapping INSTANCEID | 
  perl -0777ne 'print "$1\t$2\t$3\n" while 
  m%<deviceName>(.*?)<.*?<volumeId>(.*?)<.*?<deleteOnTermination>(.*?)<%sg'

While we’re on the topic of EC2 safety, I should mention the well known best practice of separating development and production systems by using a different AWS account for each. Amazon lets you create and use as many accounts as you’d like even with the same credit card as long as you use a unique email address for each. Now that you can share EBS snapshots between accounts, this practice is even more useful.

What additional safety precautions do you take with your EC2 instances and data?

[Update 2011-09-13: Corrected syntax for modifying block-device-mapping on running instance (only one “:")]

ec2-consistent-snapshot release 0.1-9

Thanks to everybody who submitted bug reports and feature requests for ec2-consistent-snapshot, software which can be used to create consistent EBS snapshots on Amazon EC2 especially for use with XFS and/or MySQL.

A new release (0.1-9) has been published to the Alestic PPA. This release provides the following fixes and enhancements:

  • Read MySQL “host” parameter from .my.cnf if provided. (closes: bug#485978)

  • Support quoted values in .my.cnf (closes: bug#454184)

  • Require Net::Amazon::EC2 version 0.11 or higher. (closes: bug#490686)

  • Require xfsprogs package. (closes: bug#493420)

  • Replace “/etc/init.d/mysql stop” with “mysqladmin shutdown”. (closes: bug#497557)

  • Document --description option which was added earlier. (closes: bug#487692)

If you already have ec2-consistent-snapshot installed, you can upgrade using commands like:

sudo apt-get update
sudo apt-get install ec2-consistent-snapshot

If you don’t yet have the Alestic PPA set up, run these commands first:

codename=$(lsb_release -cs)
echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu $codename main"|
  sudo tee /etc/apt/sources.list.d/alestic-ppa.list    
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571

If you find bugs or think of feature requests, please submit them.