Increasing Root Disk Size of an "EBS Boot" AMI on EC2

This article is about running a new EC2 instance with a larger boot volume than the default. You can also resize a running EC2 instance.

Amazon EC2’s new EBS Boot feature not only provides persistent root disks for instances, but also supports root disks larger than the 10GB limit that applied to S3-based AMIs.

Since EBS boot AMIs are implemented by creating a snapshot, the AMI publisher controls the default size of the root disk through the size of the snapshot. There are a number of factors which go into deciding the default root disk size of an EBS boot AMI and some of them conflict.

On the one hand, you want to give users enough free space to run their applications, but on the other hand, you don’t want to increase the cost of running the instance too much. EBS volumes run $0.10 to $0.11 per GB per month depending on the region, or about $10/month for 100GB and $100/month for 1TB.

I suspect the answer to this problem might be for AMI publishers to provide a reasonably low default, perhaps 10GB as per the old standard, or 15GB following in the footsteps of Amazon’s first EBS Boot AMIs. This would add $1.00 to $1.50 per month to the cost of running the instance, which seems negligible for most purposes. Note: There are also IO charges and charges for EBS snapshots, but those are affected more by usage and less by the size of the original volume.

For applications where the EBS boot AMI’s default size is not sufficient, users can increase the root disk size at run time all the way up to 1 TB. Here’s a quick overview of how to do this.

Example

The following demonstrates how to run Amazon’s getting-started-with-ebs-boot AMI, increasing the root disk from the default of 15GB up to 100GB.

Before we start, let’s check to see the default size of the root disk in the target AMI and what the device name is:

$ ec2-describe-images ami-b232d0db
IMAGE	ami-b232d0db	amazon/getting-started-with-ebs-boot	amazon	available	public		i386	machine	aki-94c527fd	ari-96c527ff		ebs
BLOCKDEVICEMAPPING	/dev/sda1		snap-a08912c9	15	

We can see the EBS snapshot id snap-a08912c9 and the fact that it is 15 GB, attached to /dev/sda1. If we start an instance of this AMI, it will have a 15 GB EBS volume as the root disk, and we won’t be able to change the size once the instance is running.

Now let’s run the EBS boot AMI, but we’ll override the default size, specifying 100 GB for the root disk device (/dev/sda1 as seen above):

ec2-run-instances \
  --key KEYPAIR \
  --block-device-mapping /dev/sda1=:100 \
  ami-b232d0db
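
To confirm from outside the instance that the attached volume really is 100 GB, we can query the EC2 API tools. The instance and volume ids below are placeholders; substitute the ones reported for your own instance:

# Placeholders: use the ids from your own ec2-run-instances output
ec2-describe-instances i-12345678   # the block device mapping line shows the volume id for /dev/sda1
ec2-describe-volumes vol-12345678   # the size column should now read 100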

The EBS volume mapped to the new instance is indeed 100 GB, but when we ssh to the instance and check the root file system size, we notice that it is only showing 15 GB:

$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              15G  1.6G   13G  12% /

There’s one step left. We need to resize the file system so that it fills up the entire 100 GB EBS volume. Here’s the magic command for ext3. In my early tests it took 2-3 minutes to run. [Update: For Ubuntu 11.04 and later, this step is performed automatically when the AMI is booted and you don’t need to run it manually.]

$ sudo resize2fs /dev/sda1
resize2fs 1.40.4 (31-Dec-2007)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 7
Performing an on-line resize of /dev/sda1 to 26214400 (4k) blocks.
The filesystem on /dev/sda1 is now 26214400 blocks long.

Finally, we can check to make sure that we’re running on a bigger file system:

$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              99G  1.6G   92G   2% /

Note: df reports “99G” rather than “100G” because ext3 reserves some of the volume for file system metadata. The resize2fs output above confirms the file system spans the full volume: 26214400 blocks × 4 KB per block = 100 GiB.

XFS

If it were possible to create an EBS boot AMI with an XFS root file system, the resizing would be near instantaneous using commands like the following:

sudo apt-get update && sudo apt-get install -y xfsprogs
sudo xfs_growfs /

The Ubuntu kernels built for EC2 by Canonical have XFS support built in, so XFS-based EBS boot AMIs might be possible. This would also allow for more consistent EBS snapshots.
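
A quick way to check whether a running kernel supports XFS (a generic check, not specific to any particular AMI):

# XFS is usable if it appears in the kernel's list of file systems
grep xfs /proc/filesystems || sudo modprobe xfs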

Toolset

Make sure you are running the latest version of the ec2-run-instances command. The current version can be determined with the command:

ec2-version

To use EBS boot features, the version should be at least 1.3-45772.

[Updated 2009-12-11: Switched instructions to the default us-east-1 region since all regions now support this feature.]
[Updated 2011-12-13: Note that file system resize is done automatically on boot in Ubuntu 11.04 and later.]

Encrypting Ephemeral Storage and EBS Volumes on Amazon EC2

Over the years, Amazon has repeatedly recommended that customers who care about the security of their data consider encrypting information stored on disks, whether ephemeral storage (/mnt) or EBS volumes. They recommend this even though they take pains to ensure that disk blocks are wiped between uses by different customers, and they enforce policies restricting access to disks even by their own employees.

There are a few levels where encryption can take place:

  1. File level. This includes tools like GnuPG, freely available on Ubuntu in the gnupg package. If you use this approach, make sure that you don’t store the unencrypted information on the disk before encrypting it (see the short GnuPG example after this list).

  2. File system level. This includes useful packages like encfs which transparently encrypt files before saving to disk, presenting the unencrypted contents in a virtual file system. This can even be used on top of an s3fs file system letting you store encrypted data on S3 with ease.

  3. Block device level. You can place any file system you’d like on top of the encrypted block device, and neither your application nor your file system ever realizes that the hardware disk never sees unencrypted data.
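
As a minimal illustration of the file-level approach from item 1 (the file name is a placeholder):

# Encrypt a single file with a passphrase; secrets.txt is a placeholder name
gpg --symmetric --cipher-algo AES256 secrets.txt   # writes secrets.txt.gpg
shred -u secrets.txt                               # destroy the plaintext copy

# Later, recover the contents
gpg --output secrets.txt --decrypt secrets.txt.gpg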

The rest of this article presents a simple way to set up encryption at the block device level using cryptsetup/LUKS. It has been tested on the 32-bit Ubuntu 9.04 Jaunty server AMI listed on https://alestic.com and should work on other Ubuntu AMIs, and even other distros, with minor changes.

This walkthrough uses the /mnt ephemeral storage, but you can replace /mnt and /dev/sda2 with the appropriate mount point and device for 64-bit instance types or EBS volumes.

Setup

Install tools and kernel modules:

sudo apt-get update
sudo apt-get install -y cryptsetup xfsprogs
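# Load the needed kernel modules now, and register them to load on future boots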
for i in sha256 dm_crypt xfs; do 
  sudo modprobe $i
  echo $i | sudo tee -a /etc/modules
done

Before you continue, make sure there is nothing valuable on /mnt because we’re going to replace it!

sudo umount /mnt
sudo chmod 000 /mnt

Encrypt the disk and create your favorite file system on it:

sudo luksformat -t xfs /dev/sda2
sudo cryptsetup luksOpen /dev/sda2 crypt-sda2

Remember your passphrase! It is not recoverable!

Update /etc/fstab and replace the /mnt line (or create a new line for an EBS volume):

fstabentry='/dev/mapper/crypt-sda2 /mnt xfs noauto 0 0'
sudo perl -pi -e "s%^.* /mnt .*%$fstabentry%" /etc/fstab

Mount the file system on the encrypted block device:

sudo mount /mnt

You’re now free to place files on /mnt, knowing that the content will be encrypted before it is written to the hardware disk.

After reboot, /mnt will appear empty until you re-mount the encrypted partition, entering your passphrase:

sudo cryptsetup luksOpen /dev/sda2 crypt-sda2
sudo mount /mnt

Notes

See “man cryptsetup” for info on adding keys and getting information from the LUKS disk header.
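
For example, two commonly used subcommands, run against the same device as above:

# Inspect the LUKS header: cipher, UUID, and key slots in use
sudo cryptsetup luksDump /dev/sda2

# Add a second passphrase to a free key slot (prompts for an existing passphrase first)
sudo cryptsetup luksAddKey /dev/sda2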

It is possible to auto-mount the encrypted disk on reboot if you are willing to put your passphrase in the root partition (which largely defeats the point of encryption). See the documentation on crypttab and consider adding a line like:

crypt-sda2 /dev/sda2 /PASSPHRASEFILE luks
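
Note that the fstab entry created earlier uses the noauto option, so for a fully automatic mount at boot you would also need to change that line, e.g.:

# /etc/fstab: drop "noauto" so the file system mounts at boot
/dev/mapper/crypt-sda2 /mnt xfs defaults 0 0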

Study the cryptsetup documentation carefully so that you understand what is going on. Keeping your data private is important, but it’s also important that you know how to get it back in the case of problems.

This article does not attempt to cover all of the possible security considerations you might need to take into account for data leakage on disks. For example, sensitive information might be stored in /tmp, /etc, or log files on the root disk. If you have swap enabled, anything in memory could be saved in the clear to disk whenever the operating system feels like it.
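
One common mitigation for the swap risk is to encrypt swap with a throwaway random key generated fresh at each boot. Here is a sketch using crypttab; the swap device name is a placeholder for your actual swap partition:

# /etc/crypttab: fresh random key for swap on every boot (device is a placeholder)
cryptswap /dev/sda3 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

# /etc/fstab: use swap via the encrypted mapping
/dev/mapper/cryptswap none swap sw 0 0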

How do you solve your data security challenges on EC2?

This article was based on a post made on the EC2 Ubuntu group.