Increasing Root Disk Size of an "EBS Boot" AMI on EC2

This article is about running a new EC2 instance with a larger boot volume than the default. You can also resize a running EC2 instance.

Amazon EC2’s new EBS Boot feature not only provides persistent root disks for instances, but also supports root disks larger than the previous 10 GB limit of S3-based AMIs.

Since EBS boot AMIs are implemented by creating a snapshot, the AMI publisher controls the default size of the root disk through the size of the snapshot. There are a number of factors which go into deciding the default root disk size of an EBS boot AMI and some of them conflict.

On the one hand, you want to give users enough free space to run their applications, but on the other hand, you don’t want to increase the cost of running the instance too much. EBS volumes run $0.10 to $0.11 per GB per month depending on the region, or about $10/month for 100GB and $100/month for 1TB.

I suspect the answer to this problem might be for AMI publishers to provide a reasonably low default, perhaps 10 GB as per the old standard, or 15 GB following in the footsteps of Amazon’s first EBS boot AMIs. This would add $1.00 to $1.50 per month to the cost of running the instance, which seems negligible for most purposes. Note: There are also IO charges and charges for EBS snapshots, but those are affected more by usage and less by the size of the original volume.

For applications where the EBS boot AMI’s default size is not sufficient, users can increase the root disk size at run time all the way up to 1 TB. Here’s a quick overview of how to do this.

Example

The following demonstrates how to run Amazon’s getting-started-with-ebs-boot AMI, increasing the root disk from the default of 15 GB up to 100 GB.

Before we start, let’s check to see the default size of the root disk in the target AMI and what the device name is:

$ ec2-describe-images ami-b232d0db
IMAGE	ami-b232d0db	amazon/getting-started-with-ebs-boot	amazon	available	public		i386	machine	aki-94c527fd	ari-96c527ff		ebs
BLOCKDEVICEMAPPING	/dev/sda1		snap-a08912c9	15	

We can see the EBS snapshot id snap-a08912c9 and the fact that it is a 15 GB volume attached to /dev/sda1. If we start an instance of this AMI, it will have a 15 GB EBS volume as the root disk, and we won’t be able to change the size once it’s running.

Now let’s run the EBS boot AMI, but we’ll override the default size, specifying 100 GB for the root disk device (/dev/sda1 as seen above). In the block device mapping argument, the empty field before the colon tells EC2 to use the snapshot from the AMI, and 100 is the desired volume size in GB:

ec2-run-instances \
  --key KEYPAIR \
  --block-device-mapping /dev/sda1=:100 \
  ami-b232d0db

If we check the EBS volume mapped to the new instance, we’ll see that it is 100 GB. For example, we can look up the instance’s block device mapping and then describe the volume; the instance and volume ids below are placeholders for the values returned in your own account:
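
$ ec2-describe-instances i-xxxxxxxx | grep /dev/sda1
$ ec2-describe-volumes vol-xxxxxxxx

But when we ssh to the instance and check the root file system size, we’ll notice that it is only showing 15 GB: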

$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              15G  1.6G   13G  12% /

There’s one step left. We need to resize the file system so that it fills up the entire 100 GB EBS volume. Here’s the magic command for ext3. In my early tests it took 2-3 minutes to run. [Update: For Ubuntu 11.04 and later, this step is performed automatically when the AMI is booted and you don’t need to run it manually.]

$ sudo resize2fs /dev/sda1
resize2fs 1.40.4 (31-Dec-2007)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 7
Performing an on-line resize of /dev/sda1 to 26214400 (4k) blocks.
The filesystem on /dev/sda1 is now 26214400 blocks long.

Finally, we can check to make sure that we’re running on a bigger file system:

$ df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              99G  1.6G   92G   2% /

Note: The output reflects “99” instead of “100” because some of the space is consumed by file system metadata such as inode tables. The resize2fs output above shows that the file system itself spans the full volume: 26214400 blocks × 4 KiB = 100 GiB.

XFS

If it were possible to create an EBS boot AMI with an XFS root file system, then the resizing would be near instantaneous, using commands like the following:

sudo apt-get update && sudo apt-get install -y xfsprogs   # install the XFS user-space tools
sudo xfs_growfs /                                         # grow the mounted file system to fill the volume

The Ubuntu kernels built for EC2 by Canonical have XFS support built in, so XFS-based EBS boot AMIs might be possible. This would also allow for more consistent EBS snapshots, since XFS can suspend writes while a snapshot is initiated. Here’s a rough sketch of that idea, assuming an XFS file system mounted at /vol and a hypothetical volume id:
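
sudo xfs_freeze -f /vol             # flush and suspend writes to the XFS file system
ec2-create-snapshot vol-xxxxxxxx    # initiate the EBS snapshot while writes are suspended
sudo xfs_freeze -u /vol             # resume writes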

Toolset

Make sure you are running the latest version of the ec2-run-instances command. The current version can be determined with the command:

ec2-version

To use EBS boot features, the version should be at least 1.3-45772.

[Updated 2009-12-11: Switch instructions to default us-east-1 since all regions now support this feature.]
[Updated 2011-12-13: Note that file system resize is done automatically on boot in Ubuntu 11.04 and later.]

Keeping File Ownership (UIDs) Consistent when Using EBS on EC2

Persistent storage on Amazon EC2 is accomplished through the use of Elastic Block Store (EBS) volumes. EBS is basically a storage area network (SAN) and can be thought of as an on-demand, virtual, redundant hard drive plugged in to the server with super-powers like snapshot/restore.

An EBS volume can be detached from one EC2 instance and attached to another. You can create a snapshot of an EBS volume and create new volumes from the snapshot to attach to other instances. Though this flexibility provides some useful abilities, it also presents some challenges.

In particular, the files stored on the EBS volume will be owned by specific numeric UIDs (users) and GIDs (groups). When you fire up and configure a new instance, the UIDs and GIDs on the EBS volume may not exactly match the numeric ids of the users and groups on the new instance, depending on how you set it up.

For example, when you install the MySQL software, the package will generally create a new “mysql” user with the next available UID. If you don’t create the various users in exactly the same order on new instances, you may end up with your database files owned by the “postfix” user instead of the “mysql” user. It’s happened to me and I’m not the only one.
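
To see the mismatch directly, compare the numeric ids on the volume with the local user database. A minimal sketch, assuming the database files live under /vol/mysql and the UID in question is 105 (both hypothetical):

$ ls -ldn /vol/mysql    # -n shows numeric UID/GID instead of names
$ getent passwd 105     # shows which local user, if any, has that UID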

There is a discussion about this topic on the ec2ubuntu Google Group and it has also been raised on Canonical’s EC2 beta mailing list.

Here are some of the different approaches to avoiding or fixing this problem:

  1. Bundle your own AMIs and always run instances of the same AMI when attaching EBS volumes with files. This works if you already have to bundle your AMIs for other reasons, but I often recommend against AMI rebundling because of the effort involved, the lack of reproducibility, and the maintenance problems when the base image gets updated or has bugs fixed.

  2. Automate the creation of users and installation of packages in exactly the same order every time. This is likely to give you the same UID/GID values for each user, but it starts to get messy if you end up with an order mixing human users and software package users.

  3. Create all users/groups with hardcoded UIDs/GIDs before installing software packages. If you automate the creation of users and groups, you can force the “mysql” and “postfix” users to have specific UID values. Then you install the MySQL and Postfix packages and the software will use the users which already exist on the system. We ended up following this approach with our EC2 servers at CampusExplorer.com (see the first sketch after this list).

  4. Correct the ownership of files after mounting the EBS volume. This feels a bit messy to me, but it might be the only option in some cases. I must admit that I’ve done this manually a number of times, but only after finding problems like MySQL not starting because the files aren’t owned by the correct user. For example, say you needed to change files currently owned by “postfix” to be correctly owned by “mysql”:

     find /vol -user postfix -print0 | xargs -0 chown mysql
    

    If you are changing ownership of files after mounting the EBS volume, make sure you do it in an order which does not lose information. For example, if you have to swap the “postfix” and “mysql” users, you’ll need to use a temporary third UID as a placeholder (see the second sketch after this list).

  5. On the ec2ubuntu Google group it was suggested that a central authority for users and groups might be a way to solve the problem. I’ve never used this approach on Linux and am not sure how much work it would be to set up a reliable service like this on EC2.
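
Here is a minimal sketch of approach 3 on an Ubuntu instance, creating the mysql user and group with a hardcoded UID/GID before installing the package. The id value 27 and the account settings are assumptions; adjust them for your own environment:

sudo addgroup --gid 27 mysql
sudo adduser --system --uid 27 --gid 27 --home /var/lib/mysql \
  --no-create-home --disabled-password mysql
sudo apt-get install -y mysql-server   # the package reuses the existing mysql user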
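
And here is a sketch of the three-step swap mentioned in approach 4, working with numeric ids directly. The UIDs 105 (postfix), 106 (mysql), and the temporary placeholder 9999 are all assumptions:

find /vol -user 105 -print0 | xargs -0 chown 9999   # postfix’s files -> placeholder UID
find /vol -user 106 -print0 | xargs -0 chown 105    # mysql’s files -> postfix’s old UID
find /vol -user 9999 -print0 | xargs -0 chown 106   # placeholder -> mysql’s old UID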

No matter which approach you use, it might be a good idea to add some checks after you mount an EBS volume to make sure that the files are owned by the appropriate users. For example, you might verify that the mysql directory is owned by the mysql user, as in the sketch below (the path and expected owner are assumptions).
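
# complain if the mounted files are not owned by the expected user
test "$(stat -c %U /vol/mysql)" = "mysql" || echo "WARNING: /vol/mysql is not owned by mysql"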

Solving this problem is something that I have only begun to work on, so I would appreciate any comments, pointers, and solutions that you may have.

New releases of Ubuntu AMIs for Amazon EC2 2009-04-18 (XFS fixes)

New updates have been released for all* of the Ubuntu and Debian AMIs listed on:

https://alestic.com

The primary enhancements in this release are:

  • The images which were experiencing problems with XFS and the Amazon 2.6.21fc8 kernel have been fixed by installing an XFS kernel module which matches Amazon’s kernel. This includes Ubuntu Intrepid, Ubuntu Jaunty, Debian Lenny, and Debian Squeeze.

  • The Ubuntu 9.04 Jaunty image is using release candidate software. The official Jaunty release is expected April 23.

  • At the request of the Amazon security folks, ssh PasswordAuthentication has been disabled by default on the server images (see the snippet after this list). Even though the base images have passwords disabled on the root account, some folks may be creating accounts with poor passwords susceptible to attacks. The desktop images require password authentication for NX (as far as I know), so please use secure passwords.

  • The desktop images have been upgraded to a recent version of NX Free Edition software.

  • This is the last published image for Ubuntu 7.10 Gutsy. That version reached its end of life on April 18 and should not be used any more unless you really need to test something on Gutsy and won’t leave it running long (no security patches are available).
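
For reference, the change on the server images amounts to a line like the following in /etc/ssh/sshd_config; if you decide to re-enable password logins, this is the setting to change, followed by an ssh service restart:

PasswordAuthentication no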

All of the AMIs are available in both the US and European regions.

Notes:

  • The Ubuntu 6.10 Edgy, 7.04 Feisty, and 7.10 Gutsy AMIs are obsolete and unsupported. Running these images introduces a security risk, as Ubuntu no longer produces security patches for them.