Upper Limits on Number of Amazon EC2 Instances by Region

[Update: As predicted, these numbers are already out of date and Amazon has added more public IP address ranges for use by EC2 in various regions.]

Each standard Amazon EC2 instance has a public IP address. This is true for normal instances when they are first brought up and for instances which have had elastic IP addresses assigned to them. Your EC2 instance still has a public IP address even if you have configured the security group so that it cannot be contacted from the Internet, which happens to be the default setting for security groups.

Amazon has made public the EC2 IP address ranges that may be in use for each region.

From this information, we can calculate the absolute upper limit for the number of concurrently running standard EC2 instances that could possibly be supported in each region. At the time of this writing I calculate the hard upper limits to be:
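As a sketch of the arithmetic (the CIDR ranges below are illustrative placeholders, not Amazon's current published list), the ceiling is simply the sum of the sizes of the region's published CIDR blocks:

    #!/bin/bash
    # Rough ceiling on concurrently running public-IP instances in a region,
    # computed by summing the sizes of the region's published CIDR blocks.
    # NOTE: the CIDR list below is an illustrative placeholder.
    cidrs="72.44.32.0/19 67.202.0.0/18 75.101.128.0/17"
    total=0
    for cidr in $cidrs; do
      prefix=${cidr#*/}                        # e.g. 19
      total=$((total + (1 << (32 - prefix))))  # 2^(32-prefix) addresses in the block
    done
    echo "Absolute upper limit of public IP addresses: $total"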

Unavailable Availability Zones on Amazon EC2

I’m taking a class today by Florian Drescher about using Chef with EC2, and Florian mentioned that he noticed that one of the four availability zones in us-east-1 is not currently available for starting new instances.

I’ve confirmed this in my own AWS accounts and found that one of the three availability zones in us-west-1 is also unavailable in addition to one of the four availability zones in us-east-1.
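To compare what different accounts can see, the zones visible to an account can be listed per region, for example with the classic EC2 API tools (run once under each account's credentials):

    # Lists the availability zones (and their state) visible to the current credentials.
    ec2-describe-availability-zones --region us-east-1
    ec2-describe-availability-zones --region us-west-1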

Here’s the error I get when I try to start an instance in that availability zone using an old AWS account:

Client.Unsupported: The requested Availability Zone is no longer supported. Please retry your request by not specifying an Availability Zone or choosing us-east-1d, us-east-1a, us-east-1b.

When I use an AWS account I created two days ago, I don’t even see the fourth availability zone at all:

Desktop AMI login security with NX

Update 2011-08-04: Amazon Security did more research and investigated the desktop AMIs. They confirmed that their software had incorrectly flagged the AMIs (a false positive) and that they caught it in time to stop the warning emails from going out to users.

These AMIs include the NX software for remote desktop operation, and the way that NX implements login authentication with ssh is convoluted but secure. I can easily understand why it might have looked like there were potential problems with the AMIs, and I’m glad things turned out well.

As always, hats off to the hard-working folks at AWS, and thanks for all the great products and services.

Original message:

If Amazon AWS/EC2 contacts you with a warning that one of my AMIs that you are running contains a back-door security hole involving ssh keys or user passwords, please don’t be alarmed.

Updated EBS boot AMIs for Ubuntu 8.04 Hardy on Amazon EC2

For folks still using the old, reliable Ubuntu 8.04 LTS Hardy from 2008, Canonical has released updated AMIs for use on Amazon EC2. Read Scott Moser’s announcement on the ec2ubuntu Google group.

Though Canonical publishes both EBS boot and instance-store AMIs for recent Ubuntu releases, they only publish instance-store AMIs for the older Ubuntu 8.04, so…

Creating Public AMIs Securely for EC2

Last week, Amazon published a tutorial about best practices for creating public AMIs for use on EC2:

How To Share and Use Public AMIs in A Secure Manner

Though the general principles put forth in the tutorial are good, some of the specifics about how to accomplish those principles are flawed. (Comments here relate to the article update from June 7, 2011 3:45 AM GMT.)

The primary message of the article is that you should not publish private information on a public AMI. Excellent advice!

Unfortunately, the article seems to recommend, or at least to assume, that you are building the public AMI by taking a snapshot of a running instance. Though this method seems like an easy way to build an AMI and is fine for private AMIs, it is a dangerous approach for public AMIs because of how difficult it is to identify all the private information on a running system and to clear it so that none of it leaks into the public AMI.
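To illustrate why this is hard, here is an intentionally incomplete sketch of the kind of private data that would need to be scrubbed from a running Ubuntu system before imaging it for public use (paths are typical, not exhaustive):

    # Remove SSH public keys that would let the AMI publisher log in to users' instances.
    sudo rm -f /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys
    # Remove shell history that may contain passwords or secret keys.
    sudo rm -f /root/.bash_history /home/*/.bash_history
    # Remove SSH host keys so every instance generates its own
    # (many AMIs regenerate these on first boot; verify that yours does).
    sudo rm -f /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub
    # ...plus application credentials, logs, caches, and temporary files; the difficulty
    # of being sure you caught everything is exactly the problem with this approach.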

EC2 Reserved Instance Offering IDs Change Over Time

This article is a followup to Matching EC2 Availability Zones Across AWS Accounts written back in 2009. Please read that article first in order to understand any of what I write here.

Since I wrote that article, Amazon has apparently changed the reserved instance offering ids at least once. I haven’t been tracking this, so I don’t know if this was a one-time thing or if the offering ids change on a regular basis.

Interestingly, the offering ids still seem to match across accounts and still map to different availability zones, so perhaps they can still be used to map the true, underlying availability zones as the original article proposes.
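For reference, the current offerings and their ids can be listed with the classic EC2 API tools, once per account, and then compared across accounts:

    # List reserved instance offerings (including their offering ids) in a region.
    ec2-describe-reserved-instances-offerings --region us-east-1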

To document for posterity and comparison, here are my 2009 availability zone offering ids as calculated by the procedure in the above-mentioned article:

My Experience With the EC2 Judgment Day Outage

Amazon designs availability zones so that it is extremely unlikely that a single failure will take out multiple zones at once. Unfortunately, whatever happened last night seemed to cause problems in all us-east-1 zones for one of my accounts.

Of the 20+ full-time EC2 instances that I am responsible for (including my startup and my personal servers), only one instance was affected. As it turns out, it was the one that hosts Alestic.com along with some other services that are important to me personally.

Here are some steps I took in response to the outage:

Alestic Git Server (alpha testing)

I’m working on making it easy to start a centralized Git server, running on an Amazon EC2 instance under your control, with an unlimited number of private Git repositories and unlimited users. I need people who can help test and provide feedback so I can improve the experience and capabilities.

Background

I’ve used a number of different services to host open source software including Launchpad.net, Google Code, and GitHub. Recently, however, I found the need to host a number of different private repositories and I decided to go with the flow and use Git as the software to manage the repositories.

After investigating a dozen Git private repository hosting options, I decided that they were too limiting for my purposes: number of repositories, number of users, size of the repositories, and/or cost. I also had an urge to keep a bit more control over my own repositories by managing them on my own server[1].

Amazon EC2 Tokyo (ap-northeast-1) and Ubuntu AMIs

Amazon Web Services has launched a new EC2 region in Tokyo named ap-northeast-1. Canonical has released new AMIs in this region for the standard Ubuntu releases they are supporting in other regions including AMIs for:

  • Ubuntu 10.10 Maverick (EBS boot and instance-store)
  • Ubuntu 10.04 LTS Lucid (EBS boot and instance-store)
  • Ubuntu 9.10 Karmic (instance-store only)
  • Ubuntu 8.04 LTS Hardy (instance-store only)

For convenient lookup, the table at the top of Alestic.com reflects the Ubuntu AMIs from Canonical for this and other EC2 regions.

Fixing Files on the Root EBS Volume of an EC2 Instance

You can examine and edit files on the root EBS volume of an EC2 instance even if you are in what you would consider a disastrous situation, like:

  • You lost your ssh key or forgot your password

  • You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it

  • Your long-running instance is hung for some reason, cannot be contacted, and fails to boot properly

  • You need to recover files off of the instance but cannot get to it

On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.

A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.

The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.

In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:
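A rough sketch of that flow using the classic EC2 command line tools, where the instance IDs, volume ID, and device names are placeholders and the helper instance must be in the same availability zone as the volume:

    BROKEN_INSTANCE=i-11111111   # instance with the broken root volume (placeholder)
    HELPER_INSTANCE=i-22222222   # temporary instance in the same availability zone (placeholder)
    ROOT_VOLUME=vol-33333333     # root EBS volume of the broken instance (placeholder)

    # Stop (do not terminate!) the broken instance and move its root volume to the helper.
    ec2-stop-instances $BROKEN_INSTANCE
    ec2-detach-volume  $ROOT_VOLUME
    ec2-attach-volume  $ROOT_VOLUME -i $HELPER_INSTANCE -d /dev/sdf

    # On the helper instance: mount the volume, then inspect and repair the files.
    sudo mkdir -p /mnt/recovery
    sudo mount /dev/sdf /mnt/recovery           # may appear as /dev/xvdf on newer kernels
    sudo visudo -f /mnt/recovery/etc/sudoers    # ...or whatever fixing is needed...
    sudo umount /mnt/recovery

    # Move the volume back to its original device name and restart the instance.
    ec2-detach-volume $ROOT_VOLUME
    ec2-attach-volume $ROOT_VOLUME -i $BROKEN_INSTANCE -d /dev/sda1  # assuming /dev/sda1 was the root device
    ec2-start-instances $BROKEN_INSTANCE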

New Release of ec2-consistent-snapshot and Screencast by Ahmed Kamal

ec2-consistent-snapshot is a tool that uses the Amazon EC2 API to initiate a snapshot of an EBS volume with some additional work to help ensure that an XFS file system and/or MySQL database are in a consistent state on that snapshot.

Ahmed Kamal pointed out to me yesterday that we can save lots of trouble installing ec2-consistent-snapshot by adding a dependency on the new libnet-amazon-ec2-perl package in Ubuntu instead of forcing people to install the Net::Amazon::EC2 Perl module from CPAN (not easy for the uninitiated).
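For example, on Ubuntu releases that carry the package, the Perl dependency can be satisfied with apt instead of CPAN:

    sudo apt-get update
    sudo apt-get install -y libnet-amazon-ec2-perl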

I released a new version of ec2-consistent-snapshot which has this new dependency and updated documentation. Installing this software on Ubuntu 10.04 Lucid, 10.10 Maverick, and the upcoming Natty release is now as easy as:

A Simpler Way To Replace Instance Hardware on EC2

A while back I wrote an article describing a way to move the root EBS volume from one running instance to another. I pitched this as a way to replace the hardware for your instance in the event of failures.

Since then, I have come to the realization that there is a much simpler method to move your instance to new hardware, and I have been using this new method for months when I run into issues that I suspect might be attributed to underlying hardware issues.

This method is so simple that I am almost embarrassed about having written the previous article, but I’ll point out below at least one benefit that still exists with the more complicated approach.

I now use this process as the second step, after a simple reboot, when I am experiencing odd problems like not being able to connect to a long-running EC2 instance. (The zeroth step is to start running and setting up a replacement instance in the event that steps one and two do not produce the desired results.)
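For EBS boot instances, one well-known way to land on different underlying hardware is a stop/start cycle (a plain reboot keeps the instance on the same hardware). A minimal sketch, with a placeholder instance ID:

    INSTANCE=i-12345678              # placeholder; must be an EBS boot instance
    ec2-stop-instances  $INSTANCE
    # ...wait for the instance to reach the "stopped" state...
    ec2-start-instances $INSTANCE
    # Note: the public and private IP addresses usually change across a stop/start
    # unless an elastic IP address is associated with the instance.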

Here goes…

Moving an EC2 Instance to a Larger (or Smaller) Instance Type

When you discover that the entry-level t1.micro instance size is simply not cutting it for your growing application needs, you may want to try upgrading it to a larger instance type, perhaps an m1.small or even a c1.medium.

Instead of starting a new instance and having to configure it from scratch, you may be able to simply resize the existing instance by asking Amazon to move it to better hardware for you. Of course, since this is AWS, you don’t have to actually talk to anybody; just type a few commands and the job is done automatically.
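A sketch of the kind of commands involved, using the classic EC2 API tools; the instance ID and target type are placeholders, the instance must be an EBS boot instance, and it is unavailable while stopped (see the constraints below):

    INSTANCE=i-12345678                  # placeholder: an EBS boot instance
    ec2-stop-instances $INSTANCE         # stop (not terminate) the instance
    # ...wait for the "stopped" state...
    ec2-modify-instance-attribute $INSTANCE --instance-type c1.medium
    ec2-start-instances $INSTANCE        # comes back up as the new instance type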

Constraints

Before you try this approach, note that there are some conditions:

Logging user-data Script Output on EC2 Instances

Real-time access to user-data script output

The early implementations of user-data scripts on EC2 automatically sent all output of the script (stdout and stderr) to /var/log/syslog as well as to the EC2 console output, to help monitor the startup progress and to debug when things went wrong.

The recent Ubuntu AMIs still send user-data script output to the console output, so you can view it remotely, but it is no longer available in syslog on the instance. The console output is only updated a few minutes after the instance boots, reboots, or terminates, which forces you to wait to see the user-data script output and misses any output produced after the console output snapshot is taken.

Here is an example written in bash that demonstrates how to send all user-data script stdout and stderr automatically, transparently, and simultaneously to three locations:
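A sketch of the general technique (the log file name is illustrative): route the script's combined stdout and stderr through tee into a file, into syslog via logger, and onto the console device.

    #!/bin/bash -ex
    # From this point on, send all stdout and stderr to three places:
    #   1. /var/log/user-data.log            (written by tee)
    #   2. syslog, tagged "user-data"        (written by logger)
    #   3. the console device /dev/console   (logger -s echoes each line to stderr,
    #                                         which is redirected to the console)
    exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1

    echo "user-data script started at $(date)"
    # ...the rest of the user-data script goes here...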

Boot EC2 Instance With ssh on Port 80

In a thread on the EC2 forum, Marko describes a situation where an outbound firewall prevents ssh connections to port 22, the default sshd port on EC2 instances.

In that thread, Shlomo Swidler proposes creating a user-data script that changes sshd to listen on a port the firewall permits.

Here’s a simple example of a user-data script that does just that. Most outbound firewalls allow traffic to port 80 (web/HTTP), so I use it in this example.

The first step is to create a file containing the user-data script:
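A minimal sketch of such a script, assuming an Ubuntu AMI whose default sshd_config already listens on port 22 (so appending a second Port line adds port 80 rather than replacing 22) and whose ssh service is named "ssh":

    #!/bin/bash -ex
    # user-data script: make sshd listen on port 80 in addition to port 22.
    # The instance's security group must also allow inbound TCP port 80.
    echo "Port 80" >> /etc/ssh/sshd_config
    service ssh restart

Pass the file as user data when launching the instance (for example, with the --user-data-file option of ec2-run-instances), and then connect with ssh -p 80.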