New AMIs have been released for the Alestic Git Server. Major upgrade points include:
The Amazon EC2 Forum has been around since the beginning of EC2 and has always been a place where you can get your EC2 questions in front of an audience of experts and Amazon employees.
Though I’m still listed as one of the top posters (scored by questioners marking my answers as helpful) I’ve slowed down my activity on that forum due to the sheer volume of support requests, many of which are commonly asked questions or things that only Amazon can help with.
Rebooting a physical computer at your desk is much like shutting down the system and booting it back up. With Amazon EC2, rebooting an instance works much the same as on a local physical computer, but a stop/start differs in a few key ways that can cause some problems and also bring some definite benefits.
When you stop an EBS boot instance, you are giving up the physical hardware the server was running on, and EC2 is free to start somebody else’s instance there.
Your EBS boot volume (and other attached EBS volumes) are still preserved, though they aren’t really tied to a physical or virtual server. They are just associated with an instance id that isn’t running anywhere.
When you start the instance again, EC2 picks some hardware to run it on, ties in the EBS volume(s) and boots it up again.
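As a sketch, a stop/start cycle can be driven from the command line with the EC2 API tools. The instance id below is a placeholder; note that after the start, the instance generally comes back with a different public IP address unless an elastic IP is associated with it:

```shell
# Stop the EBS boot instance (placeholder instance id)
ec2-stop-instances i-12345678

# ...wait for the instance to reach the "stopped" state...

# Start it again; EC2 picks new hardware and reattaches the EBS volume(s)
ec2-start-instances i-12345678

# The public (and private) IP addresses have likely changed
ec2-describe-instances i-12345678
```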
Things that change when you stop/start include:
[Update: As predicted, these numbers are already out of date and Amazon has added more public IP address ranges for use by EC2 in various regions.]
Each standard Amazon EC2 instance has a public IP address. This is true for normal instances when they are first brought up and for instances which have had elastic IP addresses assigned to them. Your EC2 instance still has a public IP address even if you have configured the security group so that it cannot be contacted from the Internet, which happens to be the default setting for security groups.
Amazon has made public the EC2 IP address ranges that may be in use for each region.
From this information, we can calculate the absolute upper limit for the number of concurrently running standard EC2 instances that could possibly be supported in each region. At the time of this writing I calculate the hard upper limits to be:
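The calculation itself is just summing the sizes of the published CIDR blocks, since a /N block contains 2^(32-N) addresses. The ranges below are examples in the style of the published list (as the update above notes, the actual list changes over time), and not every address in a block is necessarily usable by instances, which is why this is only an upper limit:

```shell
# Sum the number of IP addresses in a list of CIDR blocks.
# The blocks below are sample ranges; substitute the current published list.
total=0
for cidr in 72.44.32.0/19 67.202.0.0/18 75.101.128.0/17; do
  prefix=${cidr#*/}                 # extract the prefix length after the "/"
  count=$(( 1 << (32 - prefix) ))   # a /N block contains 2^(32-N) addresses
  total=$(( total + count ))
  echo "$cidr -> $count addresses"
done
echo "Total: $total"
```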
I’m taking a class today on using Chef with EC2, taught by Florian Drescher, and Florian mentioned that he noticed one of the four availability zones in us-east-1 is not currently available for starting new instances.
I’ve confirmed this in my own AWS accounts and found that one of the three availability zones in us-west-1 is also unavailable, in addition to one of the four availability zones in us-east-1.
Here’s the error I get when I try to start an instance in the availability zone using an old AWS account:
Client.Unsupported: The requested Availability Zone is no longer supported. Please retry your request by not specifying an Availability Zone or choosing us-east-1d, us-east-1a, us-east-1b.
When I use an AWS account I created two days ago, I don’t even see the fourth availability zone at all:
Update 2011-08-04: Amazon Security did more research and investigated the desktop AMIs. They have confirmed that their software incorrectly flagged the AMIs (false positive) and they caught it in time to stop the warning emails from going out to users.
These AMIs include the NX software for remote desktop operation, and the way that NX implements login authentication with ssh is convoluted, but secure. I can easily understand why it might have looked like there were potential problems with the AMIs, and I’m glad things turned out well.
As always, hats off to the hard-working folks at AWS, and thanks for all the great products and services.
If Amazon AWS/EC2 contacts you with a warning that one of my AMIs you are running contains a back door security hole with ssh keys or user passwords, please don’t be alarmed.
For folks still using the old, reliable Ubuntu 8.04 LTS Hardy from 2008, Canonical has released updated AMIs for use on Amazon EC2. Read Scott Moser’s announcement on the ec2ubuntu Google group.
Though Canonical publishes both EBS boot and instance-store for recent Ubuntu releases, they only publish instance-store AMIs for the older Ubuntu 8.04, so…
Amazon published a tutorial about best practices in creating public AMIs for use on EC2 last week:
Though the general principles put forth in the tutorial are good, some of the specific recommendations for accomplishing those principles are flawed. (Comments here relate to the article update from June 7, 2011 3:45 AM GMT.)
The primary message of the article is that you should not publish private information on a public AMI. Excellent advice!
Unfortunately, the article seems to recommend, or at least to assume, that you are building the public AMI by taking a snapshot of a running instance. Though this method seems an easy way to build an AMI and is fine for private AMIs, it is a dangerous approach for public AMIs because of how difficult it is to identify private information on a running system and to clear it out in such a way that it does not leak into the public AMI.
As steady as clockwork, Ubuntu 11.04 Natty is released on the day scheduled at least eleven months ago; and thanks to Canonical, tested AMIs for Natty are already published for use on Amazon EC2.
This article is a followup to Matching EC2 Availability Zones Across AWS Accounts written back in 2009. Please read that article first in order to understand any of what I write here.
Since I wrote that article, Amazon has apparently changed the reserved instance offering ids at least once. I haven’t been tracking this, so I don’t know if this was a one time thing or if the offering ids change on a regular basis.
Interestingly, the offering ids still seem to match across accounts and still map to different availability zones, so perhaps they can still be used to map the true, underlying availability zones as the original article proposes.
To document for posterity and comparison, here are my 2009 availability zone offering ids as calculated by the procedure in the above mentioned article:
Amazon designs availability zones so that it is extremely unlikely that a single failure will take out multiple zones at once. Unfortunately, whatever happened last night seemed to cause problems in all us-east-1 zones for one of my accounts.
Of the 20+ full-time EC2 instances that I am responsible for (including my startup and my personal servers), only one instance was affected. As it turns out, it was the one that hosts Alestic.com along with some other services that are important to me personally.
Here are some steps I took in response to the outage:
I’m working on making it easy to start a centralized Git server with an unlimited number of private Git repositories and unlimited users under your control running on an Amazon EC2 instance. I need people who can help test and provide feedback so I can improve the experience and capabilities.
I’ve used a number of different services to host open source software including Launchpad.net, Google Code, and GitHub. Recently, however, I found the need to host a number of different private repositories and I decided to go with the flow and use Git as the software to manage the repositories.
After investigating a dozen Git private repository hosting options, I decided that they were too limiting for my purposes: number of repositories, number of users, size of the repositories, and/or cost. I also had an urge to keep a bit more control over my own repositories by managing them on my own server.
Amazon Web Services has launched a new EC2 region in Tokyo named ap-northeast-1. Canonical has released new AMIs in this region for the standard Ubuntu releases they are supporting in other regions, including AMIs for:
For convenient lookup, the table at the top of Alestic.com reflects the Ubuntu AMIs from Canonical for this and other EC2 regions.
You can examine and edit files on the root EBS volume on an EC2 instance even if you are in what you considered a disastrous situation like:
- You lost your ssh key or forgot your password
- You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it
- Your long running instance is hung for some reason, cannot be contacted, and fails to boot properly
- You need to recover files off of the instance but cannot get to it
On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.
A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.
The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.
In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:
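The approach can be sketched with the EC2 API tools roughly as follows. All instance ids, the volume id, and the device names below are placeholders (assumptions); adjust for your own setup, and be careful to stop, not terminate, the broken instance:

```shell
# 1. Stop (not terminate!) the broken EBS boot instance
ec2-stop-instances i-broken00

# 2. Detach its root EBS volume and attach it to a working recovery instance
ec2-detach-volume vol-root0000
ec2-attach-volume vol-root0000 -i i-recover0 -d /dev/sdf

# 3. On the recovery instance: mount the volume, inspect, and fix the files
#      sudo mkdir -p /mnt/recover
#      sudo mount /dev/sdf /mnt/recover
#      ... edit files under /mnt/recover (e.g., /mnt/recover/etc/sudoers) ...
#      sudo umount /mnt/recover

# 4. Move the volume back and boot the original instance
ec2-detach-volume vol-root0000
ec2-attach-volume vol-root0000 -i i-broken00 -d /dev/sda1
ec2-start-instances i-broken00
```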
ec2-consistent-snapshot is a tool that uses the Amazon EC2 API to initiate a snapshot of an EBS volume with some additional work to help ensure that an XFS file system and/or MySQL database are in a consistent state on that snapshot.
Ahmed Kamal pointed out to me yesterday that we can save lots of trouble installing ec2-consistent-snapshot by adding a dependency on the new libnet-amazon-ec2-perl package in Ubuntu instead of forcing people to install the Net::Amazon::EC2 Perl package through CPAN (not easy for the uninitiated).
I released a new version of ec2-consistent-snapshot which has this new dependency and updated documentation. Installing this software on Ubuntu 10.04 Lucid, 10.10 Maverick, and the upcoming Natty release is now as easy as:
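The commands themselves are cut off here, but the install presumably looks something like the following. The PPA name is an assumption based on the package being distributed from Alestic.com:

```shell
# Assumed install steps; the "ppa:alestic" PPA name is an assumption here
sudo add-apt-repository ppa:alestic
sudo apt-get update
sudo apt-get install -y ec2-consistent-snapshot
```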