Recently in EC2 Category

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the root AWS account containing their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges. Use it when you need to perform administrative tasks. Activate IAM user access to account billing information so that the IAM user can read and modify billing, payment, and account information.

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
    
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
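If pwgen is not installed, you can get a similar result with openssl, which is present on most systems (this is an alternative I'm suggesting, not part of the original recipe):

```shell
# Generate a 24-character random string for a throwaway root password.
# 18 random bytes encode to exactly 24 base64 characters (no padding).
openssl rand -base64 18
```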

It should be clear from this that protecting access to your email account is critical to your overall AWS security, since access to that email is all that is needed to reset your password. That has been true of many online services for many years.

Caveats

You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.

MFA

For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

With its announcement that SSD is now available for EBS volumes, Amazon has also declared SSD the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.

On the Alestic.com blog, I provide a handy reference to the much smaller set of Ubuntu AMIs that match my generally recommended configurations for most popular uses, specifically:

I list AMIs for both PV and HVM, because different virtualization technologies are required for different EC2 instance types.

Where SSD is not available, I list the magnetic EBS boot AMI (e.g., Ubuntu 10.04 Lucid).

To access this list of recommended AMIs, select an EC2 region in the pulldown menu towards the top right of any page on Alestic.com.

If you like using the AWS console to launch instances, click on the orange launch button to the right of the AMI id.

The AMI ids are automatically updated using an API provided by Canonical, so you always get the freshest released images.

The EC2 create-image API/command/console action is a convenient trigger to create an AMI from a running (or stopped) EBS boot instance. It takes a snapshot of the instance’s EBS volume(s) and registers the snapshot as an AMI. New instances can be launched from this AMI with their starting state almost identical to that of the original running instance.

For years, I’ve been propagating the belief that a create-image call against a running instance is equivalent to these steps:

  1. stop
  2. register-image
  3. start

However, through experimentation I’ve found that though create-image is similar to the above, it doesn’t have all of the effects that a stop/start has on an instance.

Specifically, when you trigger create-image,

  • the Elastic IP address is not disassociated, even if the instance is not in a VPC,

  • the Internal IP address is preserved, and

  • the ephemeral storage (often on /mnt) is not lost.

I have not tested it, but I suspect that a new billing hour is not started with create-image (as it would be with a stop/start).

So, I am now going to start saying that create-image is equivalent to:

  1. shutdown of the OS without stopping the instance - there is no way to do this in EC2 as a standalone operation
  2. register-image
  3. boot of the OS inside the still running instance - also no way to do this yourself.

or:

create-image is a reboot of the instance, with a register-image API call at the point when the OS is shut down

As far as I’ve been able to tell, the instance stays in the running state the entire time.

I’ve talked before about the difference between a reboot and a stop/start on EC2.

Note: If you want to create an image (AMI) from your running instance, but can’t afford to have it reboot and be out of service for a few minutes, you can specify the no-reboot option.

There is a small risk of the new AMI having a corrupt file system in the rare event that the snapshot was created while the file system on the boot volume was being modified in an unstable state, but I haven’t heard of anybody actually getting bit by this.

If it is important, test the new AMI before depending on it for future use.

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.

Performance

You’ll hear a repeating theme when discussing performance in AWS:

To save time, run API requests concurrently.

This principle applies perfectly when performing requests across regions.

Parallelizing requests may seem like it would require an advanced programming language, but since I love using command line programs for simple interactive AWS tasks, I’ll present an easy mechanism for concurrent processing that works in bash.
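The mechanism is nothing more than background subshells plus the wait builtin. Here’s a minimal local illustration (no AWS calls; sleep stands in for a slow API request) showing that five one-second tasks finish in about one second of wall time instead of five:

```shell
# Run five 1-second "requests" concurrently; wait blocks until all finish.
start=$(date +%s)
for i in 1 2 3 4 5; do
    (
     sleep 1   # stand-in for a slow API request
    ) &
done
wait
echo "elapsed: $(( $(date +%s) - start ))s"
```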

Example

The following sample code finds an AMI using concurrent aws-cli commands to hit all regions in parallel.

id=ami-25b01138 # example AMI id
type=image # or "instance" or "volume" or "snapshot" or ...

regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')
for region in $regions; do
    (
     aws ec2 describe-${type}s --region $region --$type-ids $id &>/dev/null && 
         echo "$id is in $region"
    ) &
done 2>/dev/null; wait 2>/dev/null

This results in the following output:

ami-25b01138 is in sa-east-1

By running the queries concurrently against all of the regions, we cut the run time by almost 90% and get our result in a second or two.

Drop this into a script, add code to automatically detect the type of the id, and you’ve got a useful command line tool… which you’re probably going to want to immediately rewrite in Python so there’s not quite so much forking going on.
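For the “automatically detect the type of the id” part, the id prefix is enough. Here is one possible sketch (the function name is made up for illustration):

```shell
# Guess the EC2 resource type from the id prefix (hypothetical helper).
type_of_id() {
  case "$1" in
    ami-*)  echo image ;;
    i-*)    echo instance ;;
    vol-*)  echo volume ;;
    snap-*) echo snapshot ;;
    *)      echo unknown ;;
  esac
}

type_of_id ami-25b01138  # → image
```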

configure your own ssh username in user-data

The official Ubuntu AMIs create a default user with the username ubuntu which is used for the initial ssh access, i.e.:

ssh ubuntu@<HOST>

You can create other users with your preferred usernames using standard Linux commands, but it is difficult to change the ubuntu username while you are logged in to that account since that is one of the checks made by usermod:

$ usermod -l myname ubuntu
usermod: user ubuntu is currently logged in

There are a couple of ways to change the username of the default user on a new Ubuntu instance; both involve passing special content in the user-data.

Approach 1: CloudInit cloud-config

The CloudInit package supports a special user-data format where you can pass in configuration parameters for the setup. Here is sample user-data (including the comment-like first line) that will set up the first user as ec2-user instead of the default ubuntu username.

#cloud-config
system_info:
  default_user:
    name: ec2-user

Here is a complete example using this cloud-config approach. It assumes you have already uploaded your default ssh key to EC2:

username=ec2-user
ami_id=ami-6d0c2204 # Ubuntu 13.10 Saucy
user_data_file=$(mktemp /tmp/user-data-XXXX.txt)

cat <<EOF >$user_data_file
#cloud-config
system_info:
  default_user:
    name: $username
EOF

instance_id=$(aws ec2 run-instances --user-data file://$user_data_file --key-name $USER --image-id $ami_id --instance-type t1.micro --output text --query 'Instances[*].InstanceId')
rm $user_data_file
echo instance_id=$instance_id

ip_address=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo ip_address=$ip_address

ssh ec2-user@$ip_address

The above cloud-config options do not seem to work for some older versions of Ubuntu including Ubuntu 12.04 LTS Precise, so here is another way to accomplish the same functionality…

Approach 2: user-data script

If you are using an older version of Ubuntu where the above cloud-config approach does not work, then you can change the default ubuntu user to a different username in a user-data script using standard Linux commands.

This approach is also useful if you are already using user-data scripts for other initialization, since you then don’t have to mix shell commands and cloud-config directives.

Here’s a sample user-data script that renames the ubuntu user so that you ssh to ec2-user instead.

#!/bin/bash -ex
user=ec2-user
usermod  -l $user ubuntu
groupmod -n $user ubuntu
usermod  -d /home/$user -m $user
if [ -f /etc/sudoers.d/90-cloudimg-ubuntu ]; then
  mv /etc/sudoers.d/90-cloudimg-ubuntu /etc/sudoers.d/90-cloud-init-users
fi
perl -pi -e "s/ubuntu/$user/g;" /etc/sudoers.d/90-cloud-init-users

Here is a complete example using this user-data script approach. It assumes you have already uploaded your default ssh key to EC2:

username=ec2-user
ami_id=ami-6d0c2204 # Ubuntu 13.10 Saucy
user_data_file=$(mktemp /tmp/user-data-XXXX.txt)

cat <<EOF >$user_data_file
#!/bin/bash -ex
user=$username
usermod  -l \$user ubuntu
groupmod -n \$user ubuntu
usermod  -d /home/\$user -m \$user
if [ -f /etc/sudoers.d/90-cloudimg-ubuntu ]; then
  mv /etc/sudoers.d/90-cloudimg-ubuntu /etc/sudoers.d/90-cloud-init-users
fi
perl -pi -e "s/ubuntu/\$user/g;" /etc/sudoers.d/90-cloud-init-users
EOF

instance_id=$(aws ec2 run-instances --user-data file://$user_data_file --key-name $USER --image-id $ami_id --instance-type t1.micro --output text --query 'Instances[*].InstanceId')
rm $user_data_file
echo instance_id=$instance_id

ip_address=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo ip_address=$ip_address

ssh ec2-user@$ip_address

If you include this code in another user-data script, you may want to change the username towards the beginning of the script so that you can log in and monitor progress of the rest of the script.

Clean Up

When you’re done testing, terminate each demo instance.

aws ec2 terminate-instances --instance-ids "$instance_id" --output text --query 'TerminatingInstances[*].CurrentState.Name'

The sample commands in this demo require you to install the aws-cli tool.

Each AMI publisher on EC2 decides what user (or users) should have ssh access enabled by default and what ssh credentials should allow you to gain access as that user.

For the second part, most AMIs allow you to ssh in to the system with the ssh keypair you specified at launch time. This is so common that users often assume it is built in to EC2, even though it must be enabled by each AMI provider.

Unfortunately, there is no standard ssh username that is used to access EC2 instances across operating systems, distros, and AMI providers.

Here are some of the ssh usernames that I am aware of at this time:

OS/Distro             Official AMI       Legacy / Community / Other AMI
                      ssh Username       ssh Usernames
--------------------  -----------------  ------------------------------
Amazon Linux          ec2-user
Ubuntu                ubuntu             root
Debian                admin              root
RHEL 6.4 and later    ec2-user
RHEL 6.3 and earlier  root
Fedora                ec2-user           root
CentOS                root
SUSE                  root
BitNami               bitnami
TurnKey               root
NanoStack             ubuntu
FreeBSD               ec2-user
OmniOS                root

Even though the above list will get you in to most official AMIs, there may still be situations where you aren’t quite sure how the AMI was built or what user should be used for ssh.

If you know you have the correct ssh key but don’t know the username, this code can be used to try a number of possibilities, showing which one(s) worked:

host=<IP_ADDRESS>
keyfile=<SSH_KEY_FILE.pem>

for user in root ec2-user ubuntu admin bitnami
do
  # BatchMode and StrictHostKeyChecking options keep ssh from hanging on
  # interactive password or host key confirmation prompts.
  if timeout 5 ssh -o BatchMode=yes -o StrictHostKeyChecking=no \
       -i $keyfile $user@$host true 2>/dev/null; then
    echo "ssh -i $keyfile $user@$host"
  fi
done

Some AMIs are configured so that an ssh to root@ will output a message informing you of the correct user to use, and will then close the connection. For example,

$ ssh root@<UBUNTUHOST>
Please login as the user "ubuntu" rather than the user "root".

When you ssh to a username other than root, the provided user generally has passwordless sudo access to run commands as the root user. You can use sudo, ssh, and rsync with EC2 hosts in this configuration.

If you know of other common ssh usernames from popular AMI publishers, please add notes in the comments with a link to the appropriate documentation.

Worth switching.

Amazon shared that the new c3.* instance types have been in high demand on EC2 since they were released.

I finally had a minute to take a look at the specs for the c3.* instances which were just announced at AWS re:Invent, and it is obvious why they are popular and why they should probably be even more popular than they are.

Let’s just take a look at the cheapest of these, the c3.large, and compare it to the older generation c1.medium, which is similar in price:

                                   c1.medium   c3.large    difference
Virtual Cores                      2           2
Core Speed                         2.5         3.5         +40%
Effective Compute Units            5           7           +40%
Memory                             1.7 GB      3.75 GB     +120%
Ephemeral Storage                  350 GB      32 GB SSD   -90%, but much faster
Hourly on-demand Cost (us-east-1)  $0.145      $0.15       +3.4%

To summarize:

The c3.large is 40% faster and has more than double the memory of the c1.medium, but costs about the same!

In fact, you only pay 12 cents more per day in us-east-1 for the much better c3.large.

I have been a fan of c1.medium for years, but there does not appear to be any reason to use it any more. I’m moving mine to c3.large.

Unless Amazon drastically drops the price of the c1.medium, it looks like they may be trying to move folks off of the previous generation so that they can retire old hardware and make room in their data centers for newer, faster hardware.

While we’re at it, here’s a comparison of the old c1.xlarge with the new c3.2xlarge which are also about the same cost with similar benefits for switching:

                                   c1.xlarge   c3.2xlarge   difference
Virtual Cores                      8           8
Core Speed                         2.5         3.5          +40%
Effective Compute Units            20          28           +40%
Memory                             7 GB        15 GB        +114%
Ephemeral Storage                  1680 GB     160 GB SSD   -90%, but much faster
Hourly on-demand Cost (us-east-1)  $0.58       $0.60        +3.4%

In completely unrelated news… I’m selling some c1.medium Reserved Instances on the Reserved Instance Marketplace in case anybody is interested in buying them.

Here’s a useful tip mentioned in one of the sessions at AWS re:Invent this year.

There is a little known API call that lets you query some of the EC2 limits/attributes in your account. The API call is DescribeAccountAttributes and you can use the aws-cli to query it from the command line.

For full JSON output:

aws ec2 describe-account-attributes

To query select limits/attributes and output them in a handy table format:

attributes="max-instances max-elastic-ips vpc-max-elastic-ips"
aws ec2 describe-account-attributes --region us-west-2 --attribute-names $attributes --output table --query 'AccountAttributes[*].[AttributeName,AttributeValues[0].AttributeValue]'

Note that the limits vary by region even for a single account, so it can be useful to query every region:

regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')
attributes="max-instances max-elastic-ips vpc-max-elastic-ips"
for region in $regions; do
  echo; echo "region=$region"
  aws ec2 describe-account-attributes --region $region --attribute-names $attributes --output text --query 'AccountAttributes[*].[AttributeName,AttributeValues[0].AttributeValue]' |
    tr '\t' '=' | sort
done

Here’s sample output of the above command for a basic account:

region=eu-west-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=sa-east-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=us-east-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=ap-northeast-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=us-west-2
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=us-west-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=ap-southeast-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=ap-southeast-2
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5
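The tr '\t' '=' | sort step in the loop above is what turns the tab-separated text output into the sorted key=value lines shown. Here is that transformation by itself on made-up sample data:

```shell
# Sample describe-account-attributes text output (tab-separated, made up):
printf 'max-instances\t20\nmax-elastic-ips\t5\n' |
  tr '\t' '=' | sort
# outputs:
#   max-elastic-ips=5
#   max-instances=20
```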

My favorite session at AWS re:Invent was James Saryerwinnie’s clear, concise, and informative tour of the aws-cli (command line interface), which according to GitHub logs he is enhancing like crazy.

I just learned about a recent addition to aws-cli: The --query option lets you specify what parts of the response data structure you want output.

Instead of wading through pages of JSON output, you can select a few specific values and output them as JSON, table, or simple text. The new --query option is far easier to use than jq, grep+cut, or Perl, my other fallback tools for parsing the output.

aws --query Examples

The following sample aws-cli commands use the --query and --output options to extract the desired output fields so that we can assign them to shell variables:

Run an Ubuntu 12.04 Precise instance and assign the instance id to a shell variable:

instance_id=$(aws ec2 run-instances --region us-east-1 --key $USER --instance-type t1.micro --image-id ami-d9a98cb0 --output text --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Wait for the instance to leave the pending state:

while state=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].State.Name'); test "$state" = "pending"; do
  sleep 1; echo -n '.'
done; echo " $state"

Get the IP address of the running instance:

ip_address=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo ip_address=$ip_address

Get the ssh host key fingerprints to compare at ssh time (might take a few minutes for this output to be available):

aws ec2 get-console-output --instance-id $instance_id --output text |
  perl -ne 'print if /BEGIN SSH .* FINGERPRINTS/../END SSH .* FINGERPRINTS/'
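The perl range (flip-flop) operator used above prints every line from the first matching marker through the second, inclusive. Here it is on made-up sample console output, no instance required:

```shell
printf '%s\n' \
  'cloud-init boot noise' \
  '-----BEGIN SSH HOST KEY FINGERPRINTS-----' \
  '2048 aa:bb:cc:... (RSA)' \
  '-----END SSH HOST KEY FINGERPRINTS-----' \
  'more boot noise' |
  perl -ne 'print if /BEGIN SSH .* FINGERPRINTS/../END SSH .* FINGERPRINTS/'
# prints only the three lines between (and including) the BEGIN/END markers
```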

ssh to the instance. Check the prompted ssh host key fingerprint against the output above:

ssh ubuntu@$ip_address

Don’t forget to terminate the demo instance:

aws ec2 terminate-instances --instance-ids "$instance_id" --output text --query 'TerminatingInstances[*].CurrentState.Name'

The addition of the --query option greatly improves what was already a fantastic tool in aws-cli.

Note: The commands in this article assume you have already:

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.

Here is a quick list of the services that aws-cli currently supports: Auto Scaling, CloudFormation, CloudSearch, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, ElastiCache, Elastic Beanstalk, Elastic Transcoder, ELB, EMR, Identity and Access Management, Import/Export, OpsWorks, RDS, Redshift, Route 53, S3, SES, SNS, SQS, Storage Gateway, Security Token Service, Support API, SWF, VPC.

Support for the following appears to be planned: CloudFront, Glacier, SimpleDB.

aws-cli is currently in “developer preview” as it is still being built and has some rough edges, but progress is being made steadily and I’ve already found it extremely useful, especially as it supports AWS services that have no other command line tools.

The aws-cli software is being actively developed as an open source project on GitHub, with a lot of support from Amazon. You’ll note that the biggest contributors to aws-cli are Amazon employees, with Mitch Garnaat leading. Mitch is also the author of boto, the amazing Python library for AWS.

Installing aws-cli

I recommend reading the aws-cli documentation as it has complete instructions for various ways to install and configure the tool, but for convenience, here are the steps I use on Ubuntu:

sudo apt-get install -y python-pip
sudo pip install awscli

Add your Access Key ID and Secret Access Key to $HOME/.aws-config using this format:

[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
region = us-east-1

Protect the config file:

chmod 600 $HOME/.aws-config

Set an environment variable pointing to the config file. For future convenience, also add this line to your $HOME/.bashrc:

export AWS_CONFIG_FILE=$HOME/.aws-config

Now, wasn’t that a lot easier than installing and configuring all of the old tools?

Testing

Test your installation and configuration:

aws ec2 describe-regions

The default output is in JSON. You can try out other output formats:

 aws ec2 describe-regions --output text
 aws ec2 describe-regions --output table

I posted this brief mention of aws-cli because I expect some of my future articles are going to make use of it instead of the legacy command line tools.

So go ahead and install aws-cli, read the docs, and start to get familiar with this valuable tool.

Notes

Some folks might already have a command line tool installed with the name “aws”. This is likely Tim Kay’s “aws” tool. I would recommend renaming that to another name so that you don’t run into conflicts and confusion with the “aws” command from the aws-cli software.

[Update 2013-10-09: Rename awscli to aws-cli as that seems to be the direction it’s heading.]

More Entries

Using An AWS CloudFormation Stack To Allow "-" Instead Of "+" In Gmail Email Addresses
Launch a CloudFormation template to set up a stack of AWS resources to fill a simple need: Supporting Gmail addresses with “-” instead of “+” separating the user name from…
New Options In ec2-expire-snapshots v0.11
The ec2-expire-snapshots program can be used to expire EBS snapshots in Amazon EC2 on a regular schedule that you define. It can be used as a companion to ec2-consistent-snapshot or…
Email Alerts for AWS Billing Alarms
using CloudWatch and SNS to send yourself email messages when AWS costs accrue past limits you define The Amazon documentation describes how to use the AWS console to monitor your…
Running Ubuntu on Amazon EC2 in Sydney, Australia
Amazon has announced a new AWS region in Sydney, Australia with the name ap-southeast-2. The official Ubuntu AMI lookup pages (1, 2) don’t seem to be showing the new location…
Save Money by Giving Away Unused Heavy Utilization Reserved Instances
You may be able to save on future EC2 expenses by selling an unused Reserved Instance for less than its true value or even $0.01, provided it is in the…
Installing AWS Command Line Tools from Amazon Downloads
When you need an AWS command line toolset not provided by Ubuntu packages, you can download the tools directly from Amazon and install them locally. In a previous article I…
Convert Running EC2 Instance to EBS-Optimized Instance with Provisioned IOPS EBS Volumes
Amazon just announced two related features for getting super-fast, consistent performance with EBS volumes: (1) Provisioned IOPS EBS volumes, and (2) EBS-Optimized Instances. Starting new instances and EBS volumes with…
Which EC2 Availability Zone is Affected by an Outage?
Did you know that Amazon includes status messages about the health of availability zones in the output of the ec2-describe-availability-zones command, the associated API call, and the AWS console? Right…
Installing AWS Command Line Tools Using Ubuntu Packages
See also: Installing AWS Command Line Tools from Amazon Downloads Here are the steps for installing the AWS command line tools that are currently available as Ubuntu packages. These include:…
Ubuntu Developer Summit, May 2012 (Oakland)
I will be attending the Ubuntu Developer Summit (UDS) next week in Oakland, CA. ┬áThis event brings people from around the world together in one place every six months to…
Uploading Known ssh Host Key in EC2 user-data Script
The ssh protocol uses two different keys to keep you secure: The user ssh key is the one we normally think of. This authenticates us to the remote host, proving…
CloudCamp
There are a number of CloudCamp events coming up in cities around the world. These are free events, organized around the various concepts, technologies, and services that fall under the…
Use the Same Architecture (64-bit) on All EC2 Instance Types
A few hours ago, Amazon AWS announced that all EC2 instance types can now run 64-bit AMIs. Though t1.micro, m1.small, and c1.medium will continue to also support 32-bit AMIs, it…
ec2-consistent-snapshot on GitHub and v0.43 Released
The source for ec2-consistent-snapshot has historically been available here: ec2-consistent-snapshot on Launchpad.net using Bazaar For your convenience, it is now also available here: ec2-consistent-snapshot on GitHub using Git You are…
You Should Use EBS Boot Instances on Amazon EC2
EBS boot vs. instance-store If you are just getting started with Amazon EC2, then use EBS boot instances and stop reading this article. Forget that you ever heard about instance-store…
Retrieve Public ssh Key From EC2
A serverfault poster had a problem that I thought was a cool challenge. I had so much fun coming up with this answer, I figured I’d share it here as…
Running EC2 Instances on a Recurring Schedule with Auto Scaling
Do you want to run short jobs on Amazon EC2 on a recurring schedule, but don’t want to pay for an instance running all the time? Would you like to…
AWS Virtual MFA and the Google Authenticator for Android
Amazon just announced that the AWS MFA (multi-factor authentication) now supports virtual or software MFA devices in addition to the physical hardware MFA devices like the one that’s been taking…
Updated EBS boot AMIs for Ubuntu 8.04 Hardy on Amazon EC2 (2011-10-06)
Canonical has released updated instance-store AMIs for Ubuntu 8.04 LTS Hardy on Amazon EC2. Read Ben Howard’s announcement on the ec2ubuntu Google group. I have released corresponding EBS boot AMIs…
New Release of Alestic Git Server
New AMIs have been released for the Alestic Git Server. Major upgrade points include: Base operating system upgraded to Ubuntu 11.04 Natty Git upgraded to version 1.7.4.1 gitolite upgraded to…
Using ServerFault.com for Amazon EC2 Q&A
The Amazon EC2 Forum has been around since the beginning of EC2 and has always been a place where you can get your EC2 questions in front of an audience…
Rebooting vs. Stop/Start of Amazon EC2 Instance
When you reboot a physical computer at your desk it is very similar to shutting down the system, and booting it back up. With Amazon EC2, rebooting an instance is…
Upper Limits on Number of Amazon EC2 Instances by Region
[Update: As predicted, these numbers are already out of date and Amazon has added more public IP address ranges for use by EC2 in various regions.] Each standard Amazon EC2…
Unavailable Availability Zones on Amazon EC2
I’m taking a class about using Chef with EC2 by Florian Drescher today and Florian mentioned that he noticed one of the four availability zones in us-east-1 is not currently…
Desktop AMI login security with NX
Update 2011-08-04: Amazon Security did more research and investigated the desktop AMIs. They have confirmed that their software incorrectly flagged the AMIs (false positive) and they caught it in time…
Updated EBS boot AMIs for Ubuntu 8.04 Hardy on Amazon EC2
For folks still using the old, reliable Ubuntu 8.04 LTS Hardy from 2008, Canonical has released updated AMIs for use on Amazon EC2. Read Scott Moser’s announcement on the ec2ubuntu…
Creating Public AMIs Securely for EC2
Amazon published a tutorial about best practices in creating public AMIs for use on EC2 last week: How To Share and Use Public AMIs in A Secure Manner Though the…
Canonical Releases Ubuntu 11.04 Natty for Amazon EC2
As steady as clockwork, Ubuntu 11.04 Natty is released on the day scheduled at least eleven months ago; and thanks to Canonical, tested AMIs for Natty are already published for…
EC2 Reserved Instance Offering IDs Change Over Time
This article is a followup to Matching EC2 Availability Zones Across AWS Accounts written back in 2009. Please read that article first in order to understand any of what I…
My Experience With the EC2 Judgment Day Outage
Amazon designs availability zones so that it is extremely unlikely that a single failure will take out multiple zones at once. Unfortunately, whatever happened last night seemed to cause problems…