The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -O HappyFace.jpg   https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js   http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install the packages needed by the Lambda function code. Note that these are installed under the local directory so that they can be included in the ZIP file:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs:

lambda_execution_role_arn=$(aws iam create-role   --role-name "$lambda_execution_role_name"   --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }'   --output text   --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access. This policy is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy   --role-name "$lambda_execution_role_name"   --policy-name "$lambda_execution_access_policy_name"   --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function   --function-name "$function"   --function-zip "$function.zip"   --role "$lambda_execution_role_arn"   --mode event   --handler "$function.handler"   --timeout 30   --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async   --function-name "$function"   --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
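
If you are scripting this step, you can poll until the thumbnail appears instead of re-running the listing by hand. A minimal sketch, assuming the function writes the thumbnail as resized-HappyFace.jpg (the same key deleted in the clean up section below):

# Poll the target bucket until the resized image shows up
while [ -z "$(aws s3 ls s3://$target_bucket/resized-HappyFace.jpg)" ]; do
  sleep 2; echo -n '.'
done; echo " found"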

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups   --output text   --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams   --log-group-name "$log_group_name"   --output text   --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events     --log-group-name "$log_group_name"     --log-stream-name "$log_stream_name"     --output text     --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

Create the IAM role that S3 will assume when it invokes the Lambda function:

lambda_invocation_role_arn=$(aws iam create-role   --role-name "$lambda_invocation_role_name"   --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }'   --output text   --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Attach a policy to the invocation role so that S3 may invoke the Lambda function:

aws iam put-role-policy   --role-name "$lambda_invocation_role_name"   --policy-name "$lambda_invocation_access_policy_name"   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration   --function-name "$function"   --output text   --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification   --bucket "$source_bucket"   --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions   --output text   --query 'Functions[*].[FunctionName]'

aws lambda get-function   --function-name "$function"

aws iam list-roles   --output text   --query 'Roles[*].[RoleName]'

aws iam get-role   --role-name "$lambda_execution_role_name"   --output json   --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies    --role-name "$lambda_execution_role_name"   --output text   --query 'PolicyNames[*]'

aws iam get-role-policy   --role-name "$lambda_execution_role_name"   --policy-name "$lambda_execution_access_policy_name"   --output json   --query 'PolicyDocument'

aws iam get-role   --role-name "$lambda_invocation_role_name"   --output json   --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies    --role-name "$lambda_invocation_role_name"   --output text   --query 'PolicyNames[*]'

aws iam get-role-policy   --role-name "$lambda_invocation_role_name"   --policy-name "$lambda_invocation_access_policy_name"   --output json   --query 'PolicyDocument'

aws s3api get-bucket-notification   --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function   --function-name "$function"

aws iam delete-role-policy   --role-name "$lambda_execution_role_name"   --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role   --role-name "$lambda_execution_role_name"

aws iam delete-role-policy   --role-name "$lambda_invocation_role_name"   --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role   --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams   --log-group-name "$log_group_name"   --output text   --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream     --log-group-name "$log_group_name"     --log-stream-name "$log_stream_name"
done

aws logs delete-log-group   --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates   --output text   --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]'   | sort

To get more information on an individual certificate, you might use something like:

certificate_name=...
aws iam get-server-certificate   --server-certificate-name $certificate_name   --output text   --query 'ServerCertificate.CertificateBody' | openssl x509 -text | less

That can let you review information like the DNS name(s) the SSL certificate is good for.

Exercise for the reader: Schedule an automated job that reviews SSL certificate expiration and generates messages to an SNS topic when certificates are near expiration. Subscribe email addresses and other alerting services to the SNS topic.
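
To get you started, here is a minimal sketch of the checking half, assuming you have already created and subscribed an SNS topic (the topic ARN below is hypothetical) and want a 30-day warning window. The ISO 8601 dates returned by list-server-certificates sort lexically, so a plain string comparison works:

# Hypothetical SNS topic ARN; create and subscribe the topic separately
sns_topic_arn=arn:aws:sns:us-east-1:123456789012:ssl-certificate-expiration
cutoff=$(date -d '+30 days' +%Y-%m-%d)

aws iam list-server-certificates   --output text   --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' |
while read expiration certificate_name; do
  if [[ "$expiration" < "$cutoff" ]]; then
    aws sns publish   --topic-arn "$sns_topic_arn"   --subject "SSL certificate expiring soon: $certificate_name"   --message "$certificate_name expires $expiration"
  fi
done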

Read more from Amazon on Managing Server Certificates.

Note: SSL certificates embedded in web server applications running on EC2 instances would have to be checked and updated separately from those stored in AWS.
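
For example, one way to check such a certificate is to query the live endpoint and inspect what the server actually presents (the hostname below is a placeholder):

# Print the subject and expiration of the certificate served by a running web server
host=www.example.com
echo | openssl s_client -servername $host -connect $host:443 2>/dev/null |
  openssl x509 -noout -subject -enddate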

With Amazon’s announcement that SSD is now available for EBS volumes, they have also declared SSD the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.

On the Alestic.com blog, I provide a handy reference to the much smaller set of Ubuntu AMIs that match my generally recommended configurations for most popular uses, specifically:

I list AMIs for both PV and HVM, because different virtualization technologies are required for different EC2 instance types.

Where SSD is not available, I list the magnetic EBS boot AMI (e.g., Ubuntu 10.04 Lucid).

To access this list of recommended AMIs, select an EC2 region in the pulldown menu towards the top right of any page on Alestic.com.

If you like using the AWS console to launch instances, click on the orange launch button to the right of the AMI id.

The AMI ids are automatically updated using an API provided by Canonical, so you always get the freshest released images.

The EC2 create-image API/command/console action is a convenient trigger to create an AMI from a running (or stopped) EBS boot instance. It takes a snapshot of the instance’s EBS volume(s) and registers the snapshot as an AMI. New instances can be run of this AMI with their starting state almost identical to the original running instance.

For years, I’ve been propagating the belief that a create-image call against a running instance is equivalent to these steps:

  1. stop
  2. register-image
  3. start

However, through experimentation I’ve found that though create-image is similar to the above, it doesn’t have all of the effects that a stop/start has on an instance.

Specifically, when you trigger create-image,

  • the Elastic IP address is not disassociated, even if the instance is not in a VPC,

  • the Internal IP address is preserved, and

  • the ephemeral storage (often on /mnt) is not lost.

I have not tested it, but I suspect that a new billing hour is not started with create-image (as it would be with a stop/start).

So, I am now going to start saying that create-image is equivalent to:

  1. shutdown of the OS without stopping the instance - there is no way to do this in EC2 as a standalone operation
  2. register-image
  3. boot of the OS inside the still running instance - also no way to do this yourself.

or:

create-image is a reboot of the instance, with a register-image API call at the point when the OS is shut down

As far as I’ve been able to tell, the instance stays in the running state the entire time.

I’ve talked before about the difference between a reboot and a stop/start on EC2.

Note: If you want to create an image (AMI) from your running instance, but can’t afford to have it reboot and be out of service for a few minutes, you can specify the no-reboot option.
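
A sketch of what that looks like with aws-cli (the image name is arbitrary):

# Create an AMI from a running instance without rebooting it
image_id=$(aws ec2 create-image   --instance-id "$instance_id"   --name "my-image-$(date +%Y%m%d-%H%M%S)"   --no-reboot   --output text   --query 'ImageId')
echo image_id=$image_id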

There is a small risk of the new AMI having a corrupt file system in the rare event that the snapshot was created while the file system on the boot volume was being modified in an unstable state, but I haven’t heard of anybody actually getting bit by this.

If it is important, test the new AMI before depending on it for future use.

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.

Performance

You’ll hear a repeating theme when discussing performance in AWS:

To save time, run API requests concurrently.

This principle applies perfectly when performing requests across regions.

Parallelizing requests may seem like it would require an advanced programming language, but since I love using command line programs for simple interactive AWS tasks, I’ll present an easy mechanism for concurrent processing that works in bash.

Example

The following sample code finds an AMI using concurrent aws-cli commands to hit all regions in parallel.

id=ami-25b01138 # example AMI id
type=image # or "instance" or "volume" or "snapshot" or ...

regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')
for region in $regions; do
    (
     aws ec2 describe-${type}s --region $region --$type-ids $id &>/dev/null && 
         echo "$id is in $region"
    ) &
done 2>/dev/null; wait 2>/dev/null

This results in the following output:

ami-25b01138 is in sa-east-1

By running the queries concurrently against all of the regions, we cut the run time by almost 90% and get our result in a second or two.

Drop this into a script, add code to automatically detect the type of the id, and you’ve got a useful command line tool… which you’re probably going to want to immediately rewrite in Python so there’s not quite so much forking going on.
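
The type detection is a simple mapping from the id prefix; a minimal sketch:

# Map the id prefix to the resource type used by the ec2 describe-* commands
case "$id" in
  ami-*)  type=image ;;
  i-*)    type=instance ;;
  vol-*)  type=volume ;;
  snap-*) type=snapshot ;;
  *)      echo "unrecognized id: $id" >&2 ;;
esac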

configure your own ssh username in user-data

The official Ubuntu AMIs create a default user with the username ubuntu which is used for the initial ssh access, i.e.:

ssh ubuntu@<HOST>

You can create other users with your preferred usernames using standard Linux commands, but it is difficult to change the ubuntu username while you are logged in to that account since that is one of the checks made by usermod:

$ usermod -l myname ubuntu
usermod: user ubuntu is currently logged in

There are a couple of ways to change the username of the default user on a new Ubuntu instance; both involve passing special content in the user-data.

Approach 1: CloudInit cloud-config

The CloudInit package supports a special user-data format where you can pass in configuration parameters for the setup. Here is sample user-data (including the comment-like first line) that will set up the first user as ec2-user instead of the default ubuntu username.

#cloud-config
system_info:
  default_user:
    name: ec2-user

Here is a complete example using this cloud-config approach. It assumes you have already uploaded your default ssh key to EC2:

username=ec2-user
ami_id=ami-6d0c2204 # Ubuntu 13.10 Saucy
user_data_file=$(mktemp /tmp/user-data-XXXX.txt)

cat <<EOF >$user_data_file
#cloud-config
system_info:
  default_user:
    name: $username
EOF

instance_id=$(aws ec2 run-instances --user-data file://$user_data_file --key-name $USER --image-id $ami_id --instance-type t1.micro --output text --query 'Instances[*].InstanceId')
rm $user_data_file
echo instance_id=$instance_id

ip_address=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo ip_address=$ip_address

ssh ec2-user@$ip_address

The above cloud-config options do not seem to work for some older versions of Ubuntu including Ubuntu 12.04 LTS Precise, so here is another way to accomplish the same functionality…

Approach 2: user-data script

If you are using an older version of Ubuntu where the above cloud-config approach does not work, then you can change the default ubuntu user to a different username in a user-data script using standard Linux commands.

This approach is also useful if you are already using user-data scripts to do other initialization so you don’t have to mix shell commands and cloud-config directives.

Here’s a sample user-data script that renames the ubuntu user so that you ssh to ec2-user instead.

#!/bin/bash -ex
user=ec2-user
usermod  -l $user ubuntu
groupmod -n $user ubuntu
usermod  -d /home/$user -m $user
if [ -f /etc/sudoers.d/90-cloudimg-ubuntu ]; then
  mv /etc/sudoers.d/90-cloudimg-ubuntu /etc/sudoers.d/90-cloud-init-users
fi
perl -pi -e "s/ubuntu/$user/g;" /etc/sudoers.d/90-cloud-init-users

Here is a complete example using this user-data script approach. It assumes you have already uploaded your default ssh key to EC2:

username=ec2-user
ami_id=ami-6d0c2204 # Ubuntu 13.10 Saucy
user_data_file=$(mktemp /tmp/user-data-XXXX.txt)

cat <<EOF >$user_data_file
#!/bin/bash -ex
user=$username
usermod  -l \$user ubuntu
groupmod -n \$user ubuntu
usermod  -d /home/\$user -m \$user
if [ -f /etc/sudoers.d/90-cloudimg-ubuntu ]; then
  mv /etc/sudoers.d/90-cloudimg-ubuntu /etc/sudoers.d/90-cloud-init-users
fi
perl -pi -e "s/ubuntu/\$user/g;" /etc/sudoers.d/90-cloud-init-users
EOF

instance_id=$(aws ec2 run-instances --user-data file://$user_data_file --key-name $USER --image-id $ami_id --instance-type t1.micro --output text --query 'Instances[*].InstanceId')
rm $user_data_file
echo instance_id=$instance_id

ip_address=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo ip_address=$ip_address

ssh ec2-user@$ip_address

If you include this code in another user-data script, you may want to change the username towards the beginning of the script so that you can log in and monitor progress of the rest of the script.

Clean Up

When you’re done testing, terminate each demo instance.

aws ec2 terminate-instances --instance-ids "$instance_id" --output text --query 'TerminatingInstances[*].CurrentState.Name'

The sample commands in this demo require you to install the aws-cli tool.

Each AMI publisher on EC2 decides what user (or users) should have ssh access enabled by default and what ssh credentials should allow you to gain access as that user.

For the second part, most AMIs allow you to ssh in to the system with the ssh keypair you specified at launch time. This is so common that users often assume it is built into EC2, even though it must be enabled by each AMI provider.

Unfortunately, there is no standard ssh username that is used to access EC2 instances across operating systems, distros, and AMI providers.

Here are some of the ssh usernames that I am aware of at this time:

OS/Distro              Official AMI         Legacy / Community / Other AMI
                       ssh Username         ssh Usernames
---------------------  -------------------  ------------------------------
Amazon Linux           ec2-user
Ubuntu                 ubuntu               root
Debian                 admin                root
RHEL 6.4 and later     ec2-user
RHEL 6.3 and earlier   root
Fedora                 ec2-user             root
CentOS                 root
SUSE                   root
BitNami                bitnami
TurnKey                root
NanoStack              ubuntu
FreeBSD                ec2-user
OmniOS                 root

Even though the above list will get you in to most official AMIs, there may still be situations where you aren’t quite sure how the AMI was built or what user should be used for ssh.

If you know you have the correct ssh key but don’t know the username, this code can be used to try a number of possibilities, showing which one(s) worked:

host=<IP_ADDRESS>
keyfile=<SSH_KEY_FILE.pem>

for user in root ec2-user ubuntu admin bitnami
do
  if timeout 5 ssh -i $keyfile $user@$host true 2>/dev/null; then
    echo "ssh -i $keyfile $user@$host"
  fi
done

Some AMIs are configured so that an ssh to root@ will output a message informing you of the correct user to use and then close the connection. For example,

$ ssh root@<UBUNTUHOST>
Please login as the user "ubuntu" rather than the user "root".

When you ssh to a username other than root, the provided user generally has passwordless sudo access to run commands as the root user. You can use sudo, ssh, and rsync with EC2 hosts in this configuration.
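
For example, these common idioms work when the remote user has passwordless sudo (the host and paths below are placeholders):

# Run a single command as root over ssh (-t allocates a tty, which some sudo configurations require)
ssh -t ubuntu@$host 'sudo tail /var/log/syslog'

# Copy a file into a root-owned directory by running the remote rsync under sudo
rsync --rsync-path 'sudo rsync' some-file ubuntu@$host:/etc/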

If you know of other common ssh usernames from popular AMI publishers, please add notes in the comments with a link to the appropriate documentation.

Worth switching.

Amazon shared that the new c3.* instance types have been in high demand on EC2 since they were released.

I finally had a minute to take a look at the specs for the c3.* instances which were just announced at AWS re:Invent, and it is obvious why they are popular and why they should probably be even more popular than they are.

Let’s just take a look at the cheapest of these, the c3.large, and compare it to the older generation c1.medium, which is similar in price:

                                    c1.medium   c3.large     difference
Virtual Cores                       2           2
Core Speed (ECU per core)           2.5         3.5          +40%
Effective Compute Units             5           7            +40%
Memory                              1.7 GB      3.75 GB      +120%
Ephemeral Storage                   350 GB      32 GB SSD    -90%, but much faster
Hourly on-demand Cost (us-east-1)   $0.145      $0.15        +3.4%

To summarize:

The c3.large is 40% faster and has more than double the memory of the c1.medium, but costs about the same!

In fact, you only pay 12 cents more per day ($0.005/hour × 24 hours) in us-east-1 for the much better c3.large.

I have been a fan of c1.medium for years, but there does not appear to be any reason to use it any more. I’m moving mine to c3.large.

If Amazon does not drastically drop the price of the c1.medium, it looks like they may be trying to move folks off of the previous generation so that they can retire old hardware and make room in their data centers for newer, faster hardware.

While we’re at it, here’s a comparison of the old c1.xlarge with the new c3.2xlarge which are also about the same cost with similar benefits for switching:

                                    c1.xlarge   c3.2xlarge   difference
Virtual Cores                       8           8
Core Speed (ECU per core)           2.5         3.5          +40%
Effective Compute Units             20          28           +40%
Memory                              7 GB        15 GB        +114%
Ephemeral Storage                   1680 GB     160 GB SSD   -90%, but much faster
Hourly on-demand Cost (us-east-1)   $0.58       $0.60        +3.4%

In completely unrelated news… I’m selling some c1.medium Reserved Instances on the Reserved Instance Marketplace in case anybody is interested in buying them.

Here’s a useful tip mentioned in one of the sessions at AWS re:Invent this year.

There is a little known API call that lets you query some of the EC2 limits/attributes in your account. The API call is DescribeAccountAttributes and you can use the aws-cli to query it from the command line.

For full JSON output:

aws ec2 describe-account-attributes

To query select limits/attributes and output them in a handy table format:

attributes="max-instances max-elastic-ips vpc-max-elastic-ips"
aws ec2 describe-account-attributes --region us-west-2 --attribute-names $attributes --output table --query 'AccountAttributes[*].[AttributeName,AttributeValues[0].AttributeValue]'

Note that the limits vary by region even for a single account, so you can add the --region option:

regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')
attributes="max-instances max-elastic-ips vpc-max-elastic-ips"
for region in $regions; do
  echo; echo "region=$region"
  aws ec2 describe-account-attributes --region $region --attribute-names $attributes --output text --query 'AccountAttributes[*].[AttributeName,AttributeValues[0].AttributeValue]' |
    tr '\t' '=' | sort
done

Here’s sample output of the above command for a basic account:

region=eu-west-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=sa-east-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=us-east-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=ap-northeast-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=us-west-2
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=us-west-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=ap-southeast-1
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

region=ap-southeast-2
max-elastic-ips=5
max-instances=20
vpc-max-elastic-ips=5

My favorite session at AWS re:Invent was James Saryerwinnie’s clear, concise, and informative tour of the aws-cli (command line interface), which according to GitHub logs he is enhancing like crazy.

I just learned about a recent addition to aws-cli: The --query option lets you specify what parts of the response data structure you want output.

Instead of wading through pages of JSON output, you can select a few specific values and output them as JSON, table, or simple text. The new --query option is far easier to use than jq, grep+cut, or Perl, my other fallback tools for parsing the output.
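
For comparison, here is the same simple extraction done both ways (jq shown only as the kind of post-processing that --query replaces):

# Built in: no extra tools required
aws ec2 describe-regions --output text --query 'Regions[*].RegionName'

# Equivalent external post-processing with jq
aws ec2 describe-regions --output json | jq -r '.Regions[].RegionName'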

aws --query Examples

The following sample aws-cli commands use the --query and --output options to extract the desired output fields so that we can assign them to shell variables:

Run a Ubuntu 12.04 Precise instance and assign the instance id to a shell variable:

instance_id=$(aws ec2 run-instances --region us-east-1 --key $USER --instance-type t1.micro --image-id ami-d9a98cb0 --output text --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Wait for the instance to leave the pending state:

while state=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].State.Name'); test "$state" = "pending"; do
  sleep 1; echo -n '.'
done; echo " $state"

Get the IP address of the running instance:

ip_address=$(aws ec2 describe-instances --instance-ids $instance_id --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
echo ip_address=$ip_address

Get the ssh host key fingerprints to compare at ssh time (might take a few minutes for this output to be available):

aws ec2 get-console-output --instance-id $instance_id --output text |
  perl -ne 'print if /BEGIN SSH .* FINGERPRINTS/../END SSH .* FINGERPRINTS/'

ssh to the instance. Check the prompted ssh host key fingerprint against the output above:

ssh ubuntu@$ip_address

Don’t forget to terminate the demo instance:

aws ec2 terminate-instances --instance-ids "$instance_id" --output text --query 'TerminatingInstances[*].CurrentState.Name'

The addition of the --query option greatly improves what was already a fantastic tool in aws-cli.

Note: The commands in this article assume you have already installed and configured aws-cli and uploaded your default ssh key to EC2.
