AWS Lambda Walkthrough Command Line Companion

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand-editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:
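For example, something like the following (these bucket and function names are placeholders, not the walkthrough’s exact values; substitute your own):

# Placeholder names; adjust for your own account
source_bucket=images.example.com          # bucket receiving uploaded images
target_bucket=${source_bucket}resized     # bucket receiving the generated thumbnails
function=CreateThumbnail                  # Lambda function name
lambda_execution_role_name=lambda-$function-execution
region=us-east-1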

When Are Your SSL Certificates Expiring on AWS?

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' \
  | sort

To get more information on an individual certificate, you might use something like:
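(The certificate name below is a placeholder; use one of the names from the listing above.)

aws iam get-server-certificate \
  --server-certificate-name www.example.com \
  --output json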

EBS-SSD Boot AMIs For Ubuntu On Amazon EC2

Along with its announcement that SSD is now available for EBS volumes, Amazon has declared this the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.
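For example, something like the following should list Canonical’s published AMIs in the current region that boot from an SSD (gp2) EBS volume; the owner id is Canonical’s, and the name pattern is only an illustration to narrow the results:

aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=block-device-mapping.volume-type,Values=gp2" \
            "Name=name,Values=*ubuntu*ssd*" \
  --output text \
  --query 'Images[*].[ImageId,Name]' \
  | sort -k 2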

EC2 create-image Does Not Fully "Stop" The Instance

The EC2 create-image API/command/console action is a convenient trigger to create an AMI from a running (or stopped) EBS boot instance. It takes a snapshot of the instance’s EBS volume(s) and registers the snapshot as an AMI. New instances can be launched from this AMI with their starting state almost identical to the original running instance.
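With aws-cli, for example, the call looks something like this (the instance id and AMI name are placeholders):

aws ec2 create-image \
  --instance-id i-12345678 \
  --name "my-ami-$(date +%Y%m%d-%H%M%S)" \
  --description "AMI created from a running instance"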

For years, I’ve been propagating the belief that a create-image call against a running instance is equivalent to these steps:

  1. stop
  2. register-image
  3. start

However, through experimentation I’ve found that though create-image is similar to the above, it doesn’t have all of the effects that a stop/start has on an instance.

Specifically, when you trigger create-image,

  • the Elastic IP address is not disassociated, even if the instance is not in a VPC,

  • the Internal IP address is preserved, and

  • the ephemeral storage (often on /mnt) is not lost.

I have not tested it, but I suspect that a new billing hour is not started with create-image (as it would be with a stop/start).

So, I am now going to start saying that create-image is equivalent to:

Finding the Region for an AWS Resource ID

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.
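A simple (sequential, not yet concurrent) sketch of that search might look like this, with a placeholder AMI id:

ami_id=ami-12345678   # placeholder: the resource id you are trying to locate

for region in $(aws ec2 describe-regions \
                  --output text --query 'Regions[*].RegionName')
do
  if aws ec2 describe-images --region "$region" --image-ids "$ami_id" \
       --output text --query 'Images[*].ImageId' 2>/dev/null | grep -q .
  then
    echo "$ami_id found in region $region"
  fi
done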

Changing The Default "ubuntu" Username On New EC2 Instances

configure your own ssh username in user-data

The official Ubuntu AMIs create a default user with the username ubuntu which is used for the initial ssh access, i.e.:

ssh ubuntu@<HOST>

You can create other users with your preferred usernames using standard Linux commands, but it is difficult to change the ubuntu username while you are logged in to that account since that is one of the checks made by usermod:

$ usermod -l myname ubuntu
usermod: user ubuntu is currently logged in

There are a couple of ways to change the username of the default user on a new Ubuntu instance, both of which involve passing special content in the user-data.

Approach 1: CloudInit cloud-config
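A minimal sketch of this approach, assuming you want the default user to be named myname (the AMI id and keypair name are placeholders):

# cloud-config document overriding the default user name
cat > user-data.txt <<'EOF'
#cloud-config
system_info:
  default_user:
    name: myname
EOF

# placeholder AMI id and keypair; substitute your own
aws ec2 run-instances \
  --image-id ami-12345678 \
  --key-name my-keypair \
  --user-data file://user-data.txt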

Default ssh Usernames For Connecting To EC2 Instances

Each AMI publisher on EC2 decides what user (or users) should have ssh access enabled by default and what ssh credentials should allow you to gain access as that user.

For the second part, most AMIs allow you to ssh in to the system with the ssh keypair you specified at launch time. This is so common that users often assume it is built in to EC2, even though it must be enabled by each AMI provider.

Unfortunately, there is no standard ssh username that is used to access EC2 instances across operating systems, distros, and AMI providers.

Here are some of the ssh usernames that I am aware of at this time:

New c3.* Instance Types on Amazon EC2 - Nice!

Worth switching.

Amazon shared that the new c3.* instance types have been in high demand on EC2 since they were released.

I finally had a minute to take a look at the specs for the c3.* instances which were just announced at AWS re:Invent, and it is obvious why they are popular and why they should probably be even more popular than they are.

Let’s just take a look at the cheapest of these, the c3.large, and compare it to the older generation c1.medium, which is similar in price:

Query EC2 Account Limits with AWS API

Here’s a useful tip mentioned in one of the sessions at AWS re:Invent this year.

There is a little-known API call that lets you query some of the EC2 limits/attributes in your account. The API call is DescribeAccountAttributes, and you can use the aws-cli to query it from the command line.

For full JSON output:

aws ec2 describe-account-attributes

To query select limits/attributes and output them in a handy table format:
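(The attribute names below are two of the documented ones; there are others.)

aws ec2 describe-account-attributes \
  --attribute-names max-instances max-elastic-ips \
  --output table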

Using aws-cli --query Option To Simplify Output

My favorite session at AWS re:Invent was James Saryerwinnie’s clear, concise, and informative tour of the aws-cli (command line interface), which according to GitHub logs he is enhancing like crazy.

I just learned about a recent addition to aws-cli: The --query option lets you specify what parts of the response data structure you want output.

Instead of wading through pages of JSON output, you can select a few specific values and output them as JSON, table, or simple text. The new --query option is far easier to use than jq, grep+cut, or Perl, my other fallback tools for parsing the output.

aws --query Examples

The following sample aws-cli commands use the --query and --output options to extract the desired output fields so that we can assign them to shell variables:
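For example (the instance id is a placeholder), a single value can be captured directly into a shell variable:

instance_id=i-12345678   # placeholder

public_ip=$(aws ec2 describe-instances \
  --instance-ids "$instance_id" \
  --output text \
  --query 'Reservations[0].Instances[0].PublicIpAddress')

echo "public IP: $public_ip"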

Reset S3 Object Timestamp for Bucket Lifecycle Expiration

use aws-cli to extend expiration and restart the delete or archive countdown on objects in an S3 bucket

Background

S3 buckets allow you to specify lifecycle rules that tell AWS to automatically delete or archive any objects in that bucket after a specific number of days. You can also specify a prefix with each rule so that different objects in the same bucket stay for different amounts of time.

Example 1: I created a bucket named logs.example.com (not the real name) that automatically archives an object to AWS Glacier after it has been sitting in S3 for 90 days.

Example 2: I created a bucket named tmp.example.com (not the real name) that automatically deletes a file after it has been sitting there for 30 days.

This works great until you realize that there are specific files you want to keep around for a bit longer than their original expiration.

You could download and then upload the object to reset its creation date, thus starting the countdown from zero; but through a little experimentation, I found that one easy way to reset the creation/modification timestamp on an S3 object is to ask S3 to change the object storage method to the same storage method it currently has.

The following example uses the new aws-cli command line tool to reset the timestamp of an S3 object, thus restarting the lifecycle counter. This has an effect similar to the Linux/Unix touch command.
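Here is a sketch of that approach, using the tmp.example.com bucket from the earlier example and a placeholder key; the copy re-specifies the object’s existing storage class, which is enough to refresh the timestamp:

bucket=tmp.example.com
key=somefile.txt        # placeholder object key

aws s3 cp \
  --storage-class STANDARD \
  s3://$bucket/$key \
  s3://$bucket/$key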

Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.
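A minimal sketch of one common install path, assuming Python and pip are already available:

sudo pip install awscli

# interactively store credentials, default region, and output format
aws configure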

Using An AWS CloudFormation Stack To Allow "-" Instead Of "+" In Gmail Email Addresses

Launch a CloudFormation template to set up a stack of AWS resources to fill a simple need: Supporting Gmail addresses with “-” instead of “+” separating the user name from the arbitrary tag strings.

The CloudFormation stack launched by the template consists of:

  • ELB (Elastic Load Balancer)
  • Auto Scaling Group
  • EC2 instance(s) running Postfix on Ubuntu set up by a user-data script
  • Security Group allowing ELB to connect to the instances
  • CloudWatch CPU high/low alarms
  • Auto Scaling scale up/down policies
  • SNS (Simple Notification Service) topic for notification of Auto Scaling events
  • Route53 Record Set

This basic stack structure can be used as a solution for a large number of different needs, but in this example it is set up as an SMTP email relay that filters and translates email addresses for Google Apps for Business customers.

Because it uses Auto Scaling, ELB, and Route53, it is scalable and able to recover from various types of failures.

If you’re in a rush to see code, you can look at the CloudFormation template and the initialization script run from the user-data script.
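If you prefer to launch the stack from the command line rather than the console, stack creation looks something like this (the stack name, template path, and parameter are placeholders; the real template defines its own parameters):

aws cloudformation create-stack \
  --stack-name gmail-dash-relay \
  --template-body file://template.json \
  --parameters ParameterKey=MailDomain,ParameterValue=example.com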

Now, let’s look a bit more in depth at the problem this is solving and how to set up the solution.

New Options In ec2-expire-snapshots v0.11

The ec2-expire-snapshots program can be used to expire EBS snapshots in Amazon EC2 on a regular schedule that you define. It can be used as a companion to ec2-consistent-snapshot or independently.

There have been two recent submissions to the code from the community that provide new command line options in the latest version (v0.11) of ec2-expire-snapshots.

  1. Wayne Robinson discovered that EC2 sometimes limits the rate at which you can delete snapshots, and submitted code for a new --delete-delay option that tells ec2-expire-snapshots to pause for N seconds between each EBS snapshot deletion.

  2. Anthony Tonns uses EC2’s new feature to copy EBS snapshots from one region to another for redundancy, and found that Amazon does not associate snapshots from the same EBS volume in the source region with the same source volume in the target region. Anthony came up with the idea of putting the source volume id in a tag and submitted code for a new --volume-id-in-tag option that lets you specify the tag name.

Thanks also to varunwy for submitting a patch a while back to clean up some dependencies in the package installation.

On Ubuntu, you can install ec2-expire-snapshots from the Alestic PPA using:
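(The PPA name below is an assumption; verify it against the current Alestic PPA documentation.)

sudo apt-add-repository ppa:alestic   # PPA name assumed; check current Alestic docs
sudo apt-get update
sudo apt-get install -y ec2-expire-snapshots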

Cost of Transitioning S3 Objects to Glacier

how I was surprised by a large AWS charge and how to calculate the break-even point

Glacier Archival of S3 Objects

Amazon recently introduced a fantastic new feature where S3 objects can be automatically migrated over to Glacier storage based on the S3 bucket, the key prefix, and the number of days after object creation.

This makes it trivially easy to drop files in S3, have fast access to them for a while, then have them automatically saved to long-term storage where they can’t be accessed as quickly, but where the storage charges are around a tenth of the price.

…or so I thought.