Persistence Of The AWS Lambda Environment Between Function Invocations

AWS Lambda functions run inside an Amazon Linux environment (presumably a container of some sort). Sequential calls to the same Lambda function may be served by the same instantiation of that environment or by a different one.

If you hit the same copy (I don’t want to say “instance”) of the Lambda function, then stuff you left in the environment from a previous run might still be available.

This could be useful (think caching) or hurtful (if your code incorrectly expects a fresh start every run).

Here’s an example using lambdash, a hack I wrote that sends shell commands to a Lambda function to be run in the AWS Lambda environment, with stdout/stderr being sent back through S3 and displayed locally.
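
For instance, a quick way to see environment reuse in action is to leave a marker file in /tmp and look for it on the next run (a sketch; the second command's output depends on whether that invocation lands on the same environment):

$ lambdash 'date > /tmp/marker'
$ lambdash 'cat /tmp/marker || echo fresh environment'
Fri Nov 14 20:00:00 UTC 2014

If you see the timestamp, the environment was reused; if you see “fresh environment”, you got a new one.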

Before You Buy Amazon EC2 (New) Reserved Instances

understand the commitment you are making to pay for the entire 1-3 years

Amazon just announced a change in the way that Reserved Instances are sold. Instead of selling the old Reserved Instance types:

  • Light Utilization
  • Medium Utilization
  • Heavy Utilization

EC2 is now selling these new Reserved Instance types:

  • No Upfront
  • Partial Upfront
  • All Upfront

Despite the fact that they are still called “Reserved Instances” and that there are three plans which sound like increasing commitment, they are not equivalent and do not map 1-1 from old to new. In fact, the new Reserved Instance types are not even levels of increasing commitment.

You should forget what you knew about Reserved Instances and read all the fine print before making any further Reserved Instance purchases.
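
To see how the new options show up in the EC2 API, you can list offerings for each payment type with aws-cli (a sketch; the instance type, duration, and output fields are illustrative):

# Compare 1-year m3.large Linux offerings across the new payment options
for type in "No Upfront" "Partial Upfront" "All Upfront"; do
  aws ec2 describe-reserved-instances-offerings \
    --instance-type m3.large \
    --product-description "Linux/UNIX" \
    --offering-type "$type" \
    --max-duration 31536000 \
    --output text \
    --query 'ReservedInstancesOfferings[0].[OfferingType,FixedPrice,RecurringCharges[0].Amount]'
done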

One of the big differences between the old and the new is that you are committing to pay the cost of the entire 1-3 year term, even if you are not running a matching instance during part of that time. This text is buried in the fine print in a “**” footnote towards the bottom of the pricing page:

S3 Bucket Notification to SQS/SNS on Object Creation

A fantastic new and oft-requested AWS feature was released during AWS re:Invent, but has gotten lost in all the hype about AWS Lambda functions being triggered when objects are added to S3 buckets. AWS Lambda is currently in limited Preview mode and you have to request access, but this related feature is already available and ready to use.

I’m talking about automatic S3 bucket notifications to SNS topics and SQS queues when new S3 objects are added.

Unlike AWS Lambda, with S3 bucket notifications you do need to maintain the infrastructure to run your code, but you’re already running EC2 instances for application servers and job processing, so this will fit right in.

To detect and respond to S3 object creation in the past, you needed to either have every process that uploaded to S3 subsequently trigger your back end code in some way, or you needed to poll the S3 bucket to see if new objects had been added. The former adds code complexity and tight coupling dependencies. The latter can be costly in performance and latency, especially as the number of objects in the bucket grows.

With the new S3 bucket notification configuration options, the addition of an object to a bucket can send a message to an SNS topic or to an SQS queue, triggering your code quickly and effortlessly.

Here’s a working example of how to set up and use S3 bucket notification configurations to send messages to SNS on object creation and update.
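
As a sketch of the key step (the bucket and topic names are placeholders, a recent aws-cli is assumed, and the SNS topic's policy must already allow S3 to publish to it):

bucket=mybucket    # placeholder
topic_arn=arn:aws:sns:us-east-1:123456789012:s3-object-created    # placeholder

# Ask S3 to publish a message to the SNS topic on every object creation
aws s3api put-bucket-notification-configuration \
  --bucket "$bucket" \
  --notification-configuration '{
    "TopicConfigurations": [{
      "TopicArn": "'"$topic_arn"'",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'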

AWS Lambda: Pay The Same Price For Faster Execution

multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.
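
A worked example with illustrative round numbers (ignoring the 100 ms billing granularity): a CPU-bound function that runs for 1,000 ms at 128 MB consumes 0.125 GB × 1.0 s = 0.125 GB-seconds. If doubling the memory to 256 MB halves the runtime to 500 ms, it consumes 0.25 GB × 0.5 s = 0.125 GB-seconds: the same charge, at twice the speed.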

Exploring The AWS Lambda Runtime Environment

In the AWS Lambda Shell Hack article, I present a crude hack that lets me run shell commands in the AWS Lambda environment to explore what might be available to Lambda functions running there.

I’ve added a wrapper that lets me type commands on my laptop and see the output of the command run in the Lambda function. This is not production quality software, but you can take a look at it in the alestic/lambdash GitHub repo.

For the curious, here are some results. Please note that this is running on a preview and is in no way a guaranteed part of the environment of a Lambda function. Amazon could change any of it at any time, so don’t build production code using this information.

The version of Amazon Linux:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2014.03
Kernel \r on an \m

The kernel version:

$ lambdash uname -a
Linux ip-10-0-168-157 3.14.19-17.43.amzn1.x86_64 #1 SMP Wed Sep 17 22:14:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The working directory of the Lambda function:
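
$ lambdash pwd
/var/task

(That appears to be the directory where the deployed function code is unpacked, at least in this preview.)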

lambdash: AWS Lambda Shell Hack

An updated version of this hack is now available:

https://alestic.com/2015/06/aws-lambda-shell-2/

Please follow the simpler instructions in the above article instead of the obsolete instructions listed below.

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.
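
For example, once the function is deployed, one-liners like these become possible (output omitted here; results vary with the preview environment):

$ lambdash ls /usr/bin
$ lambdash perl -v
$ lambdash python --version
$ lambdash rpm -qa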

If you’re interested in seeing the results, then read the following article, which uses this AWS Lambda shell hack to examine the inside of the AWS Lambda run time environment.

Exploring The AWS Lambda Runtime Environment

Now on to the hack…

AWS Lambda Walkthrough Command Line Companion

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:
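
For example (every value below is a placeholder; substitute your own resource names):

# Placeholder names for the walkthrough resources
function=CreateThumbnail
source_bucket=my-source-bucket
target_bucket=${source_bucket}resized
lambda_execution_role_name=lambda-$function-execution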

When Are Your SSL Certificates Expiring on AWS?

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' \
  | sort

To get more information on an individual certificate, you might use something like:
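
For example (the certificate name is a placeholder taken from the listing above):

aws iam get-server-certificate \
  --server-certificate-name "$certificate_name" \
  --output json \
  --query 'ServerCertificate.ServerCertificateMetadata'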

Throw Away The Password To Your AWS Account

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is:

We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, use your root user credentials only to create your IAM admin user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.
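
A sketch of that setup with aws-cli (the user name is a placeholder, and the inline policy below grants everything; tighten it to suit your needs):

# Create an IAM user and grant it broad admin privileges
aws iam create-user --user-name admin
aws iam put-user-policy \
  --user-name admin \
  --policy-name admin-access \
  --policy-document \
  '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}'

# Give the user a console password for interactive work
aws iam create-login-profile --user-name admin --password "$admin_password"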

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the AWS root account holding their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:
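
In short: set the root password to a long random string that you never record anywhere, and rely on the email-based password reset if you ever truly need root access again. One way to generate such a string (any strong generator will do):

$ openssl rand -base64 24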

AWS Community Heroes Program

Amazon Web Services recently announced an AWS Community Heroes Program, in which they are starting to publicly recognize some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

EBS-SSD Boot AMIs For Ubuntu On Amazon EC2

Along with the announcement that SSD is now available for EBS volumes, Amazon declared it the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.
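
To find one of these programmatically, a describe-images query along these lines works (a sketch assuming Canonical’s owner id 099720109477; adjust the release, architecture, and region):

# Latest Ubuntu 14.04 amd64 HVM EBS-SSD AMI in us-east-1
aws ec2 describe-images \
  --region us-east-1 \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*" \
  --output text \
  --query 'Images[*].[CreationDate,ImageId,Name]' |
  sort |
  tail -1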

EC2 create-image Does Not Fully "Stop" The Instance

The EC2 create-image API/command/console action is a convenient trigger to create an AMI from a running (or stopped) EBS boot instance. It takes a snapshot of the instance’s EBS volume(s) and registers the snapshot as an AMI. New instances can be run from this AMI with their starting state almost identical to the original running instance.
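
For example (the instance id is a placeholder):

# Create an AMI from a running EBS-boot instance; by default EC2
# attempts to reboot the instance for a consistent snapshot
# (--no-reboot skips that)
aws ec2 create-image \
  --instance-id "$instance_id" \
  --name "my-image-$(date +%Y%m%d-%H%M%S)"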

For years, I’ve been propagating the belief that a create-image call against a running instance is equivalent to these steps:

  1. stop
  2. register-image
  3. start

However, through experimentation I’ve found that though create-image is similar to the above, it doesn’t have all of the effects that a stop/start has on an instance.

Specifically, when you trigger create-image,

  • the Elastic IP address is not disassociated, even if the instance is not in a VPC,

  • the Internal IP address is preserved, and

  • the ephemeral storage (often on /mnt) is not lost.

I have not tested it, but I suspect that a new billing hour is not started with create-image (as it would be with a stop/start).

So, I am now going to start saying that create-image is equivalent to:

Finding the Region for an AWS Resource ID

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.
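
A sketch of the brute-force search (the AMI id is a placeholder; one request per region, run concurrently):

ami_id=ami-12345678    # placeholder: the id you are hunting

for region in $(aws ec2 describe-regions \
                  --output text --query 'Regions[*].RegionName'); do
  (aws ec2 describe-images --region "$region" --image-ids "$ami_id" \
     --output text --query 'Images[*].ImageId' 2>/dev/null |
   grep -q . && echo "$ami_id found in $region") &
done
wait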

Changing The Default "ubuntu" Username On New EC2 Instances

configure your own ssh username in user-data

The official Ubuntu AMIs create a default user with the username ubuntu which is used for the initial ssh access, i.e.:

ssh ubuntu@<HOST>

You can create other users with your preferred usernames using standard Linux commands, but it is difficult to change the ubuntu username while you are logged in to that account since that is one of the checks made by usermod:

$ usermod -l myname ubuntu
usermod: user ubuntu is currently logged in

There are a couple of ways to change the username of the default user on a new Ubuntu instance, both of which involve passing special content in the user-data.

Approach 1: CloudInit cloud-config
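
A minimal sketch of this approach (assuming cloud-init merges user-data cloud-config into its base configuration, as Ubuntu’s images do; “myname” and the launch parameters are placeholders):

cat > user-data.txt <<'EOF'
#cloud-config
system_info:
  default_user:
    name: myname
EOF

aws ec2 run-instances \
  --image-id "$ami_id" \
  --key-name "$keypair" \
  --user-data file://user-data.txt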

Default ssh Usernames For Connecting To EC2 Instances

Each AMI publisher on EC2 decides what user (or users) should have ssh access enabled by default and what ssh credentials should allow you to gain access as that user.

For the second part, most AMIs allow you to ssh in to the system with the ssh keypair you specified at launch time. This is so common that users often assume it is built in to EC2, even though it must be enabled by each AMI provider.

Unfortunately, there is no standard ssh username that is used to access EC2 instances across operating systems, distros, and AMI providers.
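
When faced with an unknown AMI, you can also simply probe the common candidates (a sketch; the host is a placeholder and the username list is illustrative, not exhaustive):

for user in ec2-user ubuntu admin root fedora bitnami; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$user@$host" true 2>/dev/null &&
    { echo "ssh works with: $user"; break; }
done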

Here are some of the ssh usernames that I am aware of at this time: