multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • CPU power, network bandwidth, and disk I/O are allocated proportionally to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
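To check that math, here is a quick back-of-the-envelope calculation using the published duration rate of about $0.00001667 per GB-second (the small per-request charge is ignored):

awk 'BEGIN {
  rate = 0.00001667                      # USD per GB-second
  print "128 MB x 10 s:", 0.125 * 10 * rate * 1000, "USD per 1000 invocations"
  print "640 MB x  2 s:", 0.625 *  2 * rate * 1000, "USD per 1000 invocations"
}'

Both lines work out to the same $0.0208 per thousand invocations.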

Things that would cause the higher memory/CPU option to cost more in total include:

  • Duration is rounded up to the nearest 100 ms for billing. If your Lambda function already runs near or under 100 ms at a lower memory setting, then increasing the allocated CPU will make it return faster, but the rounding will increase the total cost.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might be accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested CPU, then the time spent in those fixed-duration actions will cost twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.
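For a crude experiment, you can re-upload the same function at several memory sizes and compare the duration reported in the CloudWatch logs for identical invocations. This sketch borrows the preview-era upload-function command used later in this article; the function name, ZIP file, handler, and role are placeholders:

for memory_size in 128 256 512 1024; do
  aws lambda upload-function \
    --function-name "my-cpu-bound-function" \
    --function-zip "my-cpu-bound-function.zip" \
    --runtime nodejs \
    --mode event \
    --handler "my-cpu-bound-function.handler" \
    --role "$lambda_execution_role_arn" \
    --timeout 60 \
    --memory-size $memory_size
  # invoke the function here and note the "Duration" reported
  # in the CloudWatch log for each memory size
done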

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

In the AWS Lambda Shell Hack article, I present a crude hack that lets me run shell commands in the AWS Lambda environment to explore what might be available to Lambda functions running there.

I’ve added a wrapper that lets me type commands on my laptop and see the output of the command run in the Lambda function. This is not production quality software, but you can take a look at it in the alestic/lambdash GitHub repo.

For the curious, here are some results. Please note that this is running on a preview and is in no way a guaranteed part of the environment of a Lambda function. Amazon could change any of it at any time, so don’t build production code using this information.

The version of Amazon Linux:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2014.03
Kernel \r on an \m

The kernel version:

$ lambdash uname -a
Linux ip-10-0-168-157 3.14.19-17.43.amzn1.x86_64 #1 SMP Wed Sep 17 22:14:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The working directory of the Lambda function:

$ lambdash pwd
/var/task

which contains the unzipped contents of the Lambda function I uploaded:

$ lambdash ls -l
total 12
-rw-rw-r-- 1 slicer 497 5195 Nov 18 05:52 lambdash.js
drwxrwxr-x 5 slicer 497 4096 Nov 18 05:52 node_modules

The user running the Lambda function:

$ lambdash id
uid=495(sbx_user1052) gid=494 groups=494

which is one of one hundred sbx_userNNNN users in /etc/passwd. “sbx_user” presumably stands for “sandbox user”.

The environment variables (in a shell subprocess). This appears to be how AWS Lambda is passing the AWS credentials to the Lambda function.

$ lambdash env
AWS_SESSION_TOKEN=[ELIDED]
LAMBDA_TASK_ROOT=/var/task
LAMBDA_CONSOLE_SOCKET=14
PATH=/usr/local/bin:/usr/bin:/bin
PWD=/var/task
AWS_SECRET_ACCESS_KEY=[ELIDED]
NODE_PATH=/var/runtime:/var/task:/var/runtime/node_modules
AWS_ACCESS_KEY_ID=[ELIDED]
SHLVL=1
LAMBDA_CONTROL_SOCKET=11
_=/usr/bin/env

The versions of various pre-installed software:

$ lambdash perl -v
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
[...]

$ lambdash python --version
Python 2.6.9

$ lambdash node -v
v0.10.32

Running processes:

$ lambdash ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
493          1  0.2  0.7 1035300 27080 ?       Ssl  14:26   0:00 node --max-old-space-size=0 --max-new-space-size=0 --max-executable-size=0 /var/runtime/node_modules/.bin/awslambda
493         13  0.0  0.0  13444  1084 ?        R    14:29   0:00 ps axu

The entire file system (2.5 MB of output):

$ lambdash ls -laiR /
[output omitted]

Kernel ring buffer (34 KB of output):

$ lambdash dmesg
[output omitted]

CPU info:

$ lambdash cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 62
model name  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping    : 4
microcode   : 0x416
cpu MHz     : 2800.110
cache size  : 25600 KB
physical id : 0
siblings    : 2
core id     : 0
cpu cores   : 1
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips    : 5600.22
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
[...]

Installed nodejs modules:

$ dirs=$(lambdash 'echo $NODE_PATH' | tr ':' '\n' | sort)
$ echo $dirs
/var/runtime /var/runtime/node_modules /var/task

$ lambdash 'for dir in '$dirs'; do echo $dir; ls -1 $dir; echo; done'
/var/runtime
node_modules

/var/runtime/node_modules
aws-sdk
awslambda
dynamodb-doc
imagemagick

/var/task # Uploaded in Lambda function ZIP file
lambdash.js
node_modules

Anything else you’d like to see? Suggest commands in the comments on this article.

lambdash: AWS Lambda Shell Hack


I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

If you’re interested in seeing the results, then read the following article, which uses this AWS Lambda shell hack to examine the inside of the AWS Lambda runtime environment.

Exploring The AWS Lambda Runtime Environment

Now on to the hack…

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

Create the IAM role that the Lambda function will use when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

What the Lambda function is allowed to do/access: log to CloudWatch Logs and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt
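To avoid repeating the invoke/wait/download dance for every command, you could wrap the steps above in a small shell function. This is my own illustrative sketch, not part of the repo; the sleep is a crude stand-in for real completion detection:

lambdash_run() {
  local command="$1"
  cat > "$function-args.json" <<EOM
{
    "command": "$command",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM
  aws lambda invoke-async \
    --function-name "$function" \
    --invoke-args "$function-args.json"
  sleep 10   # crude: wait for the asynchronous invocation to finish
  aws s3 cp s3://$bucket_name/$function/stdout.txt .
  aws s3 cp s3://$bucket_name/$function/stderr.txt .
  cat stdout.txt
}

lambdash_run 'uname -a'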

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that these are installed under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that the Lambda function will use when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

What the Lambda function is allowed to do/access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
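Rather than re-running that command by hand, you can poll until the thumbnail shows up (a simple sketch; aws s3 ls exits non-zero when it finds nothing):

while ! aws s3 ls s3://$target_bucket/resized-HappyFace.jpg > /dev/null 2>&1; do
  echo "waiting for resized-HappyFace.jpg..."
  sleep 5
done
aws s3 ls s3://$target_bucket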

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

This role may be assumed by S3.

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Attach a policy allowing the role (and thus S3, which assumes it) to invoke the Lambda function.

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' |
  sort

To get more information on an individual certificate, you might use something like:

certificate_name=...
aws iam get-server-certificate \
  --server-certificate-name $certificate_name \
  --output text \
  --query 'ServerCertificate.CertificateBody' |
  openssl x509 -text | less

That can let you review information like the DNS name(s) the SSL certificate is good for.
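For example, to pull out just the expiration date and the subject alternative names (standard openssl x509 options):

aws iam get-server-certificate \
  --server-certificate-name $certificate_name \
  --output text \
  --query 'ServerCertificate.CertificateBody' |
  openssl x509 -noout -text |
  grep -E 'Not After|DNS:'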

Exercise for the reader: Schedule an automated job that reviews SSL certificate expiration and generates messages to an SNS topic when certificates are near expiration. Subscribe email addresses and other alerting services to the SNS topic.
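Here is a rough starting point for that scheduled job (an illustrative sketch: the SNS topic ARN is a placeholder, and the date arithmetic assumes GNU date):

sns_topic_arn=...   # SNS topic to receive alerts
threshold_days=30
now=$(date +%s)
aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' |
while read expiration name; do
  days_left=$(( ($(date -d "$expiration" +%s) - now) / 86400 ))
  if [ "$days_left" -lt "$threshold_days" ]; then
    aws sns publish \
      --topic-arn "$sns_topic_arn" \
      --message "SSL certificate $name expires in $days_left days ($expiration)"
  fi
done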

Read more from Amazon on Managing Server Certificates.

Note: SSL certificates embedded in web server applications running on EC2 instances would have to be checked and updated separately from those stored in AWS.

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the root AWS account holding their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges. Use this when you need to do administrative tasks. Activate IAM user access to account billing information so that this IAM user can read and modify billing, payment, and account information. (A minimal CLI sketch follows this list.)

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
    
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
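Here is a minimal sketch of step 1 with aws-cli, assuming you want console access for the new user; the user name and policy name are illustrative:

admin_user=admin
aws iam create-user \
  --user-name "$admin_user"
aws iam put-user-policy \
  --user-name "$admin_user" \
  --policy-name admin-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{ "Effect": "Allow", "Action": "*", "Resource": "*" }]
  }'
aws iam create-login-profile \
  --user-name "$admin_user" \
  --password "$(pwgen -s 24 1)"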

It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to change your password, but that has been true for many online services for many years.

Caveats

You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.

MFA

For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.
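For IAM users, the virtual MFA device can also be created and attached from the command line (a sketch; the user name, account id, and the two consecutive authentication codes from your authenticator app are placeholders):

admin_user=admin
aws iam create-virtual-mfa-device \
  --virtual-mfa-device-name "$admin_user" \
  --bootstrap-method QRCodePNG \
  --outfile mfa-qr.png
# scan mfa-qr.png with the authenticator app, then confirm:
aws iam enable-mfa-device \
  --user-name "$admin_user" \
  --serial-number "arn:aws:iam::123456789012:mfa/$admin_user" \
  --authentication-code1 123456 \
  --authentication-code2 789012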

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

AWS Community Heroes Program


Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

  • 1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”

  • 1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”

  • 1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”

  • 2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”

I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.

There are a ton of local AWS meetups and AWS user groups where you can make contact with other AWS users. AWS often sends employees to speak and share with these groups.

A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!

With Amazon’s announcement that SSD is now available for EBS volumes, they have also declared this the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.

On the Alestic.com blog, I provide a handy reference to the much smaller set of Ubuntu AMIs that match my generally recommended configurations for the most popular uses.

I list AMIs for both PV and HVM, because different virtualization technologies are required for different EC2 instance types.

Where SSD is not available, I list the magnetic EBS boot AMI (e.g., Ubuntu 10.04 Lucid).

To access this list of recommended AMIs, select an EC2 region in the pulldown menu towards the top right of any page on Alestic.com.

If you like using the AWS console to launch instances, click on the orange launch button to the right of the AMI id.

The AMI ids are automatically updated using an API provided by Canonical, so you always get the freshest released images.

The EC2 create-image API/command/console action is a convenient trigger to create an AMI from a running (or stopped) EBS boot instance. It takes a snapshot of the instance’s EBS volume(s) and registers the snapshot as an AMI. New instances can be launched from this AMI with their starting state almost identical to the original running instance.

For years, I’ve been propagating the belief that a create-image call against a running instance is equivalent to these steps:

  1. stop
  2. register-image
  3. start

However, through experimentation I’ve found that though create-image is similar to the above, it doesn’t have all of the effects that a stop/start has on an instance.

Specifically, when you trigger create-image,

  • the Elastic IP address is not disassociated, even if the instance is not in a VPC,

  • the Internal IP address is preserved, and

  • the ephemeral storage (often on /mnt) is not lost.

I have not tested it, but I suspect that a new billing hour is not started with create-image (as it would be with a stop/start).

So, I am now going to start saying that create-image is equivalent to:

  1. shutdown of the OS without stopping the instance - there is no way to do this in EC2 as a standalone operation
  2. register-image
  3. boot of the OS inside the still running instance - also no way to do this yourself.

or:

create-image is a reboot of the instance, with a register-image API call made at the point when the OS is shut down

As far as I’ve been able to tell, the instance stays in the running state the entire time.

I’ve talked before about the difference between a reboot and a stop/start on EC2.

Note: If you want to create an image (AMI) from your running instance, but can’t afford to have it reboot and be out of service for a few minutes, you can specify the no-reboot option.
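For example (the instance id and AMI name are placeholders):

instance_id=...
aws ec2 create-image \
  --instance-id "$instance_id" \
  --name "my-ami-$(date +%Y%m%d)" \
  --no-reboot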

There is a small risk of the new AMI having a corrupt file system in the rare event that the snapshot was created while the file system on the boot volume was being modified in an unstable state, but I haven’t heard of anybody actually getting bit by this.

If it is important, test the new AMI before depending on it for future use.

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.

Performance

You’ll hear a repeating theme when discussing performance in AWS:

To save time, run API requests concurrently.

This principle applies perfectly when performing requests across regions.

Parallelizing requests may seem like it would require an advanced programming language, but since I love using command line programs for simple interactive AWS tasks, I’ll present an easy mechanism for concurrent processing that works in bash.

Example

The following sample code finds an AMI using concurrent aws-cli commands to hit all regions in parallel.

id=ami-25b01138 # example AMI id
type=image # or "instance" or "volume" or "snapshot" or ...

regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')
for region in $regions; do
    (
     aws ec2 describe-${type}s --region $region --$type-ids $id &>/dev/null && 
         echo "$id is in $region"
    ) &
done 2>/dev/null; wait 2>/dev/null

This results in the following output:

ami-25b01138 is in sa-east-1

By running the queries concurrently against all of the regions, we cut the run time by almost 90% and get our result in a second or two.

Drop this into a script, add code to automatically detect the type of the id, and you’ve got a useful command line tool… which you’re probably going to want to immediately rewrite in Python so there’s not quite so much forking going on.
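As a starting point, the id prefix is enough to detect the type for the common cases (a sketch covering only the four resource types mentioned above):

case "$id" in
  ami-*)  type=image    ;;
  i-*)    type=instance ;;
  vol-*)  type=volume   ;;
  snap-*) type=snapshot ;;
  *)      echo "unrecognized id: $id" >&2 ;;
esac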
