The aws-cli documentation and command line help text have not been updated yet to include the syntax for subscribing an AWS Lambda function to an SNS topic, but it does work!

Here’s the format:

aws sns subscribe \
  --topic-arn arn:aws:sns:REGION:ACCOUNT:SNSTOPIC \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:REGION:ACCOUNT:function:LAMBDAFUNCTION

where REGION, ACCOUNT, SNSTOPIC, and LAMBDAFUNCTION are substituted with appropriate values for your account.

For example:

aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:012345678901:mytopic \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:012345678901:function:myfunction

This returns an SNS subscription ARN like so:

{
    "SubscriptionArn": "arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe"
}

You can unsubscribe with a command like:

aws sns unsubscribe \
  --subscription-arn arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe

where the subscription ARN is the one returned from the subscribe command.
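
If you want to double-check what is attached to the topic before or after unsubscribing, you can list the topic’s subscriptions with the standard SNS commands (the topic ARN below is the same example topic used above):

aws sns list-subscriptions-by-topic \
  --topic-arn arn:aws:sns:us-east-1:012345678901:mytopic \
  --output text \
  --query 'Subscriptions[*].[Protocol,Endpoint]'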

I’m using the latest version of aws-cli as of 2015-04-15 on the GitHub “develop” branch, which is version 1.7.22.

Today, Amazon announced that AWS Lambda functions can be subscribed to Amazon SNS topics.

This means that any message posted to an SNS topic can trigger the execution of custom code you have written, but you don’t have to maintain any infrastructure to keep that code available to listen for those events and you don’t have to pay for any infrastructure when the code is not being run.

This is, in my opinion, the first time that Amazon can truly say that AWS Lambda is event-driven, as we now have a central, independent, event management system (SNS) where any authorized entity can trigger the event (post a message to a topic) and any authorized AWS Lambda function can listen for the event, and neither has to know about the other.

Making this instantly useful is the fact that there already are a number of AWS services and events that can post messages to Amazon SNS. This means there are a lot of application ideas that are ready to be implemented with nothing but a few commands to set up the SNS topic, and some snippets of nodejs code to upload as an AWS Lambda function.

Unfortunately, I was unable to find a comprehensive list of all the AWS services and events that can post messages to Amazon SNS (Simple Notification Service).

I’d like to try an experiment and ask the readers of this blog to submit pointers to AWS and other services which can be configured to post events to Amazon SNS. I will collect the list and update this blog post.

Here’s the list so far:

You can either submit your suggestions as comments on this blog post, or tweet the pointer mentioning @esh.

Thanks for contributing ideas:

[2015-04-13: Updated with input from comments and Twitter]

AWS Lambda functions are run inside of an Amazon Linux environment (presumably a container of some sort). Sequential calls to the same Lambda function could hit the same or different instantiations of the environment.

If you hit the same copy (I don’t want to say “instance”) of the Lambda function, then stuff you left in the environment from a previous run might still be available.

This could be useful (think caching) or hurtful (if your code incorrectly expects a fresh start every run).

Here’s an example using lambdash, a hack I wrote that sends shell commands to a Lambda function to be run in the AWS Lambda environment, with stdout/stderr being sent back through S3 and displayed locally.

$ lambdash 'echo a $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014

$ lambdash 'echo b $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014
b Tue Dec 9 13:55:00 PST 2014

$ lambdash 'echo c $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014
b Tue Dec 9 13:55:00 PST 2014
c Tue Dec 9 13:55:20 PST 2014

As you can see in this example, the file in /tmp contains content from previous runs.

These tests are being run in AWS Lambda Preview, and should not be depended on for long term or production plans. Amazon could change how AWS Lambda works at any time for any reason, especially when the behaviors are not documented as part of the interface. For example, Amazon could decide to clear out writable file system areas like /tmp after each run.

If you want to have a dependable storage that can be shared among multiple copies of an AWS Lambda function, consider using standard AWS services like DynamoDB, RDS, ElastiCache, S3, etc.

understand the commitment you are making to pay for the entire 1-3 years

Amazon just announced a change in the way that Reserved Instances are sold. Instead of selling the old Reserved Instance types:

  • Light Utilization
  • Medium Utilization
  • Heavy Utilization

EC2 is now selling these new Reserved Instance types:

  • No Upfront
  • Partial Upfront
  • All Upfront

Despite the fact that they are still called “Reserved Instances” and that the three plans sound like increasing levels of commitment, they are not equivalent to the old types and do not map one-to-one from old to new. In fact, the new Reserved Instance types do not even represent increasing levels of commitment.

You should forget what you knew about Reserved Instances and read all the fine print before making any further Reserved Instance purchases.

One of the big differences between the old and the new is that you are now always committing to pay the entire 1-3 year cost, even if you are not running a matching instance during part of that time. This is buried in the fine print, in a “**” footnote towards the bottom of the pricing page:

When you purchase a Reserved Instance, you are billed for every hour during the entire Reserved Instance term that you select, regardless of whether the instance is running or not.

As I pointed out in the 2012 article titled Save Money by Giving Away Unused Heavy Utilization Reserved Instances, this was also true of Heavy Utilization Reserved Instances, but with the old Light and Medium Utilization Reserved Instances you stopped spending money by stopping or terminating your instance.

Let’s walk through an example with the new EC2 Reserved Instance prices. Say you expect to run a c3.2xlarge for a year. Here are some options at the prices when this article was published:

Pricing Option       Cost Structure               Yearly Cost      Savings over On Demand
On Demand            $0.420/hour                  $3,679.20/year   -
No Upfront RI        $213.16/month                $2,557.92/year   30%
Partial Upfront RI   $1,304/once + $75.92/month   $2,215.04/year   40%
All Upfront RI       $2,170/once                  $2,170.00/year   41%

There’s a big jump in yearly savings from On Demand to any of the Reserved Instance options, and then the savings increase further (though sometimes only slightly) the more of the total cost you pay up front. The percentage savings varies by instance type, so read up on the pricing page.
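
Here’s a quick sanity check of those yearly figures using bc with the prices above (only the final rounding differs from the table):

echo "0.420 * 24 * 365" | bc                         # 3679.200  On Demand
echo "213.16 * 12" | bc                              # 2557.92   No Upfront RI
echo "1304 + 75.92 * 12" | bc                        # 2215.04   Partial Upfront RI
echo "scale=4; (3679.20 - 2557.92) / 3679.20" | bc   # .3047, about 30% savings for No Upfront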

The big difference is that you can stop paying the On Demand price if you decide you don’t need that instance running, or you figure out that the application can work better on a larger (or smaller) instance type.

With all new Reserved Instance pricing options, you commit to paying the entire year’s cost. The only difference is how much of it you pay up front and how much you pay over the next 12 months.

If you purchase a Reserved Instance and decide you don’t need it after a while, you may be able to sell it (perhaps at some loss) on the Reserved Instance Marketplace, but your odds of completing a sale and the money you get back from that are not guaranteed.

A fantastic new and oft-requested AWS feature was released during AWS re:Invent, but has gotten lost in all the hype about AWS Lambda functions being triggered when objects are added to S3 buckets. AWS Lambda is currently in limited Preview mode and you have to request access, but this related feature is already available and ready to use.

I’m talking about automatic S3 bucket notifications to SNS topics and SQS queues when new S3 objects are added.

Unlike AWS Lambda, with S3 bucket notifications you do need to maintain the infrastructure to run your code, but you’re already running EC2 instances for application servers and job processing, so this will fit right in.

To detect and respond to S3 object creation in the past, you needed to either have every process that uploaded to S3 subsequently trigger your back end code in some way, or you needed to poll the S3 bucket to see if new objects had been added. The former adds code complexity and tight coupling dependencies. The latter can be costly in performance and latency, especially as the number of objects in the bucket grows.

With the new S3 bucket notification configuration options, the addition of an object to a bucket can send a message to an SNS topic or to an SQS queue, triggering your code quickly and effortlessly.

Here’s a working example of how to set up and use S3 bucket notification configurations to send messages to SNS on object creation and update.

Setup

Replace parameter values with your preferred names.

region=us-east-1
s3_bucket_name=BUCKETNAMEHERE
email_address=YOURADDRESS@EXAMPLE.COM
sns_topic_name=s3-object-created-$(echo $s3_bucket_name | tr '.' '-')
sqs_queue_name=$sns_topic_name

Create the test bucket.

aws s3 mb \
  --region "$region" \
  s3://$s3_bucket_name

Create an SNS topic.

sns_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "$sns_topic_name" \
  --output text \
  --query 'TopicArn')
echo sns_topic_arn=$sns_topic_arn

Allow S3 to publish to the SNS topic for activity in the specific S3 bucket.

aws sns set-topic-attributes \
  --topic-arn "$sns_topic_arn" \
  --attribute-name Policy \
  --attribute-value '{
      "Version": "2008-10-17",
      "Id": "s3-publish-to-sns",
      "Statement": [{
              "Effect": "Allow",
              "Principal": { "AWS" : "*" },
              "Action": [ "SNS:Publish" ],
              "Resource": "'$sns_topic_arn'",
              "Condition": {
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:s3:*:*:'$s3_bucket_name'"
                  }
              }
      }]
  }'
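
If you want to confirm that the policy took effect, you can read it back with a standard SNS query; the output is the raw policy JSON:

aws sns get-topic-attributes \
  --topic-arn "$sns_topic_arn" \
  --output text \
  --query 'Attributes.Policy'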

Add a notification to the S3 bucket so that it sends messages to the SNS topic when objects are created (or updated).

aws s3api put-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name" \
  --notification-configuration '{
    "TopicConfiguration": {
      "Events": [ "s3:ObjectCreated:*" ],
      "Topic": "'$sns_topic_arn'"
    }
  }'
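
To verify what S3 has recorded for the bucket, you can read the notification configuration back:

aws s3api get-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name"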

Test

You now have an S3 bucket that is going to post a message to an SNS topic when objects are added. Let’s give it a try by connecting an email address listener to the SNS topic.

Subscribe an email address to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email_address"

IMPORTANT! Go to your email inbox now and click the link to confirm that you want to subscribe that email address to the SNS topic.

Upload one or more files to the S3 bucket to trigger the SNS topic messages.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-01

Check your email for the notification emails in JSON format, containing attributes like:

{ "Records":[  
    { "eventTime":"2014-11-27T00:57:44.387Z",
      "eventName":"ObjectCreated:Put", ...
      "s3":{
        "bucket":{ "name":"BUCKETNAMEHERE", ... },
        "object":{ "key":"testfile-01", "size":5195, ... }
}}]}

Notification to SQS

The above example connects an SNS topic to the S3 bucket notification configuration. Amazon also supports having the bucket notifications go directly to an SQS queue, but I do not recommend it.

Instead, send the S3 bucket notification to SNS and have SNS forward it to SQS. This way, you can easily add other listeners to the SNS topic as desired. You can even have multiple SQS queues subscribed, which is not possible when using a direct notification configuration.

Here are some sample commands that create an SQS queue and connect it to the SNS topic.

Create the SQS queue and get the ARN (Amazon Resource Name). Some APIs need the SQS URL and some need the SQS ARN. I don’t know why.

sqs_queue_url=$(aws sqs create-queue \
  --queue-name $sqs_queue_name \
  --attributes 'ReceiveMessageWaitTimeSeconds=20,VisibilityTimeout=300' \
  --output text \
  --query 'QueueUrl')
echo sqs_queue_url=$sqs_queue_url

sqs_queue_arn=$(aws sqs get-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attribute-names QueueArn \
  --output text \
  --query 'Attributes.QueueArn')
echo sqs_queue_arn=$sqs_queue_arn

Give the SNS topic permission to post to the SQS queue.

sqs_policy='{
    "Version":"2012-10-17",
    "Statement":[
      {
        "Effect":"Allow",
        "Principal": { "AWS": "*" },
        "Action":"sqs:SendMessage",
        "Resource":"'$sqs_queue_arn'",
        "Condition":{
          "ArnEquals":{
            "aws:SourceArn":"'$sns_topic_arn'"
          }
        }
      }
    ]
  }'
sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
sqs_attributes='{"Policy":"'$sqs_policy_escaped'"}'
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes "$sqs_attributes"

Subscribe the SQS queue to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"

You can upload another test file to the S3 bucket, which will now generate both the email and a message to the SQS queue.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-02

Read the S3 bucket notification message from the SQS queue:

aws sqs receive-message \
  --queue-url $sqs_queue_url

The output of that command is not quite human readable as it has quoted JSON inside quoted JSON inside JSON, but your queue processing software should be able to decode it and take appropriate actions.
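
If you just want a quick human-readable peek without writing queue-processing code, you can pipe the same receive through jq (assuming jq is installed); the message Body is the SNS envelope, and its Message field is itself the JSON S3 event:

aws sqs receive-message \
  --queue-url "$sqs_queue_url" \
  --output json |
  jq -r '.Messages[0].Body | fromjson | .Message | fromjson
         | .Records[0].s3.object.key'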

You can tell the SQS queue that you have “processed” the message by grabbing the “ReceiptHandle” value from the above output and deleting the message.

sqs_receipt_handle=...
aws sqs delete-message \
  --queue-url "$sqs_queue_url" \
  --receipt-handle "$sqs_receipt_handle"

You only have a limited amount of time to process the message and delete it before SQS tosses it back in the queue for somebody else to process. This test queue gives you 5 minutes (VisibilityTimeout=300). If you go past this timeout, simply read the message from the queue and try again.
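
If you need more time on a message you have already received, you can also extend its visibility timeout instead of waiting for it to reappear (600 seconds here is just an example value):

aws sqs change-message-visibility \
  --queue-url "$sqs_queue_url" \
  --receipt-handle "$sqs_receipt_handle" \
  --visibility-timeout 600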

Cleanup

Delete the SQS queue:

aws sqs delete-queue \
  --queue-url "$sqs_queue_url"

Delete the SNS topic (and all subscriptions).

aws sns delete-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn"

Delete test objects in the bucket:

aws s3 rm s3://$s3_bucket_name/testfile-01
aws s3 rm s3://$s3_bucket_name/testfile-02

Remove the S3 bucket notification configuration:

aws s3api put-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name" \
  --notification-configuration '{}'

Delete the bucket, but only if it was created for this test!

aws s3 rb s3://$s3_bucket_name

History / Future

If the concept of an S3 bucket notification sounds a bit familiar, it’s because AWS S3 has had it for years, but the only supported event type was “s3:ReducedRedundancyLostObject”, triggered when S3 lost an RRS object. Given the way that this feature was designed, we all assumed that Amazon would eventually add more useful events like “S3 object created”, which indeed they released a couple weeks ago.

I would continue to assume/hope that Amazon will eventually support an “S3 object deleted” event because it just makes too much sense for applications that need to keep track of the keys in a bucket.

[Update 2015-04-06: Add code to remove S3 bucket notification, which Amazon just added to aws-cli in release 18]

multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
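
Here’s the arithmetic behind that claim, using the published duration rate of $0.00001667 per GB-second (request charges are the same in both cases and are ignored here):

# cost per thousand invocations = GB allocated * seconds * rate * 1000
echo "scale=6; (128 / 1024) * 10 * 0.00001667 * 1000" | bc   # 128 MB for 10 s, about .0208
echo "scale=6; (640 / 1024) * 2 * 0.00001667 * 1000" | bc    # 640 MB for 2 s, about .0208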

Things that would cause the higher memory/CPU option to cost more in total include:

  • Duration is billed in 100 ms chunks, rounded up. If your Lambda function already runs near or under 100 ms at the lower memory setting, then increasing the allocated CPU will make it return faster, but the rounding up means the higher memory setting costs more per invocation.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might spend time accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested memory (and therefore the per-second price), the time spent in those fixed-duration actions ends up costing twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

In the AWS Lambda Shell Hack article, I present a crude hack that lets me run shell commands in the AWS Lambda environment to explore what might be available to Lambda functions running there.

I’ve added a wrapper that lets me type commands on my laptop and see the output of the command run in the Lambda function. This is not production quality software, but you can take a look at it in the alestic/lambdash GitHub repo.

For the curious, here are some results. Please note that this is running on a preview and is in no way a guaranteed part of the environment of a Lambda function. Amazon could change any of it at any time, so don’t build production code using this information.

The version of Amazon Linux:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2014.03
Kernel \r on an \m

The kernel version:

$ lambdash uname -a
Linux ip-10-0-168-157 3.14.19-17.43.amzn1.x86_64 #1 SMP Wed Sep 17 22:14:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The working directory of the Lambda function:

$ lambdash pwd
/var/task

which contains the unzipped contents of the Lambda function I uploaded:

$ lambdash ls -l
total 12
-rw-rw-r-- 1 slicer 497 5195 Nov 18 05:52 lambdash.js
drwxrwxr-x 5 slicer 497 4096 Nov 18 05:52 node_modules

The user running the Lambda function:

$ lambdash id
uid=495(sbx_user1052) gid=494 groups=494

which is one of one hundred sbx_userNNNN users in /etc/passwd. “sbx_user” presumably stands for “sandbox user”.
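
For example, you can count them with another lambdash call, which reports the hundred sandbox users in the environment explored here:

$ lambdash 'grep -c "^sbx_user" /etc/passwd'
100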

The environment variables (in a shell subprocess). This appears to be how AWS Lambda is passing the AWS credentials to the Lambda function.

$ lambdash env
AWS_SESSION_TOKEN=[ELIDED]
LAMBDA_TASK_ROOT=/var/task
LAMBDA_CONSOLE_SOCKET=14
PATH=/usr/local/bin:/usr/bin:/bin
PWD=/var/task
AWS_SECRET_ACCESS_KEY=[ELIDED]
NODE_PATH=/var/runtime:/var/task:/var/runtime/node_modules
AWS_ACCESS_KEY_ID=[ELIDED]
SHLVL=1
LAMBDA_CONTROL_SOCKET=11
_=/usr/bin/env

The versions of various pre-installed software:

$ lambdash perl -v
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
[...]

$ lambdash python --version
Python 2.6.9

$ lambdash node -v
v0.10.32

Running processes:

$ lambdash ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
493          1  0.2  0.7 1035300 27080 ?       Ssl  14:26   0:00 node --max-old-space-size=0 --max-new-space-size=0 --max-executable-size=0 /var/runtime/node_modules/.bin/awslambda
493         13  0.0  0.0  13444  1084 ?        R    14:29   0:00 ps axu

The entire file system: 2.5 MB download

$ lambdash ls -laiR /
[click link above to download]

Kernel ring buffer: 34K download

$ lambdash dmesg
[click link above to download]

CPU info:

$ lambdash cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 62
model name  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping    : 4
microcode   : 0x416
cpu MHz     : 2800.110
cache size  : 25600 KB
physical id : 0
siblings    : 2
core id     : 0
cpu cores   : 1
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips    : 5600.22
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
[...]

Installed nodejs modules:

$ dirs=$(lambdash 'echo $NODE_PATH' | tr ':' '\n' | sort)
$ echo $dirs
/var/runtime /var/runtime/node_modules /var/task

$ lambdash 'for dir in '$dirs'; do echo $dir; ls -1 $dir; echo; done'
/var/runtime
node_modules

/var/runtime/node_modules
aws-sdk
awslambda
dynamodb-doc
imagemagick

/var/task # Uploaded in Lambda function ZIP file
lambdash.js
node_modules

[Update 2014-12-03]

We’re probably not on a bare EC2 instance. The standard EC2 instance metadata service is not accessible through HTTP:

$ lambdash curl -sS http://169.254.169.254:8000/latest/meta-data/instance-type
curl: (7) Failed to connect to 169.254.169.254 port 8000: Connection refused

Browsing the AWS Lambda environment source code turns up some nice hints about where the product might be heading. I won’t paste the copyrighted code here, but you can download it into an “awslambda” subdirectory with:

$ lambdash 'cd /var/runtime/node_modules;tar c awslambda' | tar xv

[Update 2014-12-11]

There’s a half gig of writable disk space available under /tmp (when run with 256 MB of RAM; does this scale up with memory?)

$ lambdash 'df -h 2>/dev/null'
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       30G  1.9G   28G   7% /
devtmpfs         30G  1.9G   28G   7% /dev
/dev/xvda1       30G  1.9G   28G   7% /
/dev/loop0      526M  832K  514M   1% /tmp

Anything else you’d like to see? Suggest commands in the comments on this article.

lambdash: AWS Lambda Shell Hack


I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

If you’re interested in seeing the results, then read the following article, which uses this AWS Lambda shell hack to examine the inside of the AWS Lambda runtime environment.

Exploring The AWS Lambda Runtime Environment

Now on to the hack…

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

Create the IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do/access: log to CloudWatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules
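
If you want to double-check what ended up in the package before uploading it, listing the archive works (assuming unzip is installed):

unzip -l $function.zip   # should list CreateThumbnail.js and the node_modules tree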

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do/access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
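
Once it shows up, you can download the thumbnail and inspect it locally (the resized- prefix matches the object deleted in the cleanup step below):

aws s3 cp s3://$target_bucket/resized-HappyFace.jpg .
file resized-HappyFace.jpg   # a quick check that a resized JPEG came back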

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

This role may be assumed by S3.

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

S3 may invoke the Lambda function.

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire, to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' |
  sort

To get more information on an individual certificate, you might use something like:

certificate_name=...
aws iam get-server-certificate \
  --server-certificate-name $certificate_name \
  --output text \
  --query 'ServerCertificate.CertificateBody' |
  openssl x509 -text | less

That can let you review information like the DNS name(s) the SSL certificate is good for.

Exercise for the reader: Schedule an automated job that reviews SSL certificate expiration and generates messages to an SNS topic when certificates are near expiration. Subscribe email addresses and other alerting services to the SNS topic.
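
Here is a minimal sketch of that idea, assuming you have already created an alerting SNS topic and put its ARN in $alert_topic_arn, and that GNU date is available for the date arithmetic; run it daily from cron or a scheduled job:

days_warning=30
cutoff=$(date -u -d "+$days_warning days" +%Y-%m-%dT%H:%M:%SZ)

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' |
while read expiration certificate_name; do
  # ISO 8601 timestamps compare correctly as strings
  if [[ "$expiration" < "$cutoff" ]]; then
    aws sns publish \
      --topic-arn "$alert_topic_arn" \
      --subject "SSL certificate expiring soon: $certificate_name" \
      --message "Certificate $certificate_name expires at $expiration"
  fi
done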

Read more from Amazon on Managing Server Certificates.

Note: SSL certificates embedded in web server applications running on EC2 instances would have to be checked and updated separately from those stored in AWS.