Replacing EC2 On-Demand Instances With New Spot Instances

with an SMS text warning two minutes before interruption, using CloudWatch Events Rules and SNS

The EC2 Spot instance marketplace has had a number of enhancements in the last couple of months that have made it attractive for more use cases. Improvements include:

  • You can run an instance like you normally do for on-demand instances and add one option to make it a Spot instance! The instance starts up immediately if your bid price is sufficient given spot market conditions, and will generally cost much less than on-demand.

  • Spot price volatility has been significantly reduced. Spot prices are now based on long-term trends in supply and demand instead of hour-to-hour bidding wars. This means that instances are much less likely to be interrupted because of short-term spikes in Spot prices, leading to much longer running instances on average.

  • You no longer have to specify a bid price. The Spot Request will default to the instance type’s on-demand price in that region. This saves looking up pricing information and is a reasonable default if you are using Spot to save money over on-demand.

  • CloudWatch Events can now send a two-minute warning before a Spot instance is interrupted, through email, text, AWS Lambda, and more.

Putting these all together makes it easy to take instances you formerly ran on-demand and add an option to turn them into new Spot instances. They are much less likely to be interrupted than with the old spot market, and you can save a little to a lot in hourly costs, depending on the instance type, region, and availability zone.

Plus, you can get a warning a couple of minutes before the instance is interrupted, giving you a chance to save work or launch an alternative. This warning could be handled by code (e.g., AWS Lambda), but this article shows how to get the warning by email and by SMS text message to your phone.


You should not run a Spot instance unless you can withstand having the instance stopped for a while from time to time.

Make sure you can easily start a replacement instance if the Spot instance is stopped or terminated. This probably includes regularly storing important data outside of the Spot instance (e.g., S3).
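For example, you might periodically sync important working data to S3 with something like this (the bucket name and paths are hypothetical placeholders):

aws s3 sync /home/ubuntu/work "s3://my-spot-backup-bucket/$(hostname)/work/"

Run from cron, this limits how much work you can lose to the window between syncs.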

You cannot currently restart a stopped or hibernated Spot instance manually, though the Spot service may restart it automatically if you configured it with interruption behavior “stop” (or “hibernate”) and if the Spot price comes back down below your max bid.

If you can live with these conditions and risks, then perhaps give this approach a try.

Start An EC2 Instance With A Spot Request

An aws-cli command to launch an EC2 instance can be turned into a Spot Request by adding a single parameter: --instance-market-options ...

The option parameters we will use do not specify a max bid, so it defaults to the on-demand price for the instance type in the region. We specify “stop” and “persistent” so that the instance will be restarted automatically if it is interrupted temporarily by a rising Spot market price that then comes back down.

Adjust the following options to suit. The important part for this example is the instance market options.

region=us-west-2
instance_type=t2.small # example type; pick what your workload needs
ami_id=ami-c62eaabe # Ubuntu 16.04 LTS Xenial HVM EBS us-west-2 (as of post date)
instance_name="Temporary Demo $(date +'%Y-%m-%d %H:%M')"

# Persistent Spot request, stopped (not terminated) on interruption.
# No MaxPrice is specified, so it defaults to the on-demand price.
instance_market_options="MarketType=spot,SpotOptions={SpotInstanceType=persistent,InstanceInterruptionBehavior=stop}"

instance_id=$(aws ec2 run-instances \
  --region "$region" \
  --instance-type "$instance_type" \
  --image-id "$ami_id" \
  --instance-market-options "$instance_market_options" \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key="Name",Value="'"$instance_name"'"}]' \
  --output text \
  --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Other options can be added as desired. For example, specify an ssh key for the instance with an option like:

  --key-name "$USER"

and a user-data script with:

  --user-data file:///path/to/

If there is capacity, the instance will launch immediately and be available quickly. It can be used like any other instance that is launched outside of the Spot market. However, this instance has the risk of being stopped, so make sure you are prepared for this.

Here is one way to get the early warning before the instance is interrupted.
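This minimal sketch creates an SNS topic, subscribes a phone number for SMS delivery, and adds a CloudWatch Events rule that routes the Spot interruption warning event to the topic. The topic name and phone number are placeholders, and note one assumption: the SNS topic’s access policy must also allow events.amazonaws.com to publish to it (not shown here).

warning_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "spot-interruption-warning" \
  --output text \
  --query 'TopicArn')
echo warning_topic_arn=$warning_topic_arn

# Subscribe your phone for SMS delivery (placeholder number)
aws sns subscribe \
  --region "$region" \
  --topic-arn "$warning_topic_arn" \
  --protocol sms \
  --notification-endpoint "+15555550123"

# Match the two-minute Spot interruption warning event
aws events put-rule \
  --region "$region" \
  --name "spot-interruption-warning" \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Spot Instance Interruption Warning"]}'

# Deliver matching events to the SNS topic
aws events put-targets \
  --region "$region" \
  --rule "spot-interruption-warning" \
  --targets "Id=sns-target,Arn=$warning_topic_arn"

To get the warning by email as well, subscribe again with --protocol email and an email address.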

Listing AWS Accounts in an AWS Organization

using aws-cli with a human-oriented output format

If you keep creating AWS accounts for every project, as I do, then you will build up a large inventory of accounts. Occasionally, you might want to get a list of all of the accounts for easy review.

The following simple aws-cli command pipeline:

  • filters to just the active (non-deleted) AWS accounts
  • sorts by account create/join time

and outputs the following values in nicely aligned columns:

  • AWS account id
  • account email address
  • account name

Here’s the command pipeline (it contains some bash-isms):

aws organizations list-accounts \
  --output text \
  --query 'Accounts[?Status==`ACTIVE`][Status,JoinedTimestamp,Id,Email,Name]' |
  sort |
  cut -f3- |
  column -t -n -e -s$'\t'

Here is a sample of what the output might look like:
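(The account IDs, email addresses, and account names below are made up for illustration.)

121212121212  project-one+aws@example.com  project-one
343434343434  hackathon+aws@example.com    hackathon-2017
565656565656  blog-demo+aws@example.com    blog-demo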

Streaming AWS DeepLens Video Over SSH

instead of connecting to the DeepLens with a micro-HDMI cable, monitor, keyboard, and mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer -demuxer lavf /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.

To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 |
  mplayer -demuxer lavf -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg |
  mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!


To protect the security of your sensitive DeepLens video feeds:

Creating AWS Accounts From The Command Line With AWS Organizations

Copy+paste some aws-cli commands to add a new AWS account to your AWS Organization

The AWS Organizations service was introduced at AWS re:Invent 2016. The service has some advanced features, but at a minimum, it is a wonderful way to create new accounts easily, with:

  • all accounts under consolidated billing

  • no need to enter a credit card

  • no need to enter a phone number, answer a call, and key in a confirmation that you are human

  • automatic, pre-defined cross-account IAM role assumable by master account IAM users

  • no need to pick and securely store a password

I create new AWS accounts at the slightest provocation. They don’t cost anything as long as you aren’t using resources, and they are a nice way to keep unrelated projects separate for security, cost monitoring, and just keeping track of what resources belong where.

I will create an AWS account for a new, independent side project. I will create an account for a weekend hackathon to keep that mess away from anything else I care about. I will even create an account just to test a series of AWS commands for a blog post, making sure that I am not depending on some earlier configuration that might not be in readers’ accounts.

By copying and pasting commands based on the examples in this article, I can create and start using a new AWS account in minutes.
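For a taste of what’s involved, here is a minimal sketch of the core commands (the email address and account name are hypothetical placeholders):

# Request creation of a new AWS account in the AWS Organization
request_id=$(aws organizations create-account \
  --email "myproject+demo@example.com" \
  --account-name "demo" \
  --output text \
  --query 'CreateAccountStatus.Id')

# Check the state of the account creation request
# (IN_PROGRESS, SUCCEEDED, or FAILED)
aws organizations describe-create-account-status \
  --create-account-request-id "$request_id" \
  --output text \
  --query 'CreateAccountStatus.State'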

Before You Start

Rewriting In Python 3.6 On AWS Lambda With Chalice

If you are using and depending on the service, please be aware that the entire code base will be replaced with new code before the end of May 2017.

Ideally, consumers of the API will notice no changes, but if you are concerned, you can test out the new implementation using this temporary endpoint:

For example:


This new endpoint uses the same timer database, so all timers can be queried and set using either endpoint.

At some point before the end of May, the new code will be activated by the standard endpoint.

AWS Community Day San Francisco, June 15, 2017

register today for this free conference with content organized and presented by AWS community experts!

The AWS user community in the Bay Area and (US) West Coast is getting together in San Francisco for a full day of technical content, food, drink, and mixing with other users in the AWS community.

AWS Community Day San Francisco

At AWS Community Day San Francisco, the content is selected and planned by leaders in the AWS community, and all of the speakers are AWS experts from the West Coast community of AWS users, including some AWS User Group leaders and AWS Community Heroes.

Amazon is generously footing the bill for the event space, food, multi-media management, event site hosting, registration, and all the little details that make a big event like this go smoothly, but the content is organized and presented by AWS community leaders and experts, instead of Amazon employees.

“When we learn from the community we learn from others like us. We have a shared perspective as users of AWS. AWS Community Day will offer us a unique opportunity that we cannot get at AWS re:Invent or AWS Summit.”
John Varghese, Leader, Bay Area AWS User Group

This free event is being held at the Marriott Marquis San Francisco on Thursday, June 15, 2017.

Breakfast, lunch, snacks, and happy hour are provided to keep the community fueled for learning and networking. Thanks, Amazon!

“I’m excited to spend a day learning and sharing with the rest of the AWS community. The sessions will be a great mix of practical tips about the services we use constantly, and showcases of new technologies that we aren’t so familiar with. It’s great to hear about best practices from Amazon, but it’ll be even better to learn from our peers’ real world results.”
Ryan Park, Engineering Manager, Slack

There are eight sessions planned in two tracks, with plenty of valuable information for all types of AWS users, and lots of opportunities to meet others in the community and chat about your AWS experiences. The current agenda includes:

Incompatible: Static S3 Website With CloudFront Forwarding All Headers

a small lesson learned in setting up a static web site with S3 and CloudFront

I created a static web site hosted in an S3 bucket named (not the real name) and enabled accessing it as a website. I wanted delivery to be fast to everybody around the world, so I created a CloudFront distribution in front of the S3 bucket.

I wanted S3 to automatically add “index.html” to URLs ending in a slash (CloudFront can’t do this), so I configured the CloudFront distribution to access the S3 bucket as a web site using as the origin server.

Before sending all of the traffic to the new setup, I wanted to test it, so I added to the list of CNAMEs in the CloudFront distribution.

After setting up Route53 so that DNS lookups for would resolve to the new CloudFront endpoint, I loaded it in my browser and got the following error:

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the API service, which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.
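That works out to roughly $5.30 per million API requests, or about five ten-thousandths of a cent per request.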

Invoice: Summary

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1. Generate Speech Audio With Amazon Polly
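As a minimal sketch, assuming your aws-cli is recent enough to include Polly support, this first step might look like the following (the voice and text are arbitrary examples):

text="Hello! This speech was generated by Amazon Polly."

# Convert the text to speech, saving the audio as a local mp3 file
aws polly synthesize-speech \
  --output-format mp3 \
  --voice-id Joanna \
  --text "$text" \
  speech.mp3

The resulting speech.mp3 is what would then be uploaded to S3 (step 2) and played by Twilio once the call connects (step 3).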

Watching AWS CloudFormation Stack Status

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources in a live, continuously updating display?

That’s what the new aws-cloudformation-stack-status command shows me when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.
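The command itself is a separate script, but a rough equivalent of the idea, using only plain aws-cli and the standard watch utility (the stack name is a placeholder), could look like:

stack_name=my-stack # placeholder; use your own stack name

# Refresh the current status of every stack resource every 5 seconds
watch -n 5 "aws cloudformation list-stack-resources \
  --stack-name '$stack_name' \
  --output table \
  --query 'StackResourceSummaries[*].[LogicalResourceId,ResourceType,ResourceStatus]'"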