AWS re:Invent 2018 Experiences

thoughts from a conference veteran who has attended every AWS re:Invent

There are a number of great guides to AWS re:Invent with excellent recommendations on how to prepare, what to bring, and how to get the most out of your time. This is not that. In this article, I am going to focus only on specific things that I recommend every attendee experience at least once while at AWS re:Invent.

You may not want to do these things every day (if available) or even every year when you return to re:Invent, but I recommend arranging your first year schedule to fit as many as you can so you don’t go home missing out.

You are taking a week off and making a long trip to Las Vegas. Don’t leave without having seen some of the impressive, large-scale, in-person experiences the Amazon team has organized.

1. Attend a Live AWS re:Invent Keynote

Attending a keynote in person is one of the best ways to get a feel for the excitement, energy, and sheer scale of this movement.

You could go to a satellite location and watch the keynote on a screen with fellow attendees, but there’s something special about being in the big room live.

Tip: Get in line early to get a decent seat and enjoy the live DJ. There are giant screens that show the speakers and presentation material so you won’t miss out on content wherever you sit.

Tip: It takes a while for the entire crowd to get out of the keynote space, so don’t schedule a session or important lunch meeting right after.

Tip: Werner Vogels’ keynote is slotted for 2 hours instead of the 2.5 hours for Andy Jassy’s, but I’m not sure if Werner has ever ended a re:Invent keynote on schedule, so sit back and enjoy the information and enthusiasm.

Tip: If new product/service/feature announcements are what excite you, then make sure you hit the Andy Jassy keynote.

Veterans often watch the streamed keynote on their phone or laptop while getting ready in their hotel room, or while eating breakfast at a cafe. But you flew halfway around the world to be here, so go the last mile (perhaps literally) and get the live AWS re:Invent keynote experience at least once.

AWS Secrets Manager - Initial Thoughts

I was in the audience when Amazon announced the AWS Secrets Manager at the AWS Summit San Francisco. My first thought was that we already have a way to store secrets in SSM Parameter Store. In fact, I tweeted:

“Just as we were all working out the details of using SSM Parameter Store to manage our secrets…

Another tool to help build securely (looking forward to learning about it).

#AWSsummit San Francisco” – @esh 2018-04-04 11:10

When I learned about the AWS Secrets Manager pricing plan, my questions deepened.

So, I started poring over the AWS Secrets Manager documentation and slowly started to gain possible enlightenment.

I have archived below three stream of consciousness threads that I originally posted to Twitter.

Thoughts 1: Secret Rotation Is The Value In AWS Secrets Manager

After reading the new AWS Secrets Manager docs, it looks like there is a lot of value in the work Amazon has invested into the design of rotating secrets.

There are a number of different ways systems support secrets, and various failure scenarios that must be accounted for.

Though RDS secret rotation support is built into AWS Secrets Manager, customers are going to find more value in the ability to plug in custom code to rotate secrets in any service, using AWS Lambda, naturally.

Customers write the code that performs the proper steps, and AWS Secrets Manager will drive the steps.
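For example, here is a minimal aws-cli sketch of wiring that up (the secret name, Lambda ARN, and rotation schedule are placeholders):

# Configure rotation for a secret using a customer-written AWS Lambda
# function, and kick off an immediate first rotation. All names and
# ARNs here are placeholders.
aws secretsmanager rotate-secret \
  --secret-id my-database-secret \
  --rotation-lambda-arn \
    arn:aws:lambda:us-east-1:123456789012:function:my-rotation-function \
  --rotation-rules AutomaticallyAfterDays=30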

It almost looks like we could take the secret rotation framework design and use AWS Step Functions with a CloudWatch Events schedule to drive rotation for secrets in SSM Parameter Store, but for such a critical piece of security infrastructure, it makes sense to lean on Amazon to maintain this part and drive the rotation steps reliably.

There are ways to create IAM policies that are fine-tuned with just the AWS Secrets Manager permissions needed, including conditions on custom tags on each secret.
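As an illustration, here is a hypothetical policy sketch that only allows reading secret values for secrets carrying a specific tag (the policy name, tag key, and tag value are made up):

# A hypothetical IAM policy allowing GetSecretValue only on secrets
# tagged team=web. The policy name and tag are placeholders.
aws iam create-policy \
  --policy-name read-web-team-secrets \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*",
      "Condition": {
        "StringEquals": {"secretsmanager:ResourceTag/team": "web"}
      }
    }]
  }'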

When designing AWS Secrets Manager, I suspect there were discussions inside Amazon about whether ASM itself should perform the steps to move the current/pending/previous version labels around during secret rotation, to reduce the risk of customer code doing this incorrectly.

I think this may have required giving the ASM service too much permission to manipulate the customer’s secrets, so the decision seems right to keep this with the customer’s AWS Lambda function, even though there is some added complexity in development.
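For reference, the label movement itself comes down to a call like the following, which the customer’s rotation function makes in the final step (the secret name and version IDs are placeholders):

# Move the AWSCURRENT staging label from the old secret version to the
# newly created one. The secret name and version IDs are placeholders.
aws secretsmanager update-secret-version-stage \
  --secret-id my-database-secret \
  --version-stage AWSCURRENT \
  --move-to-version-id EXAMPLE-NEW-VERSION-ID \
  --remove-from-version-id EXAMPLE-OLD-VERSION-ID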

The AWS Secrets Manager documentation is impressively clear for creating custom AWS Lambda functions for secret rotation, especially for how complex the various scenarios can be.

Here’s a link to the AWS Secrets Manager User Guide: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html

Thoughts 2: Rotating Secrets Frequently Is Important

I’m starting to understand that it’s “AWS Secrets Manager”, not “AWS Secrets Store”, and that the biggest part of that management seems to be the automated, transparent, reliable secret rotation.

Now that I can see a smooth path to regular secret rotation with AWS Secrets Manager, I’m starting to feel like I’ve been living in primitive times letting my database passwords sit at the same value for months (ok, years).
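Here is a quick way to check where a secret stands (the secret name is a placeholder):

# Show whether rotation is enabled for a secret and when it last
# rotated. The secret name is a placeholder.
aws secretsmanager describe-secret \
  --secret-id my-database-secret \
  --query '{RotationEnabled:RotationEnabled,LastRotated:LastRotatedDate}'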

Guest Post: Jennine Townsend on AWS Documentation

A guest post authored by Jennine Townsend, sysadmin extraordinaire, and perhaps the AWS Doc team’s biggest fan.

I’ve always believed in reading vendor documentation, and now that AWS is by far my largest vendor, I’m focused on their documentation. It’s not enough to just read it, though, since AWS is constantly making changes and releasing features, so even the documentation that I’ve already read and think I know will be different tomorrow.

Putting together a list of my favorite or most-hated documentation would just result in a list of what I worked on last week, so instead I thought it would be more interesting to point out some “meta” documentation – docs about the docs, and docs about what I think of as the “meta” services, and a few pages that you’ve read – but they’ve changed since you last read them!

AWS meta documentation:

Documentation about AWS meta services:

IAM and STS:

CloudFormation:

Five of the seven links on the CloudFormation Template Reference page were orange (visited recently) for me just now, but these in particular are always open in a tab while I’m writing CloudFormation:

Information from the AWS General Reference:

Replacing EC2 On-Demand Instances With New Spot Instances

with an SMS text warning two minutes before interruption, using CloudWatch Events rules and SNS

The EC2 Spot instance marketplace has had a number of enhancements in the last couple months that have made it more attractive for more use cases. Improvements include:

  • You can run an instance like you normally do for on-demand instances and add one option to make it a Spot instance! The instance starts up immediately if your bid price is sufficient given spot market conditions, and will generally cost much less than on-demand.

  • Spot price volatility has been significantly reduced. Spot prices are now based on long-term trends in supply and demand instead of hour-to-hour bidding wars. This means that instances are much less likely to be interrupted because of short-term spikes in Spot prices, leading to much longer running instances on average.

  • You no longer have to specify a bid price. The Spot Request will default to the instance type’s on-demand price in that region. This saves looking up pricing information and is a reasonable default if you are using Spot to save money over on-demand.

  • CloudWatch Events can now send a two-minute warning before a Spot instance is interrupted, through email, text, AWS Lambda, and more.

Putting these all together makes it easy to take instances you formerly ran on-demand and add an option to turn them into new Spot instances. They are much less likely to be interrupted than with the old spot market, and you can save a little to a lot in hourly costs, depending on the instance type, region, and availability zone.

Plus, you can get a warning a couple minutes before the instance is interrupted, giving you a chance to save work or launch an alternative. This warning could be handled by code (e.g., AWS Lambda) but this article is going to show how to get the warning by email and by SMS text message to your phone.

WARNING!

You should not run a Spot instance unless you can withstand having the instance stopped for a while from time to time.

Make sure you can easily start a replacement instance if the Spot instance is stopped or terminated. This probably includes regularly storing important data outside of the Spot instance (e.g., S3).
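For example, something along these lines run on a schedule keeps work safe outside the instance (the bucket name and paths are placeholders):

# Periodically copy important work off the Spot instance to S3.
# The local path and bucket name are placeholders.
aws s3 sync /home/ubuntu/work s3://my-backup-bucket/work/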

You cannot currently re-start a stopped or hibernated Spot instance manually, though the Spot market may re-start it automatically if you configured it with interruption behavior “stop” (or “hibernate”) and if the Spot price comes back down below your max bid.

If you can live with these conditions and risks, then perhaps give this approach a try.

Start An EC2 Instance With A Spot Request

An aws-cli command to launch an EC2 instance can be turned into a Spot Request by adding a single parameter: --instance-market-options ...

The option parameters we will use do not specify a max bid, so it defaults to the on-demand price for the instance type in the region. We specify “stop” and “persistent” so that the instance will be restarted automatically if it is interrupted temporarily by a rising Spot market price that then comes back down.

Adjust the following options to suit. The important part for this example is the instance market options.

ami_id=ami-c62eaabe # Ubuntu 16.04 LTS Xenial HVM EBS us-west-2 (as of post date)
region=us-west-2
instance_type=t2.small
instance_market_options="MarketType=spot,SpotOptions={InstanceInterruptionBehavior=stop,SpotInstanceType=persistent}"
instance_name="Temporary Demo $(date +'%Y-%m-%d %H:%M')"

instance_id=$(aws ec2 run-instances \
  --region "$region" \
  --instance-type "$instance_type" \
  --image-id "$ami_id" \
  --instance-market-options "$instance_market_options" \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key="Name",Value="'"$instance_name"'"}]' \
  --output text \
  --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Other options can be added as desired. For example, specify an ssh key for the instance with an option like:

  --key-name "$USER"

and a user-data script with:

  --user-data file:///path/to/user-data-script.sh

If there is capacity, the instance will launch immediately and be available quickly. It can be used like any other instance that is launched outside of the Spot market. However, this instance has the risk of being stopped, so make sure you are prepared for this.

Finally, here is one way to get the early warning before the instance is interrupted.
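This is a minimal sketch (the topic name, phone number, and region are placeholders); depending on your account, the SNS topic may also need a policy that lets CloudWatch Events publish to it:

# Create an SNS topic and subscribe a phone number for SMS delivery.
# Topic name, phone number, and region are placeholders.
region=us-west-2
topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name spot-interruption-warning \
  --output text \
  --query 'TopicArn')
aws sns subscribe \
  --region "$region" \
  --topic-arn "$topic_arn" \
  --protocol sms \
  --notification-endpoint '+15555550100'

# Route the two-minute Spot interruption warning events to the topic.
aws events put-rule \
  --region "$region" \
  --name spot-interruption-warning \
  --event-pattern \
    '{"source":["aws.ec2"],"detail-type":["EC2 Spot Instance Interruption Warning"]}'
aws events put-targets \
  --region "$region" \
  --rule spot-interruption-warning \
  --targets "Id=sns,Arn=$topic_arn"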

Listing AWS Accounts in an AWS Organization

using aws-cli with a human-oriented output format

If you keep creating AWS accounts for every project, as I do, then you will build up a large inventory of accounts. Occasionally, you might want to get a list of all of the accounts for easy review.

The following simple aws-cli command pipeline:

  • filters to just the active (non-deleted) AWS accounts
  • sorts by account create/join time

and outputs the following values in nicely aligned columns:

  • AWS account id
  • account email address
  • account name

Here’s the command pipeline (contains some bash-ism):

aws organizations list-accounts \
  --output text \
  --query 'Accounts[?Status==`ACTIVE`].[JoinedTimestamp,Id,Email,Name]' |
  sort |
  cut -f2- |
  column -t -s$'\t'

Here is a sample of what the output might look like:
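The account ids, email addresses, and names below are made up for illustration:

123456789012  project-one@example.com  project-one
234567890123  hackathon@example.com    weekend-hackathon
345678901234  blog-demo@example.com    blog-post-demo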

Streaming AWS DeepLens Video Over SSH

instead of connecting to the DeepLens with HDMI micro cable, monitor, keyboard, mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer --demuxer lavf /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer --demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.

To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 |
  mplayer --demuxer lavf -cache 8092 -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/\*results.mjpeg |
  mplayer --demuxer lavf -cache 8092 -lavfdopts format=mjpeg:probesize=32 -

Note: The AWS Lambda function running in Greengrass on the AWS DeepLens can send the processed video anywhere it wants. Some of the samples that Amazon provides send to /tmp/results.mjpg, some send to /tmp/ssd_results.mjpeg, and some don’t write processed video anywhere. If you are unsure, perhaps find and read the AWS Lambda function code on the device or in the AWS Lambda web console.
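One quick, hedged way to check is to list the candidate output files on the device:

# List possible processed-video output files on the DeepLens. A
# pattern with no matches simply reports an error, which is itself
# informative.
ssh aws_cam@$ip_address 'ls -l /tmp/*.mjpeg /tmp/*.mjpg'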

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!

Bonus

To protect the security of your sensitive DeepLens video feeds:

Creating AWS Accounts From The Command Line With AWS Organizations

Copy+paste some aws-cli commands to add a new AWS account to your AWS Organization

The AWS Organizations service was introduced at AWS re:Invent 2016. The service has some advanced features, but at a minimum, it is a wonderful way to create new accounts easily, with:

  • all accounts under consolidated billing

  • no need to enter a credit card

  • no need to enter a phone number, answer a call, and key in a confirmation that you are human

  • automatic, pre-defined cross-account IAM role assumable by master account IAM users

  • no need to pick and securely store a password

I create new AWS accounts at the slightest provocation. They don’t cost anything as long as you aren’t using resources, and they are a nice way to keep unrelated projects separate for security, cost monitoring, and just keeping track of what resources belong where.

I will create an AWS account for a new, independent, side project. I will create an account for a weekend hackathon to keep that mess away from anything else I care about. I will even create an account just to test a series of AWS commands for a blog post, making sure that I am not depending on some earlier configurations that might not be in readers’ accounts.

By copying and pasting commands based on the examples in this article, I can create and start using a new AWS account in minutes.
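The heart of it is a pair of commands like the following sketch (the email address and account name are placeholders):

# Request a new account in the AWS Organization. The email address
# and account name are placeholders.
request_id=$(aws organizations create-account \
  --email project-one@example.com \
  --account-name project-one \
  --output text \
  --query 'CreateAccountStatus.Id')

# Account creation is asynchronous; poll until the state is SUCCEEDED.
aws organizations describe-create-account-status \
  --create-account-request-id "$request_id" \
  --output text \
  --query 'CreateAccountStatus.[State,AccountId]'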

Before You Start

Rewriting TimerCheck.io In Python 3.6 On AWS Lambda With Chalice

If you are using and depending on the TimerCheck.io service, please be aware that the entire code base will be swapped out and replaced with new code before the end of May, 2017.

Ideally, consumers of the TimerCheck.io API will notice no changes, but if you are concerned, you can test out the new implementation using this temporary endpoint: https://new.timercheck.io/

For example:

https://new.timercheck.io/YOURTIMERNAME/60

and

https://new.timercheck.io/YOURTIMERNAME

This new endpoint uses the same timer database, so all timers can be queried and set using either endpoint.
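For example, a quick check with curl (the timer name is a placeholder, and the exact JSON fields in the responses may differ):

# Set a 60-second timer through the new endpoint.
curl -s https://new.timercheck.io/my-test-timer/60

# Query the same timer through the standard endpoint; both endpoints
# share the same timer database.
curl -s https://timercheck.io/my-test-timer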

At some point before the end of May, the new code will be activated by the standard https://timercheck.io endpoint.

AWS Community Day San Francisco, June 15, 2017

register today for this free conference with content organized and presented by AWS community experts!

The AWS user community in the Bay Area and (US) West Coast is getting together in San Francisco for a full day of technical content, food, drink, and mixing with other users in the AWS community.

AWS Community Day San Francisco

At AWS Community Day San Francisco, the content is selected and planned by leaders in the AWS community, and all of the speakers are AWS experts from the West Coast community of AWS users, including some AWS User Group leaders and AWS Community Heroes.

Amazon is generously footing the bill for the event space, food, multi-media management, event site hosting, registration, and all the little details that make a big event like this go smoothly, but the content is organized and presented by AWS community leaders and experts, instead of Amazon employees.

“When we learn from the community we learn from others like us. We have a shared perspective as users of AWS. AWS Community Day will offer us a unique opportunity that we cannot get at AWS re:Invent or AWS Summit.”
John Varghese, Leader, Bay Area AWS User Group

This free event is being held at the Marriott Marquis San Francisco on Thursday, June 15, 2017.

Breakfast, lunch, snacks, and happy hour are provided to keep the community fueled for learning and networking. Thanks, Amazon!

“I’m excited to spend a day learning and sharing with the rest of the AWS community. The sessions will be a great mix of practical tips about the services we use constantly, and showcases of new technologies that we aren’t so familiar with. It’s great to hear about best practices from Amazon, but it’ll be even better to learn from our peers’ real world results.”
Ryan Park, Engineering Manager, Slack

There are eight sessions planned in two tracks, with plenty of valuable information for all types of AWS users, and lots of opportunities to meet others in the community and chat about your AWS experiences. The current agenda includes:

Incompatible: Static S3 Website With CloudFront Forwarding All Headers

a small lesson learned in setting up a static web site with S3 and CloudFront

I created a static web site hosted in an S3 bucket named www.example.com (not the real name) and enabled accessing it as a website. I wanted delivery to be fast to everybody around the world, so I created a CloudFront distribution in front of the S3 bucket.

I wanted S3 to automatically add “index.html” to URLs ending in a slash (CloudFront can’t do this), so I configured the CloudFront distribution to access the S3 bucket as a web site using www.example.com.s3-website-us-east-1.amazonaws.com as the origin server.

Before sending all of the www.example.com traffic to the new setup, I wanted to test it, so I added test.example.com to the list of CNAMEs in the CloudFront distribution.

After setting up Route53 so that DNS lookups for test.example.com would resolve to the new CloudFront endpoint, I loaded it in my browser and got the following error: