Streaming AWS DeepLens Video Over SSH

instead of connecting to the DeepLens with a micro-HDMI cable, monitor, keyboard, and mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer -demuxer h264es /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens, which is available as a checkbox option during the device’s initial setup. I’m working to get instructions on how to enable ssh access after setup and will update this post once they are available.

To view the video streams over ssh, we use the same mplayer options and the same source stream files as above, but send the stream over ssh and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 |
  mplayer -demuxer h264es -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg |
  mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, and mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!

Bonus

To protect the security of your sensitive DeepLens video feeds:

Creating AWS Accounts From The Command Line With AWS Organizations

Copy+paste some aws-cli commands to add a new AWS account to your AWS Organization

The AWS Organizations service was introduced at AWS re:Invent 2016. The service has some advanced features, but at a minimum, it is a wonderful way to create new accounts easily, with:

  • all accounts under consolidated billing

  • no need to enter a credit card

  • no need to enter a phone number, answer a call, and key in a confirmation that you are human

  • automatic, pre-defined cross-account IAM role assumable by master account IAM users

  • no need to pick and securely store a password

I create new AWS accounts at the slightest provocation. They don’t cost anything as long as you aren’t using resources, and they are a nice way to keep unrelated projects separate for security, cost monitoring, and just keeping track of what resources belong where.

I will create an AWS account for a new, independent side project. I will create an account for a weekend hackathon to keep that mess away from anything else I care about. I will even create an account just to test a series of AWS commands for a blog post, making sure that I am not depending on some earlier configurations that might not be in readers’ accounts.

By copying and pasting commands based on the examples in this article, I can create and start using a new AWS account in minutes.
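For example, here is a rough sketch of the kind of commands involved, using a hypothetical account name, email address, and request id:

# Request creation of a new AWS account in the Organization
aws organizations create-account \
  --account-name "my-new-project" \
  --email "my-new-project@example.com"

# Poll the status of the request, using the Id returned above
aws organizations describe-create-account-status \
  --create-account-request-id "car-EXAMPLE"

# Once the account is ready, assume the pre-defined cross-account role
aws sts assume-role \
  --role-arn "arn:aws:iam::NEW_ACCOUNT_ID:role/OrganizationAccountAccessRole" \
  --role-session-name "new-account-setup"

OrganizationAccountAccessRole is the default name AWS Organizations gives the cross-account IAM role it creates in new accounts.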

Before You Start

Rewriting TimerCheck.io In Python 3.6 On AWS Lambda With Chalice

If you are using and depending on the TimerCheck.io service, please be aware that the entire code base will be swapped out and replaced with new code before the end of May 2017.

Ideally, consumers of the TimerCheck.io API will notice no changes, but if you are concerned, you can test out the new implementation using this temporary endpoint: https://new.timercheck.io/

For example:

https://new.timercheck.io/YOURTIMERNAME/60

and

https://new.timercheck.io/YOURTIMERNAME

This new endpoint uses the same timer database, so all timers can be queried and set using either endpoint.
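For example, both operations can be exercised from the command line with curl:

# Set (or reset) a timer to 60 seconds
curl -s https://new.timercheck.io/YOURTIMERNAME/60

# Query the remaining seconds on the same timer
curl -s https://new.timercheck.io/YOURTIMERNAME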

At some point before the end of May, the new code will be activated by the standard https://timercheck.io endpoint.

AWS Community Day San Francisco, June 15, 2017

register today for this free conference with content organized and presented by AWS community experts!

The AWS user community in the Bay Area and (US) West Coast is getting together in San Francisco for a full day of technical content, food, drink, and mixing with other users in the AWS community.


At AWS Community Day San Francisco, the content is selected and planned by leaders in the AWS community, and all of the speakers are AWS experts from the West Coast community of AWS users, including some AWS User Group leaders and AWS Community Heroes.

Amazon is generously footing the bill for the event space, food, multi-media management, event site hosting, registration, and all the little details that make a big event like this go smoothly, but the content is organized and presented by AWS community leaders and experts, instead of Amazon employees.

“When we learn from the community we learn from others like us. We have a shared perspective as users of AWS. AWS Community Day will offer us a unique opportunity that we cannot get at AWS re:Invent or AWS Summit.”
John Varghese, Leader, Bay Area AWS User Group

This free event is being held at the Marriott Marquis San Francisco on Thursday, June 15, 2017.

Breakfast, lunch, snacks, and happy hour are provided to keep the community fueled for learning and networking. Thanks, Amazon!

“I’m excited to spend a day learning and sharing with the rest of the AWS community. The sessions will be a great mix of practical tips about the services we use constantly, and showcases of new technologies that we aren’t so familiar with. It’s great to hear about best practices from Amazon, but it’ll be even better to learn from our peers’ real world results.”
Ryan Park, Engineering Manager, Slack

There are eight sessions planned in two tracks, with plenty of valuable information for all types of AWS users, and lots of opportunities to meet others in the community and chat about your AWS experiences. The current agenda includes:

Incompatible: Static S3 Website With CloudFront Forwarding All Headers

a small lesson learned in setting up a static web site with S3 and CloudFront

I created a static web site hosted in an S3 bucket named www.example.com (not the real name) and enabled S3 website hosting on the bucket. I wanted delivery to be fast for everybody around the world, so I created a CloudFront distribution in front of the S3 bucket.

I wanted S3 to automatically add “index.html” to URLs ending in a slash (CloudFront can’t do this), so I configured the CloudFront distribution to access the S3 bucket as a web site using www.example.com.s3-website-us-east-1.amazonaws.com as the origin server.
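Here is a rough sketch of that setup with aws-cli (simplified, not the exact commands from my configuration):

# Use the S3 *website* endpoint, not the REST endpoint, as the origin,
# so S3 adds "index.html" to URLs ending in a slash
aws cloudfront create-distribution \
  --origin-domain-name www.example.com.s3-website-us-east-1.amazonaws.com \
  --default-root-object index.html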

Before sending all of the www.example.com traffic to the new setup, I wanted to test it, so I added test.example.com to the list of CNAMEs in the CloudFront distribution.

After setting up Route 53 so that DNS lookups for test.example.com would resolve to the new CloudFront endpoint, I loaded it in my browser and got the following error:

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is more than 12 months old, so the first-year free tier discounts no longer apply.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.
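That works out to roughly $5.30 per million API requests, or nearly 190,000 requests per dollar.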

Invoice: Summary

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1. Generate Speech Audio With Amazon Polly
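A minimal sketch of this step, assuming the voice Joanna and an output file named speech.mp3:

# Convert a line of text to speech, saving the audio as an mp3 file
aws polly synthesize-speech \
  --output-format mp3 \
  --voice-id Joanna \
  --text 'Hello! This speech was generated by Amazon Polly.' \
  speech.mp3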

Watching AWS CloudFormation Stack Status

live display of current event status for each stack resource

Would you like to watch the progress of your new CloudFormation stack resources in real time?

That’s the live view the new aws-cloudformation-stack-status command gives me when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.
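You can approximate a cruder version of this with plain aws-cli (a sketch, assuming a stack named mystack):

watch -n 5 '
  aws cloudformation describe-stack-events \
    --stack-name mystack \
    --query "StackEvents[].[Timestamp,ResourceStatus,ResourceType,LogicalResourceId]" \
    --output table
'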

Optional Parameters For Pre-existing Resources in AWS CloudFormation Templates

stack creates new AWS resources unless user specifies pre-existing

Background

I like to design CloudFormation templates that create all of the resources necessary to implement the desired functionality without requiring a lot of separate, advanced setup. For example, the AWS Git-backed Static Website creates all of the interesting pieces including a CodeCommit Git repository, S3 buckets for web site content and logging, and even the Route 53 hosted zone.

Creating all of these resources is great if you are starting from scratch on a new project. However, you may sometimes want to use a CloudFormation template to enhance an existing account where one or more of the AWS resources already exist.

For example, consider the case where the user already has a CodeCommit Git repository and a Route 53 hosted zone for their domain. They still want all of the enhanced functionality provided in the Git-backed static website CloudFormation stack, but would rather not have to fork and edit the template code just to fit it into the existing environment.

What if we could use the same CloudFormation template for different types of situations, sometimes plugging in pre-existing AWS resources, and other times letting the stack create the resources for us?
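From the user’s point of view, the goal looks something like this (a sketch with a hypothetical template file and parameter name):

# Let the stack create a new CodeCommit repository (the default)
aws cloudformation create-stack \
  --stack-name my-website \
  --template-body file://website-template.yml

# Or plug in a pre-existing repository instead
aws cloudformation create-stack \
  --stack-name my-website \
  --template-body file://website-template.yml \
  --parameters ParameterKey=GitRepositoryName,ParameterValue=my-existing-repo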

Solution

Alestic.com Blog Infrastructure Upgrade

publishing new blog posts with “git push”

For the curious, the Alestic.com blog has been running for a while on the Git-backed Static Website CloudFormation stack using the AWS Lambda Static Site Generator Plugin for Hugo.

Not much has changed in the design because I had been using Hugo before. However, Hugo is now automatically run inside an AWS Lambda function triggered by updates to a CodeCommit Git repository.

It has been a pleasure writing with transparent review and publication processes enabled by Hugo and AWS:

  • When I save a blog post change in my editor (written using Markdown), a local Hugo process on my laptop automatically detects the file change, regenerates static pages, and refreshes the view in my browser.

  • When I commit and push blog post changes to my CodeCommit Git repository, the Git-backed Static Website stack automatically regenerates the static blog site using Hugo and deploys to the live website served by AWS.
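For example, publishing a new post now looks roughly like this (hypothetical file name):

# Write the post in Markdown, preview locally, then publish with git
git add content/post/new-post.md
git commit -m 'Add new post'
git push   # triggers the Lambda function that runs Hugo and deploys the site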