Creating AWS IAM Access Analyzers In All Regions Of All Accounts

Amazon recently announced the AWS IAM Access Analyzer, a useful tool to help discover if you have granted unintended access to specific types of resources in your AWS account.

At the moment, an Access Analyzer needs to be created in each region of each account where you want to run it.

This manual requirement can be a lot of work, and it is a common complaint from customers. Given that Amazon listens to customer feedback, and since we currently have to specify a “type” of “ACCOUNT”, I expect Amazon may eventually make it easier to run Access Analyzer across all regions, and perhaps across all accounts in an AWS Organization. Until then…

This article shows how I created an AWS IAM Access Analyzer in all regions of all accounts in my AWS Organization using the aws-cli.

Prerequisites

To make this easy, I use the bash helper functions that I defined in last week’s blog post here:

Running AWS CLI Commands Across All Accounts In An AWS Organization

Please read the blog post to see what assumptions I make about the AWS Organization and account setup. You may need to tweak things if your setup differs from mine.

Here is my GitHub repo that makes it more convenient for me to install the bash functions. If your AWS account structure matches mine sufficiently, it might work for you, too:

https://github.com/alestic/aws-cli-multi-account-sessions

IAM Access Analyzer In All Regions Of Single Account

To start, let’s show how to create an IAM Access Analyzer in all regions of a single account.

Here’s a simple command to get all the regions in the current AWS account:

aws ec2 describe-regions \
  --output text \
  --query 'Regions[][RegionName]'

This command creates an IAM Access Analyzer in a specific region. We’ll tack on a UUID because that’s what Amazon does, though I suspect it’s not really necessary.

region=us-east-1
uuid=$(uuid -v4 -FSIV || echo "1") # may need to install "uuid" command
analyzer="accessanalyzer-$uuid"
aws accessanalyzer create-analyzer \
   --region "$region" \
   --analyzer-name "$analyzer" \
   --type ACCOUNT

By default, there is a limit of one IAM Access Analyzer per region per account. The fact that this is a “default limit” implies that it may be increased by request, but for this guide, we simply won’t create an IAM Access Analyzer in a region if one already exists.

This command lists the names of any IAM Access Analyzers that have already been created in a region:

region=us-east-1
aws accessanalyzer list-analyzers \
  --region "$region" \
  --output text \
  --query 'analyzers[][name]'

We can put the above together, iterating over the regions, checking to see if an IAM Access Analyzer already exists, and creating one if it doesn’t:
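
Here’s a minimal sketch of what that loop could look like, using only the commands already shown above:

# Sketch: create an IAM Access Analyzer in every region that doesn't already have one
for region in $(aws ec2 describe-regions \
                  --output text \
                  --query 'Regions[][RegionName]'); do
  existing=$(aws accessanalyzer list-analyzers \
    --region "$region" \
    --output text \
    --query 'analyzers[][name]')
  if [ -n "$existing" ]; then
    echo "$region: analyzer already exists: $existing"
    continue
  fi
  uuid=$(uuid -v4 -FSIV || echo "1")
  analyzer="accessanalyzer-$uuid"
  echo "$region: creating analyzer: $analyzer"
  aws accessanalyzer create-analyzer \
    --region "$region" \
    --analyzer-name "$analyzer" \
    --type ACCOUNT
done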

Running AWS CLI Commands Across All Accounts In An AWS Organization

by generating a temporary IAM STS session with MFA then assuming cross-account IAM roles

I recently had the need to run some AWS commands across all AWS accounts in my AWS Organization. This was a bit more difficult to accomplish cleanly than I had assumed it might be, so I present the steps here for me to find when I search the Internet for it in the future.

You are also welcome to try out this approach, though if your account structure doesn’t match mine, it might require some tweaking.

Assumptions And Background

(Almost) all of my AWS accounts are in a single AWS Organization. This allows me to ask the Organization for the list of account ids.

I have a role named “admin” in each of my AWS accounts. It has a lot of power to do things. The default cross-account admin role name for accounts created in AWS Organizations is “OrganizationAccountAccessRole”.

I start with an IAM principal (IAM user or IAM role) that the aws-cli can access through a “source profile”. This principal has the power to assume the “admin” role in other AWS accounts. In fact, that principal has almost no other permissions.

I require MFA whenever a cross-account IAM role is assumed.

You can read about how I set up AWS accounts here, including the above configuration:

Creating AWS Accounts From The Command Line With AWS Organizations

I use and love the aws-cli and bash. You should, too, especially if you want to use the instructions in this guide.

I jump through some hoops in this article to make sure that AWS credentials never appear in command lines, in the shell history, or in files, and are not passed as environment variables to processes that don’t need them (no export).

Setup

For convenience, we can define some bash functions that will improve clarity when we want to run commands in AWS accounts. These freely use bash variables to pass information between functions.

The aws-session-init function obtains temporary session credentials using MFA (optional). These are used to generate temporary assume-role credentials for each account without having to re-enter an MFA token for each account. This function accepts an optional MFA serial number and source profile name. It is run once.

aws-session-init() {
  # Sets: source_access_key_id source_secret_access_key source_session_token
  local source_profile=${1:-${AWS_SESSION_SOURCE_PROFILE:?source profile must be specified}}
  local mfa_serial=${2:-$AWS_SESSION_MFA_SERIAL}
  local token_code=
  local mfa_options=
  if [ -n "$mfa_serial" ]; then
    read -s -p "Enter MFA code for $mfa_serial: " token_code
    echo
    mfa_options="--serial-number $mfa_serial --token-code $token_code"
  fi
  read -r source_access_key_id \
          source_secret_access_key \
          source_session_token \
    <<<$(aws sts get-session-token \
           --profile $source_profile \
           $mfa_options \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$source_access_key_id" && return 0 || return 1
}
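
The session credentials set by aws-session-init can then be used to assume the “admin” role in each target account. The full post builds on this with additional helpers; the function below is just an illustrative sketch (its name and the AWS_SESSION_ROLE default are mine, not from the post):

aws-session-run-in-account() {
  # Sketch: assume the cross-account role in one account and run a single
  # command with the resulting temporary credentials, without exporting them
  local account=$1; shift
  local role=${AWS_SESSION_ROLE:-admin}
  local access_key_id secret_access_key session_token
  read -r access_key_id secret_access_key session_token \
    <<<$(AWS_ACCESS_KEY_ID=$source_access_key_id \
         AWS_SECRET_ACCESS_KEY=$source_secret_access_key \
         AWS_SESSION_TOKEN=$source_session_token \
         aws sts assume-role \
           --role-arn "arn:aws:iam::$account:role/$role" \
           --role-session-name "multi-account-$account" \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$access_key_id" || return 1
  AWS_ACCESS_KEY_ID=$access_key_id \
  AWS_SECRET_ACCESS_KEY=$secret_access_key \
  AWS_SESSION_TOKEN=$session_token \
    "$@"
}

For example, aws-session-run-in-account 123456789012 aws sts get-caller-identity would run that one aws-cli command in the hypothetical account 123456789012.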

Using AWS Step Functions To Schedule Or Delay SNS Message Publication

with no AWS Lambda function required

A co-worker at Archer asked if there was a way to schedule messages published to an Amazon SNS topic.

I know that scheduling messages to SQS queues is possible to some extent using the DelaySeconds message timer, which allows postponing visibility in the queue for up to 15 minutes, but SNS does not currently have native support for delays.

However, since AWS Step Functions has built-in integration with SNS, and since it also has a Wait state that can schedule or delay execution, we can implement a fairly simple Step Functions state machine that puts a delay in front of publishing a message to an SNS topic, without any AWS Lambda code.

Overview

This article uses an AWS CloudFormation template to create a sample AWS stack with one SNS topic and one Step Functions state machine with two states.

AWS architecture diagram

This is the CloudFormation template, if you’d like to review it:

CloudFormation template: aws-sns-delayed

Here is the Step Functions state machine definition from the above CloudFormation template:

{
  "StartAt": "Delay",
  "Comment": "Publish to SNS with delay",
  "States": {
    "Delay": {
      "Type": "Wait",
      "SecondsPath": "$.delay_seconds",
      "Next": "Publish to SNS"
    },
    "Publish to SNS": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "${SNSTopic}",
        "Subject.$": "$.subject",
        "Message.$": "$.message"
      },
      "End": true
    }
  }
}

The “Delay” state waits for the number of seconds specified by “delay_seconds” in the input to the state machine execution (as we’ll see below).

The “Publish to SNS” task uses the Step Functions integration with SNS to call the publish API directly with the parameters listed, some of which are also passed in to the state machine execution.

Now let’s take it for a spin!
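
As a quick sketch, here’s how an execution could be started with the aws-cli, passing the delay and message as input (the state machine ARN below is a placeholder; use the one output by your CloudFormation stack):

# Placeholder ARN: take the real value from the CloudFormation stack outputs
state_machine_arn="arn:aws:states:us-east-1:123456789012:stateMachine:SNSDelayed"

aws stepfunctions start-execution \
  --state-machine-arn "$state_machine_arn" \
  --input '{
    "delay_seconds": 60,
    "subject": "Delayed SNS test",
    "message": "This message was published through a Step Functions Wait state."
  }'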

AWS Solutions Update Feed

a serverless monitoring and alerting service built by Kira Hammond

Amazon recently announced AWS Solutions, a central catalog of well-designed, well-documented CloudFormation templates that solve common problems or create standard solution frameworks. My tweet about this announcement garnered more interest than I expected.

One common request was to have a way to be alerted when Amazon publishes new AWS Solutions to this catalog. Kira Hammond (yes, relation) has used AWS to build and launch a public service that fills this need.

Kira’s “AWS Solutions Update Feed” monitors the AWS Solutions catalog and posts a message to an SNS topic whenever new solutions are added. The SNS topic is public, so anybody in the world can subscribe to receive these alerts through email, AWS Lambda, or SQS.

Design

Here’s an architecture diagram showing how Kira constructed this monitoring and alerting service using serverless technologies on AWS:

architecture diagram showing data flows that are described next

Flow

The basic operation of the service includes:

  1. A scheduled trigger from a CloudWatch Event Rule runs an AWS Lambda function every N hours.

  2. The AWS Lambda function, written in Python, makes an HTTPS request to the AWS Solutions catalog to download the current list of solutions.

  3. The function retrieves the last known list of solutions from an S3 bucket.

  4. The function compares the previous list with the current list, generating a list of any new AWS Solutions.

  5. If there are any new solutions, a message is posted to a public SNS topic, sending the message to all subscribers.

  6. The current list of solutions is saved to S3 for comparison in future runs.

Subscribing

If you want to receive alerts when Amazon adds entries to the AWS Solutions catalog, you can subscribe to this public SNS topic:
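
As a sketch, an email subscription to a public SNS topic can be created with the aws-cli, run in the topic’s region, with the real topic ARN substituted for the placeholder below:

# Placeholder ARN: substitute the real public topic ARN
topic_arn="arn:aws:sns:us-east-1:123456789012:aws-solutions-update-feed"

aws sns subscribe \
  --region us-east-1 \
  --topic-arn "$topic_arn" \
  --protocol email \
  --notification-endpoint "you@example.com"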

Using AWS SSM Parameter Store With Git SSH Keys

and employing them securely

At Archer, we have been moving credentials into AWS Systems Manager (SSM) Parameter Store and AWS Secrets Manager. One of the more interesting credentials is an SSH key that is used to clone a GitHub repository into an environment that has IAM roles available (e.g., AWS Lambda, Fargate, EC2).

We’d like to treat this SSH private key as a secret that is stored securely in SSM Parameter Store, with access controlled by AWS IAM, and only retrieve it briefly when it needs to be used. We don’t even want to store it on disk while it is being used, no matter how temporarily.

After a number of design and test iterations with Buddy, here is one of the approaches we ended up with. It’s one I like for how clean it is, though it may not be what ends up in the final code.

This solution assumes that you are using bash to run your Git commands, but could be converted to other languages if needed.
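
For context, here’s roughly how such a key could be stored in the first place. This is a sketch, not the exact setup from our project; the parameter name matches the example used later, and the key file path is hypothetical:

# One-time setup sketch: store the private key as an encrypted SecureString.
# (Delete the local key file afterward if it shouldn't stay on disk.)
aws ssm put-parameter \
  --name "/githubsshkeys/gitreader" \
  --type SecureString \
  --value "file:///path/to/deploy_key"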

Using The Solution

Here is the bash function that retrieves the SSH private key from SSM Parameter Store, adds it to a temporary(!) ssh-agent process, and runs the desired git subcommand using the same temporary ssh-agent process:

git-with-ssm-key()
{
  local ssm_key="$1"; shift

  # Run git under a temporary ssh-agent that lives only for this command.
  # The SSH private key is fetched from SSM Parameter Store (decrypted via
  # --with-decryption) and piped straight into ssh-add, never touching disk.
  ssh-agent bash -o pipefail -c '
    if aws ssm get-parameter \
         --with-decryption \
         --name "'$ssm_key'" \
         --output text \
         --query Parameter.Value |
       ssh-add -q -
    then
      git "$@"
    else
      echo >&2 "ERROR: Failed to get or add key: '$ssm_key'"
      exit 1
    fi
  ' bash "$@"
}

Here is a sample of how the above bash function might be used to clone a repository using a Git SSH private key stored in SSM Parameter Store under the key “/githubsshkeys/gitreader”:

git-with-ssm-key /githubsshkeys/gitreader clone git@github.com:alestic/myprivaterepo.git

Other git subcommands can be run the same way. The SSH private key is only kept in memory and only during the execution of the git command.

How It Works

Guest Post: Notable AWS re:Invent Sessions, by Jennine Townsend

A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

There were so many sessions at re:Invent! Now that it’s over, I want to watch some sessions on video, but which ones?

Of course I’ll pick out those that are specific to my interests, but I also want to know which sessions had good buzz, so I made a list that mashes together the sessions I heard good things about on Twitter with those that had lots of repeats and overflow sessions, figuring those must have been popular.

Replacing EC2 On-Demand Instances With New Spot Instances

with an SMS text warning two minutes before interruption, using CloudWatch Events Rules And SNS

The EC2 Spot instance marketplace has had a number of enhancements in the last couple months that have made it more attractive for more use cases. Improvements include:

  • You can run an instance like you normally do for on-demand instances and add one option to make it a Spot instance! The instance starts up immediately if your bid price is sufficient given spot market conditions, and will generally cost much less than on-demand.

  • Spot price volatility has been significantly reduced. Spot prices are now based on long-term trends in supply and demand instead of hour-to-hour bidding wars. This means that instances are much less likely to be interrupted because of short-term spikes in Spot prices, leading to much longer running instances on average.

  • You no longer have to specify a bid price. The Spot Request will default to the instance type’s on-demand price in that region. This saves looking up pricing information and is a reasonable default if you are using Spot to save money over on-demand.

  • CloudWatch Events can now send a two-minute warning before a Spot instance is interrupted, through email, text, AWS Lambda, and more.

Putting these all together makes it easy to take instances you formerly ran on-demand and add an option to turn them into new Spot instances. They are much less likely to be interrupted than with the old spot market, and you can save a little to a lot in hourly costs, depending on the instance type, region, and availability zone.

Plus, you can get a warning a couple minutes before the instance is interrupted, giving you a chance to save work or launch an alternative. This warning could be handled by code (e.g., AWS Lambda) but this article is going to show how to get the warning by email and by SMS text message to your phone.

WARNING!

You should not run a Spot instance unless you can withstand having the instance stopped for a while from time to time.

Make sure you can easily start a replacement instance if the Spot instance is stopped or terminated. This probably includes regularly storing important data outside of the Spot instance (e.g., S3).

You cannot currently restart a stopped or hibernated Spot instance manually, though the Spot market may restart it automatically if you configured it with interruption behavior “stop” (or “hibernate”) and if the Spot price comes back down below your max bid.

If you can live with these conditions and risks, then perhaps give this approach a try.

Start An EC2 Instance With A Spot Request

An aws-cli command to launch an EC2 instance can be turned into a Spot Request by adding a single parameter: --instance-market-options ...

The option parameters we will use do not specify a max bid, so it defaults to the on-demand price for the instance type in the region. We specify “stop” and “persistent” so that the instance will be restarted automatically if it is interrupted temporarily by a rising Spot market price that then comes back down.

Adjust the following options to suit. The important part for this example is the instance market options.

ami_id=ami-c62eaabe # Ubuntu 16.04 LTS Xenial HVM EBS us-west-2 (as of post date)
region=us-west-2
instance_type=t2.small
instance_market_options="MarketType='spot',SpotOptions={InstanceInterruptionBehavior='stop',SpotInstanceType='persistent'}"
instance_name="Temporary Demo $(date +'%Y-%m-%d %H:%M')"

instance_id=$(aws ec2 run-instances \
  --region "$region" \
  --instance-type "$instance_type" \
  --image-id "$ami_id" \
  --instance-market-options "$instance_market_options" \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key="Name",Value="'"$instance_name"'"}]' \
  --output text \
  --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Other options can be added as desired. For example, specify an ssh key for the instance with an option like:

  --key-name $USER

and a user-data script with:

  --user-data file:///path/to/user-data-script.sh

If there is capacity, the instance will launch immediately and be available quickly. It can be used like any other instance that is launched outside of the Spot market. However, this instance has the risk of being stopped, so make sure you are prepared for this.

The next section presents a way to get the early warning before the instance is interrupted.
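
As a rough sketch of that idea (not the exact steps from the rest of the post), a CloudWatch Events rule can match the two-minute Spot interruption warning and send it to an SNS topic that your email address or SMS number is subscribed to. The topic ARN and rule name below are placeholders, and the SNS topic policy must allow CloudWatch Events to publish to it:

# Placeholders: an SNS topic you have already created and subscribed to
sns_topic_arn="arn:aws:sns:us-west-2:123456789012:spot-interruption-alerts"
rule_name="ec2-spot-interruption-warning"

aws events put-rule \
  --region "$region" \
  --name "$rule_name" \
  --event-pattern '{
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Spot Instance Interruption Warning"]
  }'

aws events put-targets \
  --region "$region" \
  --rule "$rule_name" \
  --targets "Id=sns,Arn=$sns_topic_arn"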

Streaming AWS DeepLens Video Over SSH

instead of connecting to the DeepLens with HDMI micro cable, monitor, keyboard, mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer -demuxer lavf /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.

To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 |
  mplayer -demuxer lavf -cache 8092 -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/\*results.mjpeg |
  mplayer -demuxer lavf -cache 8092 -lavfdopts format=mjpeg:probesize=32 -

Note: The AWS Lambda function running in Greengrass on the AWS DeepLens can send the processed video anywhere it wants. Some of the samples that Amazon provides send to /tmp/results.mjpeg, some send to /tmp/ssd_results.mjpeg, and some don’t write processed video anywhere. If you are unsure, perhaps find and read the AWS Lambda function code on the device or in the AWS Lambda web console.

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!

Bonus

To protect the security of your sensitive DeepLens video feeds:

Rewriting TimerCheck.io In Python 3.6 On AWS Lambda With Chalice

If you are using and depending on the TimerCheck.io service, please be aware that the entire code base will be swapped out and replaced with new code before the end of May 2017.

Ideally, consumers of the TimerCheck.io API will notice no changes, but if you are concerned, you can test out the new implementation using this temporary endpoint: https://new.timercheck.io/

For example:

https://new.timercheck.io/YOURTIMERNAME/60

and

https://new.timercheck.io/YOURTIMERNAME

This new endpoint uses the same timer database, so all timers can be queried and set using either endpoint.
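
For example, you could exercise both endpoint forms from the command line:

curl -s https://new.timercheck.io/YOURTIMERNAME/60
curl -s https://new.timercheck.io/YOURTIMERNAME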

At some point before the end of May, the new code will be activated by the standard https://timercheck.io endpoint.

Incompatible: Static S3 Website With CloudFront Forwarding All Headers

a small lesson learned in setting up a static web site with S3 and CloudFront

I created a static web site hosted in an S3 bucket named www.example.com (not the real name) and enabled accessing it as a website. I wanted delivery to be fast to everybody around the world, so I created a CloudFront distribution in front of the S3 bucket.

I wanted S3 to automatically add “index.html” to URLs ending in a slash (CloudFront can’t do this), so I configured the CloudFront distribution to access the S3 bucket as a web site using www.example.com.s3-website-us-east-1.amazonaws.com as the origin server.

Before sending all of the www.example.com traffic to the new setup, I wanted to test it, so I added test.example.com to the list of CNAMEs in the CloudFront distribution.

After setting up Route53 so that DNS lookups for test.example.com would resolve to the new CloudFront endpoint, I loaded it in my browser and got the following error:

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first-year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.

Invoice: Summary

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly
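
A minimal sketch of this step (the voice, text, and output file name here are just examples):

# Sketch: convert a short text string into an MP3 file with Amazon Polly
aws polly synthesize-speech \
  --output-format mp3 \
  --voice-id Joanna \
  --text "Hello! This call was brought to you by Amazon Polly and Twilio." \
  polly-speech.mp3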

Watching AWS CloudFormation Stack Status

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources like this?

[asciinema recording: aws-cloudformation-stack-status (watching stack-create), by Eric Hammond]

That’s what the output of the new aws-cloudformation-stack-status command looks like when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.
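
If you just want a rough approximation without installing the command, plain aws-cli can poll the same underlying stack events (a crude substitute for the dedicated command; the stack name below is a placeholder):

# Crude substitute: poll recent stack events every few seconds
stack_name=YOUR-STACK-NAME
watch -n 5 "aws cloudformation describe-stack-events \
  --stack-name $stack_name \
  --output table \
  --query 'StackEvents[*].[ResourceStatus,ResourceType,LogicalResourceId]'"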

Alestic.com Blog Infrastructure Upgrade

publishing new blog posts with “git push”

For the curious, the Alestic.com blog has been running for a while on the Git-backed Static Website CloudFormation stack using the AWS Lambda Static Site Generator Plugin for Hugo.

Not much has changed in the design because I had been using Hugo before. However, Hugo is now automatically run inside of an AWS Lambda function triggered by updates to a CodeCommit Git repository.

It has been a pleasure writing with transparent review and publication processes enabled by Hugo and AWS:

  • When I save a blog post change in my editor (written using Markdown), a local Hugo process on my laptop automatically detects the file change, regenerates static pages, and refreshes the view in my browser.

  • When I commit and push blog post changes to my CodeCommit Git repository, the Git-backed Static Website stack automatically regenerates the static blog site using Hugo and deploys to the live website served by AWS.

Running aws-cli Commands Inside An AWS Lambda Function

even though aws-cli is not available by default in AWS Lambda

The AWS Lambda environments for each programming language (e.g., Python, Node, Java) already have the AWS client SDK packages pre-installed for those languages. For example, the Python AWS Lambda environment has boto3 available, which is ideal for connecting to and using AWS services in your function.

This makes it easy to use AWS Lambda as the glue for AWS. A function can be triggered by many different service events, and can respond by reading from, storing to, and triggering other services in the AWS ecosystem.

However, there are a few things that aws-cli currently does better than the AWS SDKs alone. For example, the following command is an efficient way to take the files in a local directory and recursively update a website bucket, uploading (in parallel) only the files that have changed, while setting important object attributes, including guessed MIME types:

aws s3 sync --delete --acl public-read LOCALDIR/ s3://BUCKET/

The aws-cli software is not currently pre-installed in the AWS Lambda environment, but we can fix that with a little effort.
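
One common approach (a sketch, with hypothetical directory and file names) is to bundle aws-cli into the function’s deployment package with pip:

# Sketch: install aws-cli into the directory that will become the
# Lambda deployment package, then zip it up with your function code
pip install awscli -t ./lambda-package/
cd ./lambda-package
zip -r ../lambda-function.zip .

# Inside the Lambda function, the bundled ./bin/aws script can then be
# invoked with the runtime's python (e.g., via subprocess).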