AWS Lambda: Pay The Same Price For Faster Execution

Multiply the speed of compute-intensive Lambda functions without (much of an) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network bandwidth, and disk I/O allocated to a function are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.
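
In symbols: cost ∝ memory × duration; CPU ∝ memory; and, ideally, duration ∝ 1/CPU; so cost ∝ memory / CPU, which stays roughly constant as memory (and with it CPU) is dialed up.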

If your Lambda function is primarily CPU-bound and takes at least several hundred milliseconds to execute, then you may find that you can allocate more CPU simply by allocating more memory, getting the same work done in less time for about the same cost.
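
For example (illustrative arithmetic only, ignoring the per-request charge and per-100 ms billing rounding): Lambda duration is billed in GB-seconds, so

  128 MB for 1,000 ms = 0.125 GB × 1.000 s = 0.125 GB-seconds
1,024 MB for   125 ms = 1.000 GB × 0.125 s = 0.125 GB-seconds

If the eight-fold memory allocation really does run the CPU-bound work eight times faster, the duration cost is identical and the results arrive in one eighth of the time.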

Convert Running EC2 Instance to EBS-Optimized Instance with Provisioned IOPS EBS Volumes

Amazon just announced two related features for getting super-fast, consistent performance with EBS volumes: (1) Provisioned IOPS EBS volumes, and (2) EBS-Optimized Instances.

Starting new instances and EBS volumes with these features is fine, but what if you already have some running instances you’d like to upgrade for faster and more consistent disk performance?

To take full advantage of these two AWS features, two separate changes need to be made:

  1. Convert the standard EBS volume(s) into new Provisioned IOPS EBS volume(s).

  2. Convert the standard EC2 instance into an EBS-Optimized instance.

This article demonstrates how to take an existing, running EBS boot instance and convert it to use both of these EBS performance features. Note that there will be some increased costs; please study Amazon’s published pricing before attempting.
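
Here is a minimal sketch of the idea using the EC2 API tools. The --type, --iops, and --ebs-optimized options are as I recall them from the 2012-era tools, so verify them against your installed version, and adjust the placeholder IDs, availability zone, IOPS value, and root device to your environment:

# Assumptions: $instanceid boots from $volumeid in $zone, and its
# instance type supports EBS optimization (e.g., m1.large)
instanceid=i-XXXXXXXX
volumeid=vol-XXXXXXXX
zone=us-east-1a

# Stop the instance so the boot volume can be swapped safely
ec2-stop-instances $instanceid

# Snapshot the standard volume, then create a Provisioned IOPS copy
snapshotid=$(ec2-create-snapshot $volumeid | cut -f2)
# (wait for the snapshot to reach "completed" before continuing)
newvolumeid=$(ec2-create-volume --snapshot $snapshotid \
  --type io1 --iops 1000 -z $zone | cut -f2)

# Swap the boot volume (adjust /dev/sda1 to your root device)
ec2-detach-volume $volumeid
ec2-attach-volume -d /dev/sda1 -i $instanceid $newvolumeid

# Enable EBS optimization, then start the instance again
ec2-modify-instance-attribute $instanceid --ebs-optimized true
ec2-start-instances $instanceid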

Using RAID on EC2 EBS Volumes to Break the 1TB Barrier and Increase Performance

Amazon EC2 currently limits each Elastic Block Store (EBS) volume to 1,000 GB (1 TB). It is possible to create file systems larger than this limit by striping with RAID 0 across multiple EBS volumes. RAID 0 can also improve the performance of the file system, reducing total I/O wait, as demonstrated in a number of published EBS performance tests.

The following instructions walk through one way to set up RAID 0 across multiple EBS volumes. Note that 32-bit instances impose a limit on the size of a file system, but on 64-bit instances file systems can grow unreasonably large. This test was run with 40 EBS volumes of 1,000 GB each, for a total of 40,000 GB (40 TB) in the resulting file system.

Actual command line output showing the size of the RAID:

# df /vol
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             41942906368      1312 41942905056   1% /vol

# df -h /vol
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               40T  1.3M   40T   1% /vol

These commands can run in less than 10 minutes, and that could probably be reduced further by parallelizing the creation and attachment of the EBS volumes (one possible approach is sketched after the volume-creation loop below).

Note that the default limit is 20 EBS volumes per EC2 account. You can request an increase from Amazon if you need more.

Caution: 40 TB of EBS storage on EC2 will cost $4,000 per month plus usage charges.

Instructions

Start a 64-bit instance (say, Ubuntu 8.04 Hardy from https://alestic.com). Use your own KEYPAIR:

ec2-run-instances \
  --key KEYPAIR \
  --instance-type c1.xlarge \
  --availability-zone us-east-1a \
  ami-0772946e

Configurable parameters (set on both local host and on EC2 instance):

instanceid=i-XXXXXXXX
volumes=40
size=1000
mountpoint=/vol

On the local host (with EC2 API tools installed)…

Create and attach EBS volumes:

# Generate the candidate device names /dev/sdh through /dev/sdk15
# (64 possibilities) and keep the first $volumes of them
devices=$(perl -e 'for$i("h".."k"){for$j("",1..15){print"/dev/sd$i$j\n"}}'|
           head -$volumes)
devicearray=($devices)
volumeids=
i=1
while [ $i -le $volumes ]; do
  volumeid=$(ec2-create-volume -z us-east-1a --size $size | cut -f2)
  echo "$i: created  $volumeid"
  device=${devicearray[$(($i-1))]}
  ec2-attach-volume -d $device -i $instanceid $volumeid
  volumeids="$volumeids $volumeid"
  let i=i+1
done
echo "volumeids='$volumeids'"

On the EC2 instance (after setting parameters as above)…

Install software:

sudo apt-get update &&
sudo apt-get install -y mdadm xfsprogs

Set up the RAID 0 device:

# Same device name list as generated on the local host above
devices=$(perl -e 'for$i("h".."k"){for$j("",1..15){print"/dev/sd$i$j\n"}}'|
           head -$volumes)

yes | sudo mdadm \
  --create /dev/md0 \
  --level 0 \
  --metadata=1.1 \
  --chunk 256 \
  --raid-devices $volumes \
  $devices

echo DEVICE $devices       | sudo tee    /etc/mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
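
Optionally, confirm that the array was assembled with all of the devices before creating the file system:

cat /proc/mdstat
sudo mdadm --detail /dev/md0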

Create the file system (pick your preferred file system type):

sudo mkfs.xfs /dev/md0

Mount:

echo "/dev/md0 $mountpoint xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir $mountpoint
sudo mount $mountpoint

Check it out:

df -h $mountpoint

When you’re done with it and want to destroy the data and stop paying for storage, tear it down:

sudo umount $mountpoint
sudo mdadm --stop /dev/md0

Terminate the instance:

sudo shutdown -h now

On the local host (with EC2 API tools installed)…

Detach and delete volumes:

for volumeid in $volumeids; do
  ec2-detach-volume $volumeid
done

for volumeid in $volumeids; do
  ec2-delete-volume $volumeid
done

Credits

This article was originally posted on the EC2 Ubuntu group.

Thanks to M. David Peterson for the basic mdadm instructions.

[Update 2012-01-21: Added --chunk 256 based on community-recognized best practices.]