AWS Lambda Rust docker builder 🚀 🦀 🐳
This docker image extends the lambci provided builder docker image, a faithful reproduction of the actual AWS "provided" Lambda runtime environment, and installs rustup and the stable Rust toolchain.
This provides a build environment consistent with your target execution environment, for predictable results.
Tags for this docker image follow the naming convention `softprops/lambda-rust:{version}-rust-{rust-stable-version}`,
where `{rust-stable-version}` is a stable version of Rust.
You can find a list of available docker tags here
💡 If you don't find the version you're looking for, please open a new GitHub issue to publish one
You can also depend directly on softprops/lambda-rust:latest
for the most recently published version.
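For reproducible builds you will typically want to pin to a specific tag rather than `latest`. As a sketch (the pinned tag below is illustrative, not necessarily a published one; check the tag list for real values):

```sh
# pull a pinned builder image (tag shown is illustrative)
$ docker pull softprops/lambda-rust:{version}-rust-{rust-stable-version}

# or track the most recently published version
$ docker pull softprops/lambda-rust:latest
```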
The default docker entrypoint will build a packaged, release-optimized version of your Rust artifact under `target/lambda/release` to
isolate the Lambda-specific build artifacts from your host-local build artifacts.
⚠️ Note: you can switch from the `release`
profile to a custom profile like `dev`
by providing a `PROFILE`
environment variable set to the name of the desired profile, i.e. `-e PROFILE=dev`
in your `docker run` command
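For example, a dev-profile build might look like the following (a sketch of the note above; the cache mounts are optional):

```sh
# build with the dev profile instead of the default release profile
$ docker run --rm \
  -e PROFILE=dev \
  -v ${PWD}:/code \
  -v ${HOME}/.cargo/registry:/root/.cargo/registry \
  -v ${HOME}/.cargo/git:/root/.cargo/git \
  softprops/lambda-rust
```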
⚠️ Note: you can include debug symbols in optimized release build binaries by setting `DEBUGINFO`.
By default, debug symbols are stripped from the release binary and set aside in a separate `.debug` file.
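For example (the value `1` below is an assumption; the note above only says the variable must be set):

```sh
# keep debug symbols in the optimized release binary
$ docker run --rm \
  -e DEBUGINFO=1 \
  -v ${PWD}:/code \
  softprops/lambda-rust
```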
You will want to volume mount `/code`
to the directory containing your Cargo project.
You can pass additional flags to `cargo`,
the Rust build tool, by setting the `CARGO_FLAGS`
Docker environment variable
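For example, to enable a cargo feature at build time (the feature name below is a placeholder for one of your own):

```sh
# pass extra flags through to cargo build
$ docker run --rm \
  -e CARGO_FLAGS="--features my-feature" \
  -v ${PWD}:/code \
  softprops/lambda-rust
```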
A typical `docker run` might look like the following:

```sh
$ docker run --rm \
  -v ${PWD}:/code \
  -v ${HOME}/.cargo/registry:/root/.cargo/registry \
  -v ${HOME}/.cargo/git:/root/.cargo/git \
  softprops/lambda-rust
```
💡 The `-v` (volume mount) flags for
`/root/.cargo/{registry,git}`
are optional, but when supplied they provide a much faster turnaround when doing iterative development
If you are using Windows, the command above may need to be modified to include
a `BIN`
environment variable set to the name of the binary to be built and packaged:
```sh
$ docker run --rm \
  -e BIN={your-binary-name} \
  -v ${PWD}:/code \
  -v ${HOME}/.cargo/registry:/root/.cargo/registry \
  -v ${HOME}/.cargo/git:/root/.cargo/git \
  softprops/lambda-rust
```
For more custom codebases, the `-w` argument can be used to override the working directory. This can be especially useful when using path dependencies for local crates.
```sh
$ docker run --rm \
  -v ${PWD}/lambdas/mylambda:/code/lambdas/mylambda \
  -v ${PWD}/libs/mylib:/code/libs/mylib \
  -v ${HOME}/.cargo/registry:/root/.cargo/registry \
  -v ${HOME}/.cargo/git:/root/.cargo/git \
  -w /code/lambdas/mylambda \
  softprops/lambda-rust
```
If you want to customize certain parts of the build process, you can leverage hooks that this image provides. Hooks are just shell scripts that are invoked in a specific order, so you can customize the process as you wish. The following hooks exist:

- `install`: run before `cargo build` - useful for installing native dependencies on the Lambda environment
- `build`: run after `cargo build`, but before packaging the executable into a zip - useful when modifying the executable after compilation
- `package`: run after packaging the executable into a zip - useful for adding extra files into the zip file
The hooks' names are predefined and must be placed in a directory named `.lambda-rust`
in the project root.
You can take a look at an example here.
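A minimal `package` hook might look like the following sketch. It assumes the hook runs from the project root after the zip has been produced, and that `extra/config.toml` exists in your project; both the trigger conditions and the paths here are assumptions, so adjust to your setup.

```sh
#!/usr/bin/env bash
# .lambda-rust/package - hypothetical sketch of a package hook
# adds an extra file into every zip produced under target/lambda/release
set -euo pipefail

for archive in target/lambda/release/*.zip; do
  # -j junks the directory path so the file sits at the zip root
  zip -j "$archive" extra/config.toml
done
```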
Once you've built a Rust Lambda function artifact, the provided
runtime expects
deployments of that artifact to be named "bootstrap". The lambda-rust
docker image
builds a zip file, named after the binary, containing your binary renamed to "bootstrap" for you.
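You can confirm the renaming by listing the archive's contents:

```sh
# the archive should contain an entry named "bootstrap"
$ unzip -l target/lambda/release/{your-binary-name}.zip
```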
You can invoke this bootstrap executable with the lambci docker image for the provided
AWS Lambda runtime in a one-off container.
```sh
# start a one-off docker container replicating the "provided" lambda runtime
# awaiting an event to be provided via stdin
$ unzip -o \
    target/lambda/release/{your-binary-name}.zip \
    -d /tmp/lambda && \
  docker run \
    -i -e DOCKER_LAMBDA_USE_STDIN=1 \
    --rm \
    -v /tmp/lambda:/var/task:ro,delegated \
    lambci/lambda:provided

# provide an event payload via stdin (typically a json blob)
# Ctrl-D to yield control back to your function
```
You may find the one-off container less than ideal if you wish to trigger your lambda multiple times. For these cases, try using the "stay open" mode of execution.
```sh
# start a long running docker container replicating the "provided" lambda runtime
# listening on port 9001
$ unzip -o \
    target/lambda/release/{your-binary-name}.zip \
    -d /tmp/lambda && \
  docker run \
    --rm \
    -v /tmp/lambda:/var/task:ro,delegated \
    -e DOCKER_LAMBDA_STAY_OPEN=1 \
    -p 9001:9001 \
    lambci/lambda:provided
```
In a separate terminal, you can invoke your function with `curl`.
The `-d` flag is a means of providing your function's input.
```sh
$ curl -d '{}' \
  http://localhost:9001/2015-03-31/functions/myfunction/invocations
```
You can also use the `aws`
CLI to invoke your function locally. The `--payload`
flag is a means of providing your function's input.
```sh
$ aws lambda invoke \
  --endpoint http://localhost:9001 \
  --cli-binary-format raw-in-base64-out \
  --no-sign-request \
  --function-name myfunction \
  --payload '{}' out.json \
  && cat out.json \
  && rm -f out.json
```
A third-party cargo subcommand, `cargo-aws-lambda`, exists to compile your code into a zip file and deploy it. It depends only on Rust and Docker.
Setup
```sh
$ cargo install cargo-aws-lambda
```
To compile and deploy in your project directory
```sh
$ cargo aws-lambda {your aws function's full ARN} {your-binary-name}
```
To list all options
```sh
$ cargo aws-lambda --help
```
More instructions can be found here.
Doug Tangren (softprops) 2020