Introduction Link to heading

This post is a follow-up to the previous post on Lambda Functions with Go and CDK. In this post, we’ll go through deploying a Rust-based AWS Lambda function using the AWS Cloud Development Kit (CDK). We’ll focus on the developer workflow: how to get started, wire up your Rust Lambda, and use CDK to manage your infrastructure as code. If you’re already familiar with Lambda as a service, this guide will help you get productive with Rust and CDK quickly.

I want to share the documentation I found most useful while writing this post up front, so you can get started quickly. The main resources are the cargo-lambda guide, the @cdklabs/aws-lambda-rust construct's documentation, and the Rust runtime for AWS Lambda (lambda_runtime) together with the AWS SDK for Rust.

Why Rust? Link to heading

Rust is a modern, memory-safe, and high-performance language. For Lambda functions, Rust offers fast cold starts and low memory overhead, making it a great fit for event-driven workloads with high volume and low latency. With the growing ecosystem around cargo-lambda and the @cdklabs/aws-lambda-rust CDK construct, deploying Rust Lambdas is getting easier every day.

Project Structure Link to heading

We’ll use a monorepo layout, with the CDK app (TypeScript) and the Rust Lambda function side by side:

my-cdk-rust-lambda/
├── rust_lambda/              # Rust Lambda function
│   ├── src/
│   ├── Cargo.toml
│   └── ...
├── lib/
│   └── lambda-rust-stack.ts  # CDK stack definition
├── bin/
│   └── lambda-rust.ts        # CDK app entrypoint
├── package.json
├── cdk.json
└── ...

Setting Up the CDK Project Link to heading

Start by making sure you have the AWS CDK CLI installed. If you don’t have it yet, you can install it globally with:

npm install -g aws-cdk

You also need to install cargo-lambda, which we'll use to scaffold and build the Rust package for your Lambda function.

  1. Initialize a new CDK project:

    mkdir my-cdk-rust-lambda && cd my-cdk-rust-lambda
    cdk init app --language typescript
    npm install @cdklabs/aws-lambda-rust aws-cdk-lib constructs
    
  2. Add your Rust Lambda code:

    cargo lambda new rust_lambda
    # Write your Lambda logic in rust_lambda/src/main.rs
    

Defining the Stack Link to heading

Here’s a minimal CDK stack that deploys a Rust Lambda and an S3 bucket, wiring the bucket to trigger the Lambda on object creation:

import * as rust from '@cdklabs/aws-lambda-rust';
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3events from 'aws-cdk-lib/aws-s3-notifications';
import { Construct } from 'constructs';

export class LambdaRustStack extends cdk.Stack {
  private readonly rustHandler: rust.RustFunction;
  private readonly bucket: s3.Bucket;
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    this.bucket = new s3.Bucket(this, 'HelloRustBucket', {
      enforceSSL: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true
    });

    this.rustHandler = new rust.RustFunction(this, 'HelloRust', {
      binaryName: 'rust_lambda',
      entry: 'rust_lambda'
    });
    this.bucket.grantReadWrite(this.rustHandler);
    this.bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3events.LambdaDestination(this.rustHandler));
  }
}

Whenever an object is created in the S3 bucket, the Rust Lambda function is triggered, and its invocation event contains the standard S3 event structure. We'll implement a Lambda that processes the S3 event and, as an example, deletes the object after processing it.

Writing the Rust Lambda Link to heading

Our Rust Lambda can use the lambda_runtime and aws-sdk-s3 crates. Here’s a simplified example:

use aws_config::BehaviorVersion;
use aws_sdk_s3::Client as S3Client;
use lambda_runtime::{run, service_fn, tracing, Error};
mod event_handler;
use event_handler::function_handler;

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing::init_default_subscriber();
    let shared_config = aws_config::load_defaults(BehaviorVersion::v2025_01_17()).await;
    let s3_client = S3Client::new(&shared_config);
    run(service_fn(|event| function_handler(event, &s3_client))).await
}

Let’s break this down:

  • The lambda_runtime crate provides run, the main entry point for our Lambda function, and service_fn, which wraps a handler function that will be called with the event data. It also re-exports the tracing crate for logging; init_default_subscriber() installs a subscriber with sensible defaults for Lambda, emitting structured JSON logs when the function’s log format is configured as JSON.
  • The aws_config crate provides implementations of region and credential resolution, which are used throughout the AWS SDK for Rust. The aws_sdk_s3 crate provides the S3 client that we use to interact with S3. Similarly, if we wanted to use DynamoDB, we would use the aws_sdk_dynamodb crate.
  • Notice that we are using tokio, which is an asynchronous runtime for Rust. This allows the main() method to use async/await syntax, which is very useful for I/O-bound tasks like network requests.
  • Finally, some variables are initialized in the main() method, such as shared_config and s3_client. These variables are shared across Lambda invocations because they’re part of the init phase. This is where we want to put our expensive initialization code, as it avoids re-initializing the S3 client on every invocation.

The event_handler module contains the logic for processing S3 events and deleting objects. You can see its full implementation on GitHub, but it's not very complex; it just extracts the bucket name and object key from the S3 event and deletes the object using the S3 client.
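To give you an idea of what that module can look like, here is a minimal sketch of such a handler. It assumes the aws_lambda_events crate for the S3 event type; the function and variable names are illustrative, not the exact code from the repository:

```rust
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client as S3Client;
use lambda_runtime::{tracing, Error, LambdaEvent};

/// Delete every object referenced in the incoming S3 event.
pub async fn function_handler(
    event: LambdaEvent<S3Event>,
    s3_client: &S3Client,
) -> Result<(), Error> {
    for record in event.payload.records {
        // Bucket name and object key are optional in the deserialized
        // event, so skip records that are missing either.
        let (Some(bucket), Some(key)) = (record.s3.bucket.name, record.s3.object.key) else {
            continue;
        };
        tracing::info!(bucket = %bucket, key = %key, "deleting object");
        s3_client
            .delete_object()
            .bucket(&bucket)
            .key(&key)
            .send()
            .await?;
    }
    Ok(())
}
```

Note that object keys in S3 events are URL-encoded, so a production handler would decode them before calling the S3 API.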

Building and Deploying Link to heading

If you just want to test the Rust code, you can run

cargo test

In Rust, the convention is to put the unit tests in the same file as the code, inside a #[cfg(test)] module. This means that you can run cargo test to run all the tests in your project, including the ones in the event_handler.rs module.
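As an illustration of this convention, here is a small self-contained example; the s3_uri helper is hypothetical, purely to show the test layout:

```rust
/// Build an s3:// URI from a bucket name and object key.
/// (A hypothetical helper, just to illustrate the test layout.)
pub fn s3_uri(bucket: &str, key: &str) -> String {
    format!("s3://{bucket}/{key}")
}

// Unit tests live next to the code they exercise, in a module that is
// only compiled when running `cargo test`.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn builds_expected_uri() {
        assert_eq!(
            s3_uri("my-bucket", "path/to/file.txt"),
            "s3://my-bucket/path/to/file.txt"
        );
    }
}
```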

Local Lambda Testing Link to heading

You can test your Lambda locally using cargo-lambda:

cargo lambda watch
# In another terminal:
cargo lambda invoke -F s3-example.json

Keep in mind that since the Lambda is triggered by S3 events, you need to provide a valid S3 event payload when invoking it locally; here, s3-example.json is a saved sample event. Alternatively, the --data-example s3-put flag of cargo lambda invoke fetches a sample S3 PUT event for you.

When we are happy with our Rust code, we can build the Lambda binary and deploy it using CDK:

cdk deploy

The @cdklabs/aws-lambda-rust CDK construct handles packaging the Rust binary (using a Docker image with the Rust toolchain) and deploying the stack. The cdk deploy command also creates the S3 bucket and wires it up to the Lambda function. Now you can upload files to the bucket and the Lambda function will be triggered automatically; check the logs in CloudWatch to see it processing the S3 events and deleting the objects after processing them.

Architecture Link to heading

Before we finish, I want to talk a little about the architecture of our CDK stack — specifically about using S3 events directly to trigger the Lambda function.

This is fine for a simple use case like ours, where we just want to process S3 events and delete the objects. The main drawback of this approach, however, is that any event the Lambda function fails to process is lost. That's a serious issue, because in production, everything breaks all the time.

A better approach is to use an SQS queue as an intermediary between the S3 bucket and the Lambda function. This way, we can gracefully handle two main failure modes:

  1. Events the Lambda function cannot process (e.g., malformed events, or events the function doesn't know how to handle). We can report batch item failures back to SQS, so that even though the invocation itself succeeds, the failed messages are returned to the queue for reprocessing later. If a message is redelivered more than a configured number of times, SQS moves it to a dead-letter queue (DLQ) for further investigation.
  2. Events that go unprocessed due to a transient error (e.g., network issues, or the Lambda function being throttled). In this case, SQS automatically retries the event, because a message is not deleted from the queue until the Lambda function processes it successfully. This lets us handle transient errors gracefully without losing any events.

This retry mechanism is built into the SQS-to-Lambda event source integration, so you don't have to implement it yourself. You can also use the AWS SDK for Rust to send messages to the SQS queue from your Rust Lambda function, if you need to.
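To make the first failure mode concrete, here is a sketch of the partial-batch-failure pattern. A real handler would return an SqsBatchResponse from the aws_lambda_events crate; this stdlib-only version models the same idea with plain strings, and process_message is a hypothetical stand-in for your business logic:

```rust
/// Outcome of processing one SQS message: Ok(()) on success,
/// Err(reason) when the message could not be handled.
/// (Hypothetical stand-in: treats empty bodies as unprocessable.)
fn process_message(body: &str) -> Result<(), String> {
    if body.is_empty() {
        Err("empty message body".to_string())
    } else {
        Ok(())
    }
}

/// Collect the IDs of messages that failed, so they can be reported back
/// to SQS as batch item failures (the equivalent of populating
/// SqsBatchResponse::batch_item_failures). Messages whose IDs are NOT
/// returned count as successfully processed and are deleted by SQS.
fn batch_item_failures(messages: &[(String, String)]) -> Vec<String> {
    messages
        .iter()
        .filter(|(_, body)| process_message(body).is_err())
        .map(|(id, _)| id.clone())
        .collect()
}
```

With the real integration, you enable ReportBatchItemFailures on the event source mapping; SQS then redelivers only the reported messages instead of the whole batch.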

Conclusion Link to heading

With AWS CDK and the Rust Lambda ecosystem, you can quickly get started building high-performance serverless applications. The developer workflow is smooth: write your Lambda in Rust, define your infrastructure in TypeScript, and let CDK handle the rest!

🦀 Happy building 🦀