AWS Lambda Triggers

This page explains how to use WarpStream with AWS's self-managed Apache Kafka trigger to ingest data from a WarpStream cluster and process the records as events in Lambda.

First, you'll need a deployed WarpStream cluster. You can follow our instructions on how to deploy the WarpStream Agents in production, or use our Fly.io or Railway templates.

If the WarpStream Agents are deployed outside of your AWS cloud account (using Fly.io or Railway, for example), you'll also want to familiarize yourself with our instructions on configuring authentication for the WarpStream Agents.

Once the cluster is up and running, navigate to the WarpStream console and click the "credentials" button for your virtual cluster.

Next, click the "Create Credentials" button to create a new set of credentials.

Pick a name for your credentials, then submit. The next screen will present you with your username and password. Save those values temporarily, because we're going to store them in AWS Secrets Manager next.

Go to AWS Secrets Manager in the AWS console and create a new secret containing the username and password you just saved.
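If you prefer the CLI, the following is a minimal sketch (the secret name is a placeholder; Lambda's BASIC_AUTH integration expects a JSON secret with keys named "username" and "password"):

# Store the WarpStream credentials as a JSON secret. The name is arbitrary,
# but the "username" and "password" keys are required for BASIC_AUTH.
aws secretsmanager create-secret \
  --name warpstream/lambda-trigger-credentials \
  --secret-string '{"username":"ccun_XXXXXXXXX","password":"ccp_XXXXXXXXXX"}'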

Once the secret is created, go to AWS Lambda in the AWS console and create a new function. This walkthrough assumes the Node.js runtime, since the handler below is JavaScript.

Edit the code of your Lambda function to print out the event.

export const handler = async (event) => {
  // Log the raw event so its structure is visible in CloudWatch.
  console.log("my event", JSON.stringify(event));
  const response = {
    statusCode: 200,
    body: JSON.stringify(event),
  };
  return response;
};
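Note that for self-managed Apache Kafka triggers, record keys and values arrive base64-encoded inside the event's records map, which is keyed by "topic-partition". As a minimal sketch (assuming the event shape AWS documents for self-managed Kafka sources), a handler that decodes the values could look like this:

export const handler = async (event) => {
  // event.records maps "topic-partition" strings to arrays of records.
  for (const [topicPartition, records] of Object.entries(event.records)) {
    for (const record of records) {
      // Values (and keys, when present) are base64-encoded.
      const value = Buffer.from(record.value, "base64").toString("utf8");
      console.log(topicPartition, record.offset, value);
    }
  }
  return { statusCode: 200 };
};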

Then click "Deploy".

Next, navigate to the "Configuration" tab and then select "Permissions".

From there, click on the Lambda's role.

Click "Add permissions" then "Attach policies"

Search for SecretsManagerReadWrite and click "Add permissions".

This managed policy gives the Lambda the ability to read (and write) all secrets, so you may want to scope the permission down to just the secret we created above, as in the example policy below.
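For example, a least-privilege policy scoped to that secret might look like the following (the Resource ARN is a placeholder; note that Secrets Manager appends a random suffix to secret ARNs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:warpstream/lambda-trigger-credentials-XXXXXX"
    }
  ]
}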

The permission should be added successfully.

Once that's done, navigate back to the Lambda and click "Add trigger".

Then select the Kafka source and fill in the bootstrap host, topic name, and authentication configuration. Make sure you use BASIC_AUTH as the authentication mechanism, and select the secret we created in the previous steps.

Finally, click "Add".
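If you'd rather script this step, an equivalent event source mapping can be created with the AWS CLI. This is a sketch with placeholder values for the function name, bootstrap host, and secret ARN:

# Create the self-managed Kafka event source mapping. BASIC_AUTH points
# Lambda at the Secrets Manager secret holding the WarpStream credentials.
aws lambda create-event-source-mapping \
  --function-name my-warpstream-consumer \
  --topics test \
  --self-managed-event-source '{"Endpoints":{"KAFKA_BOOTSTRAP_SERVERS":["matano-test.fly.dev:9092"]}}' \
  --source-access-configurations '[{"Type":"BASIC_AUTH","URI":"arn:aws:secretsmanager:us-east-1:123456789012:secret:warpstream/lambda-trigger-credentials-XXXXXX"}]'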

Now that everything is set up, you can produce data to the topic. In this example, we'll use the WarpStream Agent binary (which includes a CLI tool) to produce data.

warpstream kcmd -bootstrap-host matano-test.fly.dev -tls -username ccun_XXXXXXXXX -password ccp_XXXXXXXXXX -type produce -topic test -records hello,world

After producing the data, you should be able to see it in the Lambda's CloudWatch logs.

You should also now be able to monitor the state of the AWS Lambda's consumer group in the WarpStream console.

