Authentication (TLS, mTLS, SASL)
This page describes how to configure authentication for the WarpStream Agent so it can be exposed safely over the internet.
The WarpStream Agent offers built-in support for secure authentication and encrypted communication:
SASL Authentication: Protects the Apache Kafka protocol port (default 9092) using SASL/PLAIN or SASL/SCRAM-SHA-512.
TLS Encryption: Secures communication over the Apache Kafka protocol port using TLS/mTLS.
This is useful if you need to access your WarpStream topics from outside your VPC, or integrate with cloud-hosted compute engines like Materialize, ClickHouse, MongoDB Stream Processing, or RisingWave.
Important Note:
SASL authentication only protects the Apache Kafka protocol port (default 9092). The HTTP port (default 8080) in the Agent that exposes the distributed file cache and Prometheus metrics is not protected by SASL and should not be exposed to the public internet.
TLS termination
You can configure TLS termination in two ways: in the Agent itself, or in a load balancer in front of the Agents.
Option 1: In the WarpStream Agent
This method leverages the built-in TLS/mTLS support of the WarpStream Agent. Here's a detailed breakdown of the steps involved:
Enable TLS:
During Agent deployment, include the -kafkaTLS flag or set the environment variable WARPSTREAM_TLS_ENABLED=true. This activates TLS encryption for the Apache Kafka protocol port (default 9092).
Provide Certificates:
Obtain a valid TLS certificate (public key) and the corresponding private key. These should be in PEM-encoded X.509 format.
Use the following flags or environment variables to provide the file paths to the Agent:
-tlsServerCertFile or WARPSTREAM_TLS_SERVER_CERT_FILE (for the certificate)
-tlsServerPrivateKeyFile or WARPSTREAM_TLS_SERVER_PRIVATE_KEY_FILE (for the private key)
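Putting the two steps above together, a minimal TLS configuration via environment variables might look like the following sketch; the file paths are illustrative and should be replaced with your own:

```shell
# Illustrative paths — substitute your own PEM-encoded certificate and key.
export WARPSTREAM_TLS_ENABLED=true
export WARPSTREAM_TLS_SERVER_CERT_FILE=/etc/warpstream/tls/server.crt
export WARPSTREAM_TLS_SERVER_PRIVATE_KEY_FILE=/etc/warpstream/tls/server.key
```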
Optional: Enable mTLS (Mutual TLS):
If you want to enforce client authentication, add the -requireMTLSAuthentication flag or set WARPSTREAM_REQUIRE_MTLS_AUTHENTICATION=true. This requires clients to present their own valid TLS certificates during connection establishment.
By default, the Agent will use the Distinguished Name (DN) from the client TLS certificate as the principal for ACLs.
A custom TLS principal mapping rule (a regular expression) can be provided using the -tlsPrincipalMappingRule flag to extract a name from the DN. For example, the rule CN=([^,]+) will extract the Common Name (CN) from the DN and use that as the ACL principal. Given a certificate whose DN contains CN=test_principal, starting the Agent with -tlsPrincipalMappingRule set to CN=([^,]+) means the name test_principal will be used as the mTLS ACL principal.
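The mapping rule is just a regular expression with a capture group. As a rough sketch (outside the Agent), the same extraction can be reproduced with standard shell tools; the DN below is a made-up example:

```shell
# A hypothetical client certificate DN.
DN="CN=test_principal,OU=engineering,O=ExampleCorp"

# Emulate the rule CN=([^,]+): capture everything after "CN=" up to the next comma.
PRINCIPAL=$(printf '%s' "$DN" | sed -n 's/.*CN=\([^,]*\).*/\1/p')

echo "$PRINCIPAL"   # prints "test_principal"
```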
Optional: Provide a client root CA cert file (mTLS):
If you've enabled mTLS, you can provide a root Certificate Authority (CA) certificate file.
Use the -tlsClientCACertFile flag or the WARPSTREAM_TLS_CLIENT_CA_CERT_FILE environment variable.
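If mTLS is enforced, the corresponding additions in environment-variable form are sketched below; the CA file path is illustrative:

```shell
# Require clients to present certificates, and trust this CA when verifying them.
# The path is a placeholder — substitute your own root CA certificate file.
export WARPSTREAM_REQUIRE_MTLS_AUTHENTICATION=true
export WARPSTREAM_TLS_CLIENT_CA_CERT_FILE=/etc/warpstream/tls/client-ca.crt
```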
Option 2: On a Load Balancer
This method involves placing a load balancer in front of your WarpStream Agents to handle TLS termination. You should only expose the Apache Kafka protocol port, as that is the only one that will be authenticated via SASL. Here's how to configure it:
Configure Load Balancer:
Set up your load balancer (e.g., AWS Network Load Balancer) to listen for incoming traffic on the Apache Kafka protocol port (default 9092).
Ensure that the load balancer terminates TLS connections and forwards decrypted traffic to the WarpStream Agents.
Expose Kafka Port Only:
Configure the load balancer to expose only the Apache Kafka protocol port (9092) to the public internet.
The HTTP port (8080) used for metrics and the distributed file cache should not be publicly accessible.
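As a sketch of what this might look like with the AWS CLI — the ARNs below are placeholders for your own NLB, ACM certificate, and target group — a TLS listener on port 9092 forwarding to a target group of Agents could be created like this:

```shell
# All ARNs are placeholders — substitute your own resources.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123 \
  --protocol TLS --port 9092 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/example \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/warpstream-agents/def456
```

Note that no listener is created for port 8080, which keeps the HTTP port off the public internet.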
Update Service Discovery:
By default, the WarpStream Agent automatically integrates with the WarpStream discovery system, assuming it's running inside a VPC. If the Agent is deployed behind a load balancer that performs TLS termination, the configuration must be modified so that service discovery works properly.
Set the environment variable WARPSTREAM_DISCOVERY_KAFKA_HOSTNAME_OVERRIDE to the host/DNS name of the load balancer. If the load balancer uses a non-default port for the Kafka protocol, set WARPSTREAM_DISCOVERY_KAFKA_PORT_OVERRIDE to the correct port.
Example: AWS Network Load Balancer (NLB)
If you're using an AWS NLB, the environment variable would look like this:
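For example, assuming a hypothetical NLB DNS name (yours will differ):

```shell
# Placeholder NLB DNS name — substitute your own load balancer's address.
export WARPSTREAM_DISCOVERY_KAFKA_HOSTNAME_OVERRIDE=my-nlb-1234567890.elb.us-east-1.amazonaws.com
```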
Example: Fly.io
If you're using Fly.io, the environment variable would look like this:
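For example, assuming a hypothetical Fly.io app name:

```shell
# Placeholder app name — substitute your own Fly.io app's hostname.
export WARPSTREAM_DISCOVERY_KAFKA_HOSTNAME_OVERRIDE=my-warpstream-app.fly.dev
```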
For more details on how WarpStream and Kafka's service discovery mechanisms interact and why overriding these environment variables is necessary, see our "Service Discovery" reference documentation.
SASL Authentication
The WarpStream Agents support SASL/PLAIN and SASL/SCRAM-SHA-512.
First, deploy the Agent like you normally would, following our "Configure the WarpStream Agent for Production" guide. However, you'll need to make one small addition: add the -requireSASLAuthentication flag, or set the environment variable WARPSTREAM_REQUIRE_SASL_AUTHENTICATION=true.
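In environment-variable form, alongside whatever other configuration your deployment already uses:

```shell
export WARPSTREAM_REQUIRE_SASL_AUTHENTICATION=true
```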
Once authentication is enabled, the Agents will require every Apache Kafka client that connects to them to authenticate via SASL; otherwise, the connection is refused.
In order to connect an Apache Kafka client to the authenticated WarpStream Agent, you'll need to create a set of credentials. You can do that by navigating to the "Virtual Clusters" section of the WarpStream Console and then clicking "View Credentials" on the Virtual Cluster that you want to create a set of credentials for.
Once you're on the credentials view, you can create a new set of SASL credentials by clicking the "New Credentials" button.
Every set of WarpStream SASL credentials is scoped to a specific Virtual Cluster and Agent Pool, so you'll also have to select which Agent Pool you'd like to create the credentials for.
Once you're done creating the credentials, the admin console will show you the username and password one time. Store these values somewhere safe, as you'll never be able to view them again. WarpStream does not store them in plaintext, so we cannot retrieve them for you.
If you lose your credentials, you can create a new set in the admin console by following the same steps as above, up to a limit of 100 credentials.
Let's test the credentials we just created. Make sure that your Agent is enforcing authentication by trying to connect to it with invalid credentials first:
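One way to run this test, assuming the stock Apache Kafka CLI tools and a hypothetical bootstrap address — the username and password below are deliberately bogus:

```shell
# Hypothetical bootstrap address; the credentials are intentionally invalid.
cat > bad-client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="not-a-real-user" password="not-a-real-password";
EOF

kafka-topics.sh --bootstrap-server my-agents.example.com:9092 \
  --command-config bad-client.properties --list
```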
If the Agent is configured to require authentication, this should result in an error message like:
Now let's try with the valid credentials we created previously:
If you want to use SASL/SCRAM-SHA-512 instead, you can run:
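A SCRAM client configuration differs only in the mechanism and login module. A sketch, again assuming the stock Apache Kafka CLI tools, a hypothetical bootstrap address, and placeholder credentials:

```shell
# Substitute the username and password you created in the WarpStream Console.
cat > scram-client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<username>" password="<password>";
EOF

kafka-topics.sh --bootstrap-server my-agents.example.com:9092 \
  --command-config scram-client.properties --list
```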
This time you should see a successful message like the following: