Enable Agent Authentication
This page describes how to configure authentication for the WarpStream Agent so it can be exposed safely over the internet.
The WarpStream Agent has built-in support for SASL/PLAIN and SASL/SCRAM-SHA-512 authentication, which makes it possible to securely expose the Agents over the internet when combined with TLS. This is useful if you need to access your WarpStream topics from outside your VPC, or integrate with cloud-hosted compute engines like Materialize, ClickHouse, MongoDB Stream Processing, or RisingWave.
Before proceeding, please keep in mind this very important point:
SASL authentication only protects the Apache Kafka protocol port (default 9092). The HTTP port (default 8080) in the Agent that exposes the distributed file cache and Prometheus metrics is not protected by SASL and should not be exposed to the public internet.
TLS termination in the Agent
The WarpStream Agents have built-in TLS/mTLS support for the Apache Kafka protocol port (default 9092).
The Agent should not be exposed directly to the internet via plain TCP.
First, deploy the Agent like you normally would following our "Configure the WarpStream Agent for Production" guide, but add the -kafkaTLS flag or set the environment variable WARPSTREAM_TLS_ENABLED=true.
The Agents require that both certificates and private keys be provided in PEM-encoded X.509 format. To pass a certificate and a private key to the Agent, use the -tlsServerCertFile and -tlsServerPrivateKeyFile flags to provide the file paths to the Agent certificate and private key, respectively. Alternatively, set the WARPSTREAM_TLS_SERVER_CERT_FILE and WARPSTREAM_TLS_SERVER_PRIVATE_KEY_FILE environment variables to those file paths.
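As a sketch, assuming the certificate and key live at hypothetical paths under /etc/warpstream/tls, the environment-variable form looks like:

```shell
# Hypothetical paths; point these at your PEM-encoded X.509 cert and key.
export WARPSTREAM_TLS_ENABLED=true
export WARPSTREAM_TLS_SERVER_CERT_FILE=/etc/warpstream/tls/server.crt
export WARPSTREAM_TLS_SERVER_PRIVATE_KEY_FILE=/etc/warpstream/tls/server.key
# ...then start the Agent as described in the production guide.
```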
The Agent also supports mTLS. To enable it, add the -requireMTLSAuthentication flag, or set WARPSTREAM_REQUIRE_MTLS_AUTHENTICATION=true. This forces the Agent to verify client certificates. By default, the Agent uses the Distinguished Name (DN) from the client TLS certificate as the principal for ACLs. A custom mapping rule can be provided via the -tlsPrincipalMappingRule flag to extract a name from the DN. For example, the rule CN=([^,]+) extracts the Common Name (CN) from the DN and uses it as the ACL principal.
Optionally, you can also provide the file path of a root certificate authority (CA) certificate file that the Agent will use to verify clients. Use the -tlsClientCACertFile flag, or the WARPSTREAM_TLS_CLIENT_CA_CERT_FILE environment variable.
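Putting the mTLS options together, a sketch using hypothetical file paths:

```shell
# Require clients to present certificates, and verify them against a CA.
export WARPSTREAM_REQUIRE_MTLS_AUTHENTICATION=true
export WARPSTREAM_TLS_CLIENT_CA_CERT_FILE=/etc/warpstream/tls/client-ca.crt  # hypothetical path
# Flag equivalents: -requireMTLSAuthentication -tlsClientCACertFile /etc/warpstream/tls/client-ca.crt
# Optionally extract the CN from the client certificate DN as the ACL principal:
#   -tlsPrincipalMappingRule 'CN=([^,]+)'
```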
TLS Termination and Load Balancing
TLS termination can also be done from a load balancer which sits in front of the agents. The load balancer that performs TLS termination for the Agent should only expose the Apache Kafka protocol port, as that is the only one that will be authenticated via SASL.
We will provide more detailed examples on how to configure TLS termination in various environments in the future.
However, one important thing to keep in mind is that by default the WarpStream Agent is designed to automatically integrate with the WarpStream discovery system under the assumption that they're running inside a VPC. If the Agent is deployed behind a load balancer that is performing TLS termination, then the configuration needs to be modified so that service discovery works properly.
Specifically, the WARPSTREAM_DISCOVERY_KAFKA_HOSTNAME_OVERRIDE environment variable should be set to the host / DNS name of the load balancer. For example, an AWS Network Load Balancer DNS name will look something like:
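For illustration, with a made-up NLB DNS name:

```shell
# The load-balancer hostname below is fabricated; use your NLB's actual DNS name.
export WARPSTREAM_DISCOVERY_KAFKA_HOSTNAME_OVERRIDE=my-nlb-abc123456789.elb.us-east-1.amazonaws.com
```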
or, if the Agent was deployed on fly.io, it would look something like:
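A sketch with a hypothetical fly.io app name:

```shell
# The app name below is a placeholder; substitute your fly.io app's hostname.
export WARPSTREAM_DISCOVERY_KAFKA_HOSTNAME_OVERRIDE=my-warpstream-agent.fly.dev
```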
If the load balancer is not exposing the Kafka protocol externally over the default port of 9092, then you will also need to set another environment variable, WARPSTREAM_DISCOVERY_KAFKA_PORT_OVERRIDE, indicating which port the load balancer is exposing the Kafka protocol over. For example, if it was port 16553, the environment variable would look like:
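Using the port from the example above:

```shell
export WARPSTREAM_DISCOVERY_KAFKA_PORT_OVERRIDE=16553
```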
For more details on how WarpStream and Kafka's service discovery mechanisms interact and why overriding these environment variables is necessary, see our "Service Discovery" reference documentation.
SASL Authentication
The WarpStream Agents support SASL/PLAIN and SASL/SCRAM-SHA-512.
First, deploy the Agent like you normally would following our "Configure the WarpStream Agent for Production" guide. However, you'll need to make one small addition: add the -requireAuthentication flag, or set the environment variable WARPSTREAM_REQUIRE_AUTHENTICATION=true.
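For example, using the environment-variable form:

```shell
export WARPSTREAM_REQUIRE_AUTHENTICATION=true
# Or equivalently, pass the -requireAuthentication flag when starting the Agent.
```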
Once authentication is enabled, the Agent will enforce that every Apache Kafka client that connects to it authenticates itself via SASL; otherwise, it will refuse the connection.
In order to connect an Apache Kafka client to the authenticated WarpStream Agent, you'll need to create a set of credentials. You can do that by navigating to the "Virtual Clusters" section of the WarpStream Console and then clicking "View Credentials" on the Virtual Cluster that you want to create a set of credentials for.
Once you're on the credentials view, you can create a new set of SASL credentials by clicking the "New Credentials" button.
Every set of WarpStream SASL credentials is scoped to a specific Virtual Cluster and Agent Pool, so you'll also have to select which Agent Pool you'd like to create the credentials for.
Once you're done creating the credentials, the admin console will show you the username and password one time. Store these values somewhere safe, as you'll never be able to view them again. WarpStream does not store them in plaintext, so we cannot retrieve them for you.
If you lose your credentials, you can create a new set in the admin console by following the same steps as above, up to a limit of 100 credentials.
Let's test the credentials we just created. Make sure that your Agent is enforcing authentication by trying to connect to it with invalid credentials first:
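The exact command depends on your Kafka client; as one sketch using the kcat CLI, request cluster metadata with deliberately bogus credentials (the bootstrap hostname below is a placeholder for your Agent or load balancer):

```shell
kcat -L \
  -b my-agent-hostname:9092 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username=not-a-real-user \
  -X sasl.password=not-a-real-password
# Use SASL_PLAINTEXT instead of SASL_SSL if TLS is not enabled
# (not recommended when the Agent is reachable over the internet).
```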
If the Agent is configured to require authentication, the connection should fail with a SASL authentication error (the exact message depends on your client).
Now let's try with the valid credentials we created previously:
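Again as a kcat sketch, with a placeholder hostname; substitute the username and password you saved from the console:

```shell
kcat -L \
  -b my-agent-hostname:9092 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X "sasl.username=$SASL_USERNAME" \
  -X "sasl.password=$SASL_PASSWORD"
```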
If you want to use SASL/SCRAM-SHA-512 instead, you can run:
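The same kcat sketch works; only the SASL mechanism changes:

```shell
kcat -L \
  -b my-agent-hostname:9092 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=SCRAM-SHA-512 \
  -X "sasl.username=$SASL_USERNAME" \
  -X "sasl.password=$SASL_PASSWORD"
```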
This time the connection should succeed, and the command should complete without an authentication error.