Deploy the Agents

Don't forget to review our documentation on how to configure your Kafka client for WarpStream, as well as our instructions on tuning for performance once you're done. A few small changes in client configuration can result in 10-20x higher throughput when using WarpStream, and proper client configuration is required to leverage WarpStream's zone-aware discovery system.

Required Arguments

The WarpStream Agent is completely stateless and thus can be deployed however you prefer to deploy stateless containers. For example, you could use AWS ECS or a Kubernetes Deployment. The WarpStream Docker containers can be found in the "Install the WarpStream Agent" reference.
The Agent has three required arguments that must be passed as command line flags:
  1. bucketURL
  2. apiKey
  3. defaultVirtualClusterID
For example:
docker run \
  $WARPSTREAM_AGENT_IMAGE \
  agent \
  -bucketURL mem://mem_bucket \
  -apiKey $YOUR_API_KEY \
  -defaultVirtualClusterID $YOUR_VIRTUAL_CLUSTER_ID
(where $WARPSTREAM_AGENT_IMAGE stands in for the image listed in the "Install the WarpStream Agent" reference)
The values of apiKey and defaultVirtualClusterID can both be obtained from the WarpStream Admin Console.
Note that the entrypoint for the WarpStream Docker image is a multi-command binary. For production usage, the subcommand you want to run is simply agent, as shown above.
Also note that if you're configuring your Agents to use a non-default Virtual Cluster (i.e., one you created yourself), you'll also need to set the agentPoolName flag. See the Agent Configuration and Agent Pools and Virtual Clusters reference documentation for more details.
Depending on the tool you're using to deploy/run containers, it can sometimes be cumbersome to provide additional arguments beyond the agent subcommand.
In that case, all of the required arguments can be passed as environment variables instead: WARPSTREAM_BUCKET_URL, WARPSTREAM_API_KEY, WARPSTREAM_DEFAULT_VIRTUAL_CLUSTER_ID, and WARPSTREAM_AGENT_POOL_NAME.
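For example, the docker invocation shown earlier could be expressed entirely with environment variables. This is a sketch: $WARPSTREAM_AGENT_IMAGE is a placeholder for the image listed in the "Install the WarpStream Agent" reference.

```shell
# Same deployment as above, configured via environment variables instead of
# command line flags. $WARPSTREAM_AGENT_IMAGE is a placeholder for the image
# from the "Install the WarpStream Agent" reference.
docker run \
  -e WARPSTREAM_BUCKET_URL=mem://mem_bucket \
  -e WARPSTREAM_API_KEY=$YOUR_API_KEY \
  -e WARPSTREAM_DEFAULT_VIRTUAL_CLUSTER_ID=$YOUR_VIRTUAL_CLUSTER_ID \
  $WARPSTREAM_AGENT_IMAGE \
  agent
```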

Object Storage

The WarpStream Agent manages all data in the object storage warpstream directory. It is extremely important that the Agent alone manages this directory: never delete files from the warpstream directory manually. Manually deleting files in the warpstream directory will effectively "brick" a Virtual Cluster and require that it be recreated from scratch.
bucketURL is the URL of the object storage bucket that the WarpStream Agent should write to. See our dedicated reference page on how to construct a proper URL for the specific object store implementation that you're using.
We highly recommend running the WarpStream Agent with a dedicated bucket for isolation. However, the Agent only reads and writes data under the warpstream prefix, so in theory it can share an existing bucket with other workloads.
The WarpStream bucket should not have a configured object retention policy. WarpStream will manage the lifecycle of the objects, including deleting objects that have been compacted or have expired due to retention. If you must configure a retention policy on your bucket, make sure it is significantly longer than the longest retention of any topic/stream in any of your Virtual Clusters to avoid data loss.
We recommend configuring a lifecycle policy for cleaning up aborted multi-part uploads. This will prevent failed file uploads from the WarpStream Agent from accumulating in the bucket forever and increasing your storage costs. Below is a sample Terraform configuration for a WarpStream S3 storage bucket:
resource "aws_s3_bucket" "warpstream_bucket" {
  bucket = "my-warpstream-bucket-123"

  tags = {
    Name        = "my-warpstream-bucket-123"
    Environment = "staging"
  }
}

resource "aws_s3_bucket_metric" "warpstream_bucket_metrics" {
  bucket = aws_s3_bucket.warpstream_bucket.id
  name   = "EntireBucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "warpstream_bucket_lifecycle" {
  bucket = aws_s3_bucket.warpstream_bucket.id

  # Automatically cancel all multi-part uploads after 7d so we don't accumulate
  # an infinite number of partial uploads.
  rule {
    id     = "7d multi-part"
    status = "Enabled"

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }

  # No other lifecycle rules. The WarpStream Agent will automatically clean up
  # and delete expired files.
}

Permissions and Ports

The WarpStream Agent needs permission to perform Put/Get/List/Delete operations against the configured object storage bucket as described in the previous section. If you need to switch object storage buckets, make sure that the Agent has permission to perform operations on both buckets for at least as long as the maximum retention period of any topic/stream. Once you've ensured the Agent can communicate with both buckets, it is safe to re-deploy the Agent with a different command-line argument for bucketURL.
Below is an example Terraform configuration for an AWS IAM policy document that grants WarpStream the appropriate permissions on a dedicated S3 bucket:
data "aws_iam_policy_document" "warpstream_s3_policy_document" {
  statement {
    sid    = "AllowS3"
    effect = "Allow"

    # The Put/Get/List/Delete operations described above.
    actions = [
      "s3:PutObject",
      "s3:GetObject",
      "s3:DeleteObject",
      "s3:ListBucket",
    ]

    resources = [
      aws_s3_bucket.warpstream_bucket.arn,
      "${aws_s3_bucket.warpstream_bucket.arn}/*",
    ]
  }
}
In addition to object storage access, the WarpStream Agent needs permission to communicate with WarpStream's control plane in order to write/read Virtual Cluster metadata. Raw data flowing through your WarpStream cluster will never leave your cloud account; only the metadata required to order batches of data and perform remote consensus does.
Finally, the WarpStream Agent requires 1-3 ports to be exposed. For maximum compatibility, we recommend simply ensuring that the WarpStream Agent can listen on its default ports, 9092 and 8080. The sections below describe how each port is used and how to override them if necessary.
Kafka Port
Default: 9092
Override: -kafkaPort $PORT
Disable: -enableKafka false
This is the port that exposes the Kafka TCP protocol to Kafka clients. Only disable it if you don't intend to use the Kafka protocol at all.

Kinesis Port
Default: 8080
Override: -httpPort $PORT
Disable: -enableKinesis false
This is the port that exposes the Kinesis HTTP protocol to Kinesis clients. Only disable it if you don't intend to use the Kinesis protocol at all. However, even if you disable it, you'll still need to open port 8080 (or override the value of -httpPort) so that the WarpStream Agents running in each availability zone can communicate with each other and form a distributed file cache.

HTTP Port
Default: 8080
Override: -httpPort $PORT
By default, WarpStream Agents communicate amongst themselves on the same port that is used for the Kinesis protocol. However, even if the Kinesis protocol is explicitly disabled, the WarpStream Agents will still run a server on this port so they can form a distributed file cache with each other in each availability zone and expose Prometheus metrics.
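As an illustration, here is a hypothetical invocation that moves both the Kafka listener and the HTTP/Kinesis server off of their default ports. The port numbers and $WARPSTREAM_AGENT_IMAGE are placeholders, not recommendations.

```shell
# Hypothetical example: expose the Kafka protocol on 9093 and the HTTP/Kinesis
# server on 8081 instead of the defaults (9092 and 8080).
# $WARPSTREAM_AGENT_IMAGE is a placeholder for the image from the
# "Install the WarpStream Agent" reference.
docker run -p 9093:9093 -p 8081:8081 \
  $WARPSTREAM_AGENT_IMAGE \
  agent \
  -bucketURL $YOUR_BUCKET_URL \
  -apiKey $YOUR_API_KEY \
  -defaultVirtualClusterID $YOUR_VIRTUAL_CLUSTER_ID \
  -kafkaPort 9093 \
  -httpPort 8081
```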

Service discovery

The advertiseHostnameStrategy flag allows you to choose how the Agent advertises itself in WarpStream service discovery (more details here). The default, auto-ip4, is a good choice for most production deployments.
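A minimal sketch of setting the strategy explicitly (the other flag values are placeholders):

```shell
# Explicitly pin the default service discovery strategy. The bucket, key, and
# cluster ID values are placeholders.
agent \
  -bucketURL $YOUR_BUCKET_URL \
  -apiKey $YOUR_API_KEY \
  -defaultVirtualClusterID $YOUR_VIRTUAL_CLUSTER_ID \
  -advertiseHostnameStrategy auto-ip4
```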


GOMAXPROCS

The WarpStream Agent uses heuristics to automatically configure itself based on the resources available. Most importantly, it adjusts its concurrency and cache sizes based on the number of available CPU cores.
The Agent uses standard operating system APIs to determine how many cores are available, and it prints this value when starting:
2023/08/31 09:21:22 maxprocs: Leaving GOMAXPROCS=12: CPU quota undefined
This number is usually correct, but depending on how the Agent is deployed, it may not be. For example, the Agent may detect the wrong value when running in AWS ECS.
In general, we recommend that you manually set the GOMAXPROCS environment variable to the number of cores that you've made available to the Agent in your environment. For example, if you've allocated 3 cores to the Agent's container, then we recommend adding GOMAXPROCS=3 as an environment variable.
The value of GOMAXPROCS must be a whole number, not a fraction. We also recommend always assigning the Agent whole-number CPU quotas so that GOMAXPROCS matches the number of whole cores available to the Agent. Fractional CPU quotas can result in throttling and increased latency because the two values won't match.
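For example, with Docker the CPU quota and GOMAXPROCS can be kept in sync like this. This is a sketch: $WARPSTREAM_AGENT_IMAGE and the flag values are placeholders.

```shell
# Allocate 3 whole cores to the container and tell the Go runtime about it so
# that GOMAXPROCS matches the CPU quota exactly.
# $WARPSTREAM_AGENT_IMAGE is a placeholder for the image from the
# "Install the WarpStream Agent" reference.
docker run \
  --cpus=3 \
  -e GOMAXPROCS=3 \
  $WARPSTREAM_AGENT_IMAGE \
  agent \
  -bucketURL $YOUR_BUCKET_URL \
  -apiKey $YOUR_API_KEY \
  -defaultVirtualClusterID $YOUR_VIRTUAL_CLUSTER_ID
```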

Graceful Shutdown

The WarpStream Agents perform a graceful shutdown routine that strives to minimize disruption to the cluster. By default, this graceful shutdown process takes 1 minute to complete. However, many container orchestration frameworks will not wait 1 minute for a container to shut down gracefully. For example, in Kubernetes the default graceful termination window is 30s. We recommend increasing this value to 5 minutes. In Kubernetes, the configuration value for this is called terminationGracePeriodSeconds.
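In Kubernetes, that setting could be applied to an existing Deployment with a patch like the one below. The Deployment name "warpstream-agent" is hypothetical; substitute your own.

```shell
# Hypothetical example: give a Deployment named "warpstream-agent" a 5 minute
# (300 second) graceful termination window via a JSON merge patch.
kubectl patch deployment warpstream-agent \
  --type merge \
  -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":300}}}}'
```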
Apache, Apache Kafka, Kafka, and associated open source project names are trademarks of the Apache Software Foundation. Kinesis is a trademark of Amazon Web Services.