Multi-Region Clusters

By default, the WarpStream control plane is a single-region service. Since July 2025, we have also offered a control plane option that is backed by multiple regions at the same time. Paired with a multi-region data plane setup, this allows your workload to sustain the loss of a full cloud provider region while keeping operations running with minimal disruption and no data loss, achieving a Recovery Point Objective (RPO) of zero.

See our blog post about multi-region Kafka deployments for our detailed recommendations on streaming data across regions and making your workloads resilient to region-wide cloud provider outages.

How it works

In multi-region mode, your clusters are backed by two or three control plane regions rather than one. You can optionally also back your data plane with multiple buckets in different regions at once, but these are two separate choices. This diagram shows what a full WarpStream deployment with multi-region support on both the data plane (the agents) and the control plane would look like:

Fully multi-region architecture

For details on the internals of multi-region control planes, see the related blog post. The main idea is that the control plane replicates your metadata across regions, so that if one fails there is always a copy remaining. The agents that form your data plane talk to one of these regional control planes at any given time, falling back to another region if the current one is degraded. Your agents only ever talk to a single region at a time to avoid write conflicts on the metadata storage, which would impact throughput and latency.

If you choose to also spread your data plane across multiple regions, your agents will write all of your actual data to a quorum of object storage buckets rather than a single bucket to ensure that losing one region's worth of object storage doesn't cause data loss for the cluster.

Multi-region control plane

Creating a multi-region cluster

To create a multi-region control plane, check the "Enable multi-region support" checkbox in the cluster creation dialog and choose one of the available multi-region configurations.

These configurations are spread across different sets of regions, as detailed in this table:

| Configuration | Provider | Region 1 | Region 2 | Region 3 (if applicable) |
| --- | --- | --- | --- | --- |
| multiregion_us1 | AWS | us-east-1 | us-west-2 | N/A |

Selecting one of them will tell the control plane to start storing the metadata for your cluster in the corresponding multi-region storage.

This will also give you access to the "Multi-Region" tab on the cluster's detail page, where you can control multi-region-specific settings.

Setting up the agents for a multi-region control plane

To get the agents to talk to a multi-region cluster, it's just a matter of setting the right agent flag (or environment variable). Agents use the -multiregion <region_1>,<region_2>,<region_3> flag to know which regions form part of their multi-region control plane. The regions can be listed in any order, but they must be the exact set of regions that appears in the table above. For example, agents talking to a cluster with the multiregion_us1 configuration should be deployed with the -multiregion flag or WARPSTREAM_MULTIREGION environment variable set to "us-east-1,us-west-2".
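Concretely, a minimal sketch of such an agent startup might look like the following (the warpstream agent invocation and the single-bucket -bucketURL value are illustrative assumptions; any other flags your deployment already uses are unchanged):

# Sketch: agent startup for a multiregion_us1 control plane.
# Region order doesn't matter, but the set must match the table above.
warpstream agent \
  -multiregion "us-east-1,us-west-2" \
  -bucketURL "s3://my-bucket?region=us-east-1"

# Equivalent environment variable form:
export WARPSTREAM_MULTIREGION="us-east-1,us-west-2"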

Spreading your data plane across multiple regions

The other half of a multi-region deployment is to not only use a multi-region control plane, but also to write your data to object storage buckets in several regions. You can do this by setting a warpstream_multi:// destination at agent startup with the -bucketURL flag, instead of a single s3:// (or other object storage equivalent) destination. The format is warpstream_multi://$BUCKET_1_URL<>$BUCKET_2_URL<>$BUCKET_3_URL. To learn how to construct these URLs, see the Object Storage Configuration page.

Here's an example of a multi-bucket destination with three buckets spread across three AWS regions:

-bucketURL "warpstream_multi://s3://bucket-a?region=us-east-1<>s3://bucket-b?region=us-west-2<>s3://bucket-c?region=us-east-2"
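Combining the two halves, here is a sketch of an agent startup that pairs the multiregion_us1 control plane configuration from the table above with a multi-region data plane (again, the warpstream agent invocation itself is an illustrative assumption; only the -multiregion and -bucketURL flags are specific to multi-region):

# Sketch: multi-region control plane + multi-region data plane.
warpstream agent \
  -multiregion "us-east-1,us-west-2" \
  -bucketURL "warpstream_multi://s3://bucket-a?region=us-east-1<>s3://bucket-b?region=us-west-2<>s3://bucket-c?region=us-east-2"

With three buckets, writes only need to succeed on a quorum of the buckets (two of the three here), so losing one region's object storage doesn't cause data loss for the cluster.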

Leader election

To avoid write conflicts, agents only talk to a single region at a given time. To do this, we run an internal leader election that chooses one of the regions as the current leader.

This is transparent to you: you should get a similar experience regardless of which region is the current leader, with minimal impact other than a brief (a few seconds) spike in latency during a leadership transition.

Automatically choosing the fastest region available

Some of the control plane regions that make up a multi-region configuration have lower latency than others, due to the nature of multi-region storage. By default, the election process runs in what we call "Auto Mode", which automatically chooses one of the regions for you.

You also have the option to opt out of Auto Mode and choose a specific region as "preferred". This tells the control planes to prefer that region as leader whenever it is healthy. You can do this from the "Multi-Region" tab on the cluster's detail page, where you can also see which region currently holds leadership.

To learn more about Multi-Region Clusters, contact us.
