Configure Clients to Eliminate AZ Networking Costs

How to configure your Kafka clients to keep all traffic zone-local.

With WarpStream, there are no Availability Zone (AZ) networking costs between agents, so you can produce and consume data across AZs without incurring additional networking expenses. The same applies to client-agent communication: by connecting your Kafka clients only to agents within the same AZ, you can eliminate AZ networking costs there as well.

Requirements for Zonal Alignment of Kafka Clients

To keep your Kafka clients connected to agents within the same availability zone, there must be at least one agent running in the same availability zone as your clients, and the agents must know which availability zone each client is in. There are two ways to provide the availability zone information:

  1. Specifying the availability zone in your client ID

  2. Mapping subnets to availability zones in the Agent configuration

Specifying the availability zone in your client ID

Append the following value to your Kafka client's ClientID: ws_az=<your-az>. This flag indicates the AZ in which the client is operating.

Example

Here is an example of how to set up the clientID with the AZ flag:

// Look up the AZ this application is running in (e.g. via the cloud provider's instance metadata service).
availabilityZone := lookupAZ()
// Embed the AZ in the Kafka ClientID via the ws_az flag.
clientID := fmt.Sprintf("ws_az=%s", availabilityZone)
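
For illustration only, here is a minimal sketch of passing that ClientID to a Kafka client. It assumes the franz-go client library and a hypothetical newKafkaClient helper and bootstrap URL; any Kafka client that lets you set the client ID works the same way:

import (
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

// newKafkaClient builds a Kafka client whose ClientID carries the ws_az flag,
// so the agents can keep this client's traffic within its own AZ.
func newKafkaClient(bootstrapURL, availabilityZone string) (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers(bootstrapURL),
		kgo.ClientID(fmt.Sprintf("ws_az=%s", availabilityZone)),
	)
}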

Mapping subnets to availability zones in the Agent configuration

Pass a subnet mapping to the agents via the -zonedCIDRBlocks command-line flag or the WARPSTREAM_ZONED_CIDR_BLOCKS environment variable. This mapping allows the agents to determine which availability zone each Kafka client is sending traffic from. The value must be a <>-delimited list of availability zone to CIDR range pairs, where each pair consists of an AZ, followed by an @, followed by a comma-separated list of CIDR blocks covering the IPs used by the Kafka clients in that AZ. For example:

us-east-1a@10.0.0.0/19,10.0.32.0/19<>us-east-1b@10.0.64.0/19<>us-east-1c@10.0.96.0/19

This indicates that Kafka clients with IPs in 10.0.0.0/19 or 10.0.32.0/19 belong to us-east-1a, those with IPs in 10.0.64.0/19 belong to us-east-1b, and those with IPs in 10.0.96.0/19 belong to us-east-1c.

Note that if an AZ is appended to the Kafka client's ClientID and a subnet mapping is also provided to the agents, but the two values conflict, the AZ from the ClientID is used as the source of truth.

Our warpstream-go library has sample code that demonstrates how to query for your application's availability zone in every major cloud.
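
As a rough sketch of what such a lookup can look like on AWS EC2 (assuming the IMDSv2 instance metadata endpoint is reachable from your workload; other clouds expose similar metadata services), a lookupAZ helper (here also returning an error) might be implemented like this:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// lookupAZ queries the EC2 instance metadata service (IMDSv2) and returns
// the availability zone the instance is running in, e.g. "us-east-1a".
func lookupAZ() (string, error) {
	client := &http.Client{Timeout: 2 * time.Second}

	// Step 1: request a short-lived IMDSv2 session token.
	tokenReq, _ := http.NewRequest(http.MethodPut,
		"http://169.254.169.254/latest/api/token", nil)
	tokenReq.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "60")
	tokenResp, err := client.Do(tokenReq)
	if err != nil {
		return "", err
	}
	defer tokenResp.Body.Close()
	token, err := io.ReadAll(tokenResp.Body)
	if err != nil {
		return "", err
	}

	// Step 2: use the token to read the instance's availability zone.
	azReq, _ := http.NewRequest(http.MethodGet,
		"http://169.254.169.254/latest/meta-data/placement/availability-zone", nil)
	azReq.Header.Set("X-aws-ec2-metadata-token", string(token))
	azResp, err := client.Do(azReq)
	if err != nil {
		return "", err
	}
	defer azResp.Body.Close()
	az, err := io.ReadAll(azResp.Body)
	if err != nil {
		return "", err
	}
	return string(az), nil
}

func main() {
	az, err := lookupAZ()
	if err != nil {
		panic(err)
	}
	// Print the value to use in your Kafka client's ClientID.
	fmt.Println("ws_az=" + az)
}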