Client Configuration

This page explains how to configure your Apache Kafka client with WarpStream.

Don't forget to review our documentation for tuning your Kafka client for maximum performance with WarpStream once you're done. A few small changes in client configuration can result in 10-20x higher throughput when using WarpStream.

WarpStream is API-compatible with Apache Kafka, so you can connect your clients to the WarpStream Agents by setting the WarpStream Application Bootstrap URL (obtained from the WarpStream console) as the value of your Kafka client's bootstrap servers setting. For example, using the librdkafka-based confluent-kafka-go client:

// lookupAZ is a placeholder for your own logic that determines which
// availability zone this application is running in (see the end of this page).
availabilityZone := lookupAZ()
// Generate a unique session ID for this client instance.
sessionID := uuid.New().String()

clientID := fmt.Sprintf(
	"warpstream_session_id=%s,warpstream_az=%s",
	sessionID, availabilityZone)
bootstrapServer := "api-80ba097c-d4ef-4e0b-8e86-d05b80fee6ed.kafka.discoveryv2.prod-z.us-east-1.warpstream.com:9092"

producerConfig := map[string]kafka.ConfigValue{
	// Obtain your bootstrap URL from the "Deploy" tab of the
	// virtual cluster view in the WarpStream console.
	"bootstrap.servers": bootstrapServer,
	"broker.address.family": "v4",
	"log.connection.close":  "false",
	"client.id": clientID,
}

config := kafka.ConfigMap(producerConfig)
producer, err := kafka.NewProducer(&config)
if err != nil {
	return fmt.Errorf("error initializing kafka producer: %w", err)
}

Developing locally? The WarpStream Application Bootstrap URLs don't always work with residential ISPs in the mix. Consider using localhost:9092 as the bootstrap server for local development instead.
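
For example, here is a minimal sketch of how you might switch between a local Agent and your real cluster. The WARPSTREAM_BOOTSTRAP_URL environment variable name is purely illustrative (not a WarpStream convention), and the snippet assumes the standard library os package:

// Choose the bootstrap server from the environment so the same code can talk
// to a local Agent during development and the real cluster in production.
// NOTE: WARPSTREAM_BOOTSTRAP_URL is a hypothetical variable name.
bootstrapServer := os.Getenv("WARPSTREAM_BOOTSTRAP_URL")
if bootstrapServer == "" {
	// Fall back to a WarpStream Agent running on your workstation.
	bootstrapServer = "localhost:9092"
}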

If you just want to get up and running quickly, you can use a regular client ID and skip encoding your application's availability zone in the client ID. However, beware that you may incur higher costs due to inter-zone networking.

You can read more about the WarpStream service discovery system in our Service Discovery reference documentation. To take advantage of WarpStream's zone-aware service discovery and achieve good load balancing, you must encode your application's availability zone in your Kafka client's client ID using the format shown in the code sample above.

Follow this documentation to learn how to determine your application's availability zone in all major cloud environments and properly template your client ID.
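
If your application runs on AWS EC2, one way to implement the lookupAZ helper from the example above is to query the instance metadata service. This is only a rough sketch under that assumption (it uses the standard library net/http, io, and time packages); on other clouds, or on Kubernetes, you would typically read the zone from a different metadata source or inject it via an environment variable:

import (
	"io"
	"net/http"
	"time"
)

// lookupAZ returns the availability zone this process is running in.
// Sketch for AWS EC2 only: it queries the instance metadata service (IMDSv2).
func lookupAZ() string {
	client := &http.Client{Timeout: 2 * time.Second}

	// Request a short-lived IMDSv2 session token.
	tokenReq, err := http.NewRequest("PUT", "http://169.254.169.254/latest/api/token", nil)
	if err != nil {
		return ""
	}
	tokenReq.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "60")
	tokenResp, err := client.Do(tokenReq)
	if err != nil {
		return ""
	}
	defer tokenResp.Body.Close()
	token, err := io.ReadAll(tokenResp.Body)
	if err != nil {
		return ""
	}

	// Fetch the placement availability zone, e.g. "us-east-1a".
	azReq, err := http.NewRequest("GET", "http://169.254.169.254/latest/meta-data/placement/availability-zone", nil)
	if err != nil {
		return ""
	}
	azReq.Header.Set("X-aws-ec2-metadata-token", string(token))
	azResp, err := client.Do(azReq)
	if err != nil {
		return ""
	}
	defer azResp.Body.Close()
	az, err := io.ReadAll(azResp.Body)
	if err != nil {
		return ""
	}
	return string(az)
}

If the lookup fails and lookupAZ returns an empty string, you may want to fall back to a plain client ID (without the warpstream_az field) rather than reporting an empty zone.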
