
Protocol and Feature Support

The current implementation of the Apache Kafka protocol in WarpStream supports the basic ability to create topics, delete topics, produce data, consume data, and use consumer groups to load balance consumers and track offsets. Specifically, the following Apache Kafka messages are currently supported:

  1. Produce

  2. InitProducerID

  3. Fetch

  4. ListOffsets

  5. Metadata

  6. OffsetCommit

  7. OffsetFetch

  8. FindCoordinator

  9. JoinGroup

  10. Heartbeat

  11. SyncGroup

  12. OffsetDelete

  13. ApiVersions

  14. CreateTopics

  15. DeleteTopics

  16. ListGroups

  17. AlterConfigs

  18. DescribeConfigs

  19. DescribeCluster

  20. DescribeGroups

  21. DeleteGroup

  22. LeaveGroup

  23. CreateACLs

  24. DescribeACLs

  25. DeleteACLs

  26. CreatePartitions

  27. AddPartitionsToTxn

  28. AddOffsetsToTxn

  29. EndTxn

  30. TxnOffsetCommit

  31. DescribeTransactions

  32. ListTransactions

Note that because of WarpStream's stateless architecture, many of the Apache Kafka protocol messages are irrelevant. For example, messages like:

  1. AlterReplicaLogDirs

  2. ElectLeaders

  3. ListPartitionReassignments

  4. AlterPartitionReassignments

  5. DescribeQuorum

  6. UnregisterBroker

  7. ControlledShutdown

  8. StopReplica

  9. LeaderAndIsr

have no meaning or value when using WarpStream because data durability and replication are managed by the underlying object store. Partitions do not have assigned "leaders", and clean shutdowns are automatic because the Agents are stateless.

Transactions / Exactly Once Semantics

WarpStream supports Apache Kafka transactions and Exactly Once Semantics.

To use transactions, set enable.idempotence to true and configure a non-empty transactional.id in your client configuration.
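For example, in a Java-style properties file a transactional producer could be configured as follows (the transactional.id value here is only an illustrative placeholder; use a stable identifier unique to each producer instance):

```properties
# Required for transactions / exactly-once semantics
enable.idempotence=true
# Any non-empty, stable identifier unique to this producer instance
transactional.id=my-transactional-producer
# acks=all is implied by enable.idempotence; shown here for clarity
acks=all
```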

Schema Registry

WarpStream has a BYOC Schema Registry built into the Agent binary. It supports most of the APIs listed in Confluent Schema Registry's API documentation. The following features are not supported:

  • Data contracts: the metadata and ruleSet fields in schemas are ignored.

  • The /subjects/{subject}/metadata endpoint (since metadata is not supported).

  • The /mode endpoints.

  • The /exporters endpoints.

  • Subject aliases.

  • Compatibility groups.
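As a sketch of what is supported, the standard Confluent-style schema registration call (POST /subjects/{subject}/versions) takes a JSON body that wraps the schema as an escaped JSON string. The schema and subject below are illustrative, not part of the WarpStream docs:

```python
import json

# Illustrative Avro schema for registration via the standard
# Confluent-style endpoint: POST /subjects/{subject}/versions
schema = {
    "type": "record",
    "name": "User",
    "fields": [{"name": "id", "type": "long"}],
}

# The request body embeds the schema as an escaped JSON string.
# Any "metadata" or "ruleSet" fields would be ignored by WarpStream.
body = json.dumps({"schemaType": "AVRO", "schema": json.dumps(schema)})
print(body)
```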

Record Retention Based on Custom Timestamps

Apache Kafka implements record retention using the timestamps within the records themselves. If the client sets the timestamp using the CREATE_TIME timestamp type, it can send a record with a timestamp far in the future or the past, and the record will be deleted based on that timestamp rather than on the actual passage of time.

WarpStream differs here: retention is based solely on the wall-clock time at which the record was created. You can still set a custom timestamp on a record, but it will not be used to calculate retention. The retention mechanism in WarpStream strictly adheres to the actual creation time of the record.
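The difference can be sketched with a toy model (illustrative Python only, not WarpStream's actual retention code; the 7-day retention value is just an example):

```python
import time

RETENTION_MS = 7 * 24 * 60 * 60 * 1000  # e.g. retention.ms = 7 days

def kafka_style_expired(record_timestamp_ms: int, now_ms: int) -> bool:
    """Kafka with CREATE_TIME: retention is judged against the timestamp
    embedded in the record, which the client controls."""
    return now_ms - record_timestamp_ms > RETENTION_MS

def warpstream_style_expired(created_at_ms: int, now_ms: int) -> bool:
    """WarpStream: retention is judged against the wall-clock time the
    record was actually created, regardless of its embedded timestamp."""
    return now_ms - created_at_ms > RETENTION_MS

now = int(time.time() * 1000)
claimed_timestamp = now - 30 * 24 * 60 * 60 * 1000  # client claims "30 days old"
actual_creation = now                                # but it was written just now

# Under Kafka semantics the record is immediately eligible for deletion;
# under WarpStream semantics it is retained for the full 7 days.
print(kafka_style_expired(claimed_timestamp, now))       # True
print(warpstream_style_expired(actual_creation, now))    # False
```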

Supported Clients

WarpStream should work with any correctly implemented Kafka client. Officially, we support franz-go and librdkafka, as well as the standard Java client. Please check our documentation on tuning your clients for maximum performance with WarpStream.

Known Incompatibilities

  1. The current implementation does not support any of the __tagged__ fields in the protocol and ignores them entirely.

  2. The current implementation does not enforce throttling and ignores all throttling-related fields/settings.

  3. All Kafka protocol requests have a maximum timeout of 15s (except for JoinGroup and SyncGroup).

We're continuously adding support for more Apache Kafka features and message types. Please contact us for specific feature requests or if you notice any discrepancies.

Last updated 5 months ago