Protocol and Feature Support
The current implementation of the Apache Kafka protocol in WarpStream supports the basic ability to create topics, delete topics, produce data, consume data, and use consumer groups to load balance consumers and track offsets. Specifically, the following Apache Kafka messages are currently supported:
Produce
InitProducerID
Fetch
ListOffsets
Metadata
OffsetCommit
OffsetFetch
FindCoordinator
JoinGroup
Heartbeat
SyncGroup
OffsetDelete
ApiVersions
CreateTopics
DeleteTopics
ListGroups
AlterConfigs
DescribeConfigs
DescribeCluster
DescribeGroups
DeleteGroups
LeaveGroup
CreateACLs
DescribeACLs
DeleteACLs
CreatePartitions
AddPartitionsToTxn
AddOffsetsToTxn
EndTxn
TxnOffsetCommit
DescribeTransactions
ListTransactions
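As an illustration (a minimal sketch, not taken from the WarpStream docs), the following uses the standard Java AdminClient to exercise two of the messages above, CreateTopics and DescribeCluster. The bootstrap address, topic name, and partition count are placeholders; point the client at wherever your Agents are reachable.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; use the address your WarpStream Agents advertise.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // CreateTopics: request a hypothetical topic with 3 partitions.
            NewTopic topic = new NewTopic("example-topic", 3, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();

            // Metadata / DescribeCluster: list the nodes the Agents advertise.
            admin.describeCluster().nodes().get()
                 .forEach(node -> System.out.println("Node: " + node));
        }
    }
}
```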
We're continuously adding support for more Apache Kafka features and message types. Please contact us for specific feature requests or if you notice any discrepancies.
Note that because of WarpStream's stateless architecture, many of the Apache Kafka protocol messages are irrelevant. For example, messages like:
AlterReplicaLogDirs
ElectLeaders
ListPartitionReassignments
AlterPartitionReassignments
DescribeQuorum
UnregisterBroker
ControlledShutdown
StopReplica
LeaderAndIsr
have no meaning or value when using WarpStream because data durability and replication are managed by the underlying object store. Partitions do not have assigned "leaders", and clean shutdown is automated by virtue of the Agents being stateless.
Transactions / Exactly Once Semantics
WarpStream supports Apache Kafka transactions and Exactly Once Semantics.
To make use of them, set enable.idempotence to true and add a non-empty transactional.id to your client configuration.
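For example, here is a minimal sketch of a transactional produce with the standard Java client; the bootstrap address, topic, and transactional.id value are placeholders, and error handling is reduced to aborting the transaction.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProduceExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Required for transactions / exactly-once semantics:
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-transactional-id"); // non-empty, hypothetical ID

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
                producer.commitTransaction();
            } catch (Exception e) {
                // Abort on any failure so the transaction does not remain open.
                producer.abortTransaction();
                throw e;
            }
        }
    }
}
```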
Schema Registry
WarpStream has a BYOC Schema Registry built into the Agent binary. Check out the documentation for how it works.
WarpStream BYOC Schema Registry supports most APIs listed in Confluent Schema Registry's API documentation (a brief usage sketch follows the list below). Here is the list of features it doesn't support:
It doesn't support data contracts, so the metadata and ruleSet fields in the schemas are ignored.
Since metadata isn't supported, it doesn't support the /subjects/{subject}/metadata endpoint.
It doesn't support /mode endpoints.
It doesn't support /exporters endpoints.
It doesn't support subject aliases.
It doesn't support compatibility groups.
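For reference, here is a minimal sketch (with a placeholder Agent address and a hypothetical subject name) that registers an Avro schema using the standard Confluent-compatible REST endpoint for creating a new schema version under a subject.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterSchemaExample {
    public static void main(String[] args) throws Exception {
        // Placeholder address; use the address where your Agent serves the Schema Registry API.
        String registryUrl = "http://localhost:9094";

        // An Avro record schema, JSON-escaped inside the standard {"schema": "..."} payload.
        String payload = "{\"schema\": \"{\\\"type\\\": \\\"record\\\", \\\"name\\\": \\\"Example\\\", "
                + "\\\"fields\\\": [{\\\"name\\\": \\\"id\\\", \\\"type\\\": \\\"long\\\"}]}\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(registryUrl + "/subjects/example-value/versions"))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```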
Record Retention Based on Custom Timestamps
Kafka implements record retention using the timestamps within the records themselves. If the client sets the timestamp using the CREATE_TIME timestamp type, it can send a record with a timestamp far in the future or the past, and the record will be deleted based on that timestamp rather than on the actual passage of time.
However, WarpStream differs in this aspect. In WarpStream, retention is based solely on the real time at which the record was created. Although you can set a custom timestamp for a record, it will not be used to calculate retention. The retention mechanism in WarpStream strictly adheres to the actual creation time of the record.
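To make the difference concrete, here is a minimal sketch (placeholder bootstrap address and topic) that produces a record whose timestamp is set 30 days in the past. The custom timestamp travels with the record, but as described above it is not used to calculate retention in WarpStream.

```java
import java.time.Instant;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CustomTimestampExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // A timestamp 30 days in the past, set explicitly on the record.
        long customTimestamp = Instant.now().minusSeconds(30L * 24 * 60 * 60).toEpochMilli();

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The custom timestamp is sent with the record, but WarpStream's
            // retention clock is based on the actual write time, so this record
            // is not deleted 30 days "early".
            producer.send(new ProducerRecord<>("example-topic", null, customTimestamp, "key", "value"));
            producer.flush();
        }
    }
}
```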
Supported Clients
WarpStream should work with any correctly implemented Kafka client. Officially, we support librdkafka, franz-go, and the standard Java client. Please check our documentation on tuning your clients for maximum performance with WarpStream.
Known Incompatibilities
The current implementation does not process any of the tagged fields in the protocol and ignores them entirely.
The current implementation does not enforce throttling and ignores all throttling-related fields/settings.
All Kafka protocol requests have a maximum timeout of 15s (except for JoinGroup and SyncGroup).