# Protocol and Feature Support

The current implementation of the Apache Kafka protocol in WarpStream supports the basic ability to create topics, delete topics, produce data, consume data, and use consumer groups to load balance consumers and track offsets. Specifically, the following Apache Kafka messages are currently supported:

1. `Produce`
2. `InitProducerID`
3. `Fetch`
4. `ListOffsets`
5. `Metadata`
6. `OffsetCommit`
7. `OffsetFetch`
8. `FindCoordinator`
9. `JoinGroup`
10. `Heartbeat`
11. `SyncGroup`
12. `OffsetDelete`
13. `ApiVersions`
14. `CreateTopics`
15. `DeleteTopics`
16. `ListGroups`
17. `AlterConfigs`
18. `DescribeConfigs`
19. `DescribeCluster`
20. `DescribeGroups`
21. `DeleteGroup`
22. `LeaveGroup`
23. `CreateACLs`
24. `DescribeACLs`
25. `DeleteACLs`
26. `CreatePartitions`
27. `AddPartitionsToTxn`
28. `AddOffsetsToTxn`
29. `EndTxn`
30. `TxnOffsetCommit`
31. `DescribeTransactions`
32. `ListTransactions`

We're continuously adding support for more Apache Kafka features and message types. Please [contact us](https://www.warpstream.com/contact-us) for specific feature requests or if you notice any discrepancies.

Note that because of WarpStream's stateless architecture, many of the Apache Kafka protocol messages are irrelevant. For example, messages like:

1. `AlterReplicaLogDirs`
2. `ElectLeaders`
3. `ListPartitionReassignments`
4. `AlterPartitionReassignments`
5. `DescribeQuorum`
6. `UnregisterBroker`
7. `ControlledShutdown`
8. `StopReplica`
9. `LeaderAndIsr`

have no meaning or value when using WarpStream because data durability and replication are managed by the underlying object store. Partitions do not have assigned "leaders", and clean shutdowns are automatic by virtue of the Agents being stateless.

### Transactions / Exactly Once Semantics

WarpStream supports Apache Kafka transactions and Exactly Once Semantics.

To use them, set `enable.idempotence` to `true` and configure a non-empty `transactional.id` in your client configuration.
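As an illustrative sketch (not a complete client), the two settings above can be validated before constructing a producer. The bootstrap address and `transactional.id` values below are placeholders:

```python
# Minimal sketch of a transactional producer configuration for a
# librdkafka-based client such as confluent-kafka-python. The broker
# address and transactional.id are placeholder values.
producer_config = {
    "bootstrap.servers": "localhost:9092",    # placeholder address
    "enable.idempotence": True,               # required for transactions
    "transactional.id": "my-app-producer-1",  # must be non-empty and stable
}

def validate_transactional_config(config: dict) -> None:
    """Raise if the config cannot support Kafka transactions."""
    if not config.get("enable.idempotence"):
        raise ValueError("enable.idempotence must be true for transactions")
    if not config.get("transactional.id"):
        raise ValueError("transactional.id must be a non-empty string")

validate_transactional_config(producer_config)
```

Each producer instance should use its own stable `transactional.id`, since Kafka uses it to fence zombie producers across restarts.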

### Schema Registry

WarpStream has a BYOC Schema Registry built into the Agent binary. Check out the [documentation](https://docs.warpstream.com/warpstream/byoc/schema-registry/warpstream-byoc-schema-registry) for how it works.

WarpStream BYOC Schema Registry supports most APIs listed in Confluent Schema Registry's [API documentation](https://docs.confluent.io/platform/current/schema-registry/develop/api.html). The following features are not supported:

* Data contracts: the `metadata` and `ruleSet` fields in schemas are ignored.
* The `/subjects/{subject}/metadata` endpoint (since `metadata` isn't supported).
* The `/mode` endpoints.
* The `/exporters` endpoints.
* Subject aliases.
* Compatibility groups.
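For context, the supported register-schema endpoint (`POST /subjects/{subject}/versions` in the Confluent-compatible API) takes a JSON body in which the schema itself is an embedded JSON string. A hypothetical sketch of building such a body; per the limitations above, any `metadata` or `ruleSet` fields included here would be ignored:

```python
import json

# Example Avro schema to register (illustrative only).
avro_schema = {
    "type": "record",
    "name": "User",
    "fields": [{"name": "id", "type": "long"}],
}

# Request body for POST /subjects/{subject}/versions. Note that the
# "schema" field is the schema serialized as a string, not nested JSON.
request_body = {
    "schemaType": "AVRO",
    "schema": json.dumps(avro_schema),
}

payload = json.dumps(request_body)
```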

### Schema Validation

WarpStream supports server-side schema validation, which validates not only that each record contains a valid schema ID, but also that the record actually conforms to the corresponding schema.

Currently, WarpStream supports two types of schema registries:

* Kafka-compatible Schema Registry
* AWS Glue Schema Registry

It supports the following data formats: Avro, JSON Schema.

For limitations and features not supported, check out the [Enforce Schemas](/warpstream/schema-registry/schema-validation.md#limitations) page.
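Records validated against a Kafka-compatible schema registry are conventionally framed in the Confluent wire format: a zero magic byte, then the schema ID as a 4-byte big-endian integer, then the serialized payload. A minimal sketch of that framing (a general illustration, not WarpStream internals):

```python
import struct

def encode_record(schema_id: int, payload: bytes) -> bytes:
    # Confluent wire format: magic byte 0, 4-byte big-endian schema ID,
    # then the Avro/JSON-serialized payload.
    return struct.pack(">bI", 0, schema_id) + payload

def decode_schema_id(record: bytes) -> int:
    # Reject records that are not framed with the schema-registry header.
    if len(record) < 5:
        raise ValueError("record too short to contain a schema ID header")
    magic, schema_id = struct.unpack(">bI", record[:5])
    if magic != 0:
        raise ValueError("not a schema-registry framed record")
    return schema_id
```

Server-side validation can then resolve the extracted schema ID against the registry and check that the payload actually conforms to that schema.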

### Record Retention Based on Custom Timestamps

Apache Kafka implements record retention using the timestamps within the records themselves. If the client sets the timestamp using the `CREATE_TIME` timestamp type, it can send a record with a timestamp far in the future or the past, and the record will be deleted based on that timestamp rather than on the real time that has elapsed.

WarpStream differs in this respect. In WarpStream, retention is based solely on the real time at which the record was created. You can still set a custom timestamp on a record, but it will not be used to calculate retention. The retention mechanism in WarpStream strictly adheres to the actual creation time of the record.
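The difference can be illustrated with a toy expiry calculation (an illustration of the behavior described above, not WarpStream internals). The retention window is a placeholder value:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)  # placeholder retention window

def expires_at(ingestion_time: datetime, record_timestamp: datetime) -> datetime:
    # record_timestamp is deliberately unused: a CREATE_TIME far in the
    # past or future neither shortens nor extends retention in WarpStream.
    return ingestion_time + RETENTION

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
year_old_ts = now - timedelta(days=365)

# A record carrying a year-old client timestamp expires at the same
# time as one stamped "now", because only ingestion time matters.
assert expires_at(now, year_old_ts) == expires_at(now, now)
```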

### Supported Clients

WarpStream should work with any correctly implemented Kafka client. Officially we support [librdkafka](https://github.com/confluentinc/librdkafka), [franz-go](https://github.com/twmb/franz-go), and the standard Java client. Please check [our documentation on tuning your clients for maximum performance](/warpstream/kafka/configure-kafka-client/tuning-for-performance.md) with WarpStream.

### Known Incompatibilities

1. The current implementation does not parse any of the tagged fields in the protocol and ignores them entirely.
2. The current implementation does not enforce throttling and ignores all throttling-related fields/settings.
3. All Kafka protocol requests have a maximum timeout of 15s (except for JoinGroup and SyncGroup).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.warpstream.com/warpstream/kafka/reference/protocol-and-feature-support.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
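A question with spaces and punctuation must be URL-encoded before it is placed in the `ask` parameter. A small sketch of building such a request URL (the example question is illustrative):

```python
from urllib.parse import urlencode

BASE = (
    "https://docs.warpstream.com/warpstream/kafka/reference/"
    "protocol-and-feature-support.md"
)

def ask_url(question: str) -> str:
    # urlencode handles percent-/plus-encoding of the natural-language question.
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("Does WarpStream support Kafka transactions?")
```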
