Low Latency Clusters

Configure the WarpStream Agent with S3 Express to reduce Produce latency.

By default, WarpStream is tuned for maximum throughput and minimal costs at the expense of higher latency. However, WarpStream clusters can be tuned to provide much lower Produce latency in exchange for higher costs.

Batch Timeout

The WarpStream Agents accept a -batchTimeout flag (WARPSTREAM_BATCH_TIMEOUT environment variable) that controls how long the Agents buffer data in-memory before flushing it to object storage. Produce requests are never acknowledged to the client before the data is durably persisted in object storage, so this option has no impact on durability or correctness, but it does directly impact the latency of Produce requests.

The default -batchTimeout in the Agents is 250ms, but the value can be decreased to as low as 50ms to reduce Produce latency. Lowering this value results in higher cloud infrastructure costs because the Agents have to create more files in object storage and therefore incur higher PUT request API fees.
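If you template the Agents' deployment with Terraform (as in the S3 Express example later on this page), the change is a single environment variable. The sketch below only shows the value; warpstream_agent_env is just an illustrative name, and wiring it into your container definition, Helm values, or systemd unit depends on how you deploy the Agents:

locals {
  # Lower the in-memory buffering window from the default 250ms to the 50ms
  # floor. Expect more files in object storage, and higher PUT request fees,
  # in exchange for lower Produce latency.
  warpstream_agent_env = {
    WARPSTREAM_BATCH_TIMEOUT = "50ms"
  }
}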

Control Plane Latency

Similar to the Agents, the WarpStream control plane batches some virtual cluster operations, resulting in higher latency in exchange for reduced control plane costs. WarpStream does not currently provide a way to tune this value in a self-serve manner, but we can adjust it upon request. Email support@warpstream.com if you need to tune the control plane batching interval.

S3 Express

S3 Express One Zone is a storage class of AWS S3 that provides much lower latency for writes and reads, but only stores data in a single availability zone. The WarpStream Agents have native support for S3 Express and can use it to store newly written data. Combined with a reduced batch timeout, S3 Express can reduce the P99 latency of Produce requests to less than 150ms.

This latency improvement comes with tradeoffs that WarpStream helps you mitigate so that you get the best of both worlds. S3 Express offers faster reads and writes, but charges more for storage. It also provides less resilience than S3 "classic", since by default it doesn't replicate data across multiple availability zones. However, the WarpStream Agents can use different buckets for data ingestion and data compaction. With a few simple configuration changes, the Agents ingest data into S3 Express to reduce Produce request latency, but then compact that data into a regular object storage bucket. Think of this as a form of tiered storage within the object store itself.

This is the recommended way to leverage S3 Express with WarpStream because the storage cost of retaining data in S3 Express is ~7x higher than in regular object storage. And that's before taking replication into account: to prevent availability zone failures from interrupting your Agents, your data should be replicated across multiple single-zone buckets. Multi-bucket replication makes S3 Express as resilient as S3 "classic", but drives up your storage costs even further. By restricting S3 Express to data ingestion only, you limit the cost increase to API calls and throughput while saving on storage. For more details on S3 Express One Zone pricing, see AWS's documentation.

The first step to using S3 Express is to create the buckets. This can be done in the AWS console or with an infrastructure-as-code tool like Terraform. Below is a sample Terraform block:

locals {
  # S3 Express is only available in a subset of zones in each region, and
  # directory buckets are referenced by Availability Zone ID (e.g. use1-az4)
  # rather than AZ name; check the AWS docs for the IDs available in your
  # region. This is fine though because we don't get billed for inter-zone
  # networking between EC2 and S3 Express buckets.
  s3_express_zones = ["use1-az4", "use1-az5", "use1-az6"]
}

resource "aws_s3_directory_bucket" "warpstream_s3_express_buckets" {
  count = length(local.s3_express_zones)

  # The Availability Zone ID has to be encoded in the bucket name in this
  # exact format, see docs:
  # https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_directory_bucket
  bucket          = "warpstream-s3-express--${local.s3_express_zones[count.index]}--x-s3"
  data_redundancy = "SingleAvailabilityZone"
  type            = "Directory"

  location {
    name = local.s3_express_zones[count.index]
    type = "AvailabilityZone"
  }
}

Note that we created three S3 Express directory buckets. The reason is that the WarpStream Agents flush ingestion files to all of the S3 Express directory buckets and then wait for a quorum of acknowledgements before considering the data durable. In the future we will allow more flexible configurations, but for now at least 3 buckets must be configured, and every write must succeed on at least 2 of them before it is considered successful.

In addition to creating the buckets, you'll also need to grant your WarpStream Agents' IAM role one extra permission: s3express:CreateSession.
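If you manage that role with Terraform, a minimal sketch of the extra policy could look like the block below; aws_iam_role.warpstream_agent is a placeholder for however your Agent role is actually defined:

data "aws_iam_policy_document" "warpstream_s3_express" {
  statement {
    # S3 Express uses session-based authentication, so the Agents need to be
    # able to create sessions against the directory buckets they write to.
    actions   = ["s3express:CreateSession"]
    resources = aws_s3_directory_bucket.warpstream_s3_express_buckets[*].arn
  }
}

resource "aws_iam_role_policy" "warpstream_s3_express" {
  name   = "warpstream-s3-express"
  role   = aws_iam_role.warpstream_agent.id # placeholder for your Agent role
  policy = data.aws_iam_policy_document.warpstream_s3_express.json
}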

Once you've created the buckets and updated the WarpStream Agent IAM role, the final step is to change the Agent configuration to write newly ingested data to a quorum of the S3 Express directory buckets instead of the regular object storage bucket. This is done by deleting the -bucketURL flag (WARPSTREAM_BUCKET_URL environment variable) and replacing it with two new flags:

  1. -ingestionBucketURL (WARPSTREAM_INGESTION_BUCKET_URL)

  2. -compactionBucketURL (WARPSTREAM_COMPACTION_BUCKET_URL)

The value of -compactionBucketURL should point to a classic S3 bucket configured for WarpStream, i.e. the same value as -bucketURL in the default object store configuration.

The value of -ingestionBucketURL should be a <>-delimited list of S3 Express directory bucket URLs. For example:

s3://warpstream-s3-express--use1-az4--x-s3?region=us-east-1<>s3://warpstream-s3-express--use1-az5--x-s3?region=us-east-1<>s3://warpstream-s3-express--use1-az6--x-s3?region=us-east-1
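If you created the directory buckets with the Terraform shown earlier, you can derive both values instead of hand-writing them; the compaction URL below is a placeholder for your existing bucket:

locals {
  # <>-delimited list of the S3 Express directory buckets, used as the value
  # of -ingestionBucketURL (WARPSTREAM_INGESTION_BUCKET_URL).
  ingestion_bucket_url = join("<>", [
    for b in aws_s3_directory_bucket.warpstream_s3_express_buckets :
    "s3://${b.bucket}?region=us-east-1"
  ])

  # -compactionBucketURL (WARPSTREAM_COMPACTION_BUCKET_URL) keeps pointing at
  # your existing "classic" S3 bucket, i.e. the old -bucketURL value.
  compaction_bucket_url = "s3://my-warpstream-bucket?region=us-east-1"
}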

That's it! The WarpStream Agents will automatically write newly ingested data to a quorum of the S3 Express directory buckets, and then asynchronously compact those files into the regular object storage bucket. The Agents will also automatically delete files whose data has completely expired from both the S3 Express directory buckets and the regular object storage bucket.
