Deploy the Agents
How to deploy the Agents.
Remember to review our documentation on how to configure your Kafka client for WarpStream, as well as our instructions on tuning for performance once you're done. A few small changes in client configuration can result in 10-20x higher throughput when using WarpStream, and proper client configuration is required to leverage WarpStream's zone-aware discovery system.
Required Arguments
The WarpStream Agent is completely stateless and thus can be deployed however you prefer to deploy stateless containers. For example, you could use AWS ECS or a Kubernetes Deployment.
The WarpStream Docker containers can be found in the installation docs. However, if you're deploying WarpStream into a Kubernetes cluster, we highly recommend using our official Helm charts.
The Agent has four required arguments that must be passed as command-line flags:
- `bucketURL`
- `agentKey`
- `defaultVirtualClusterID`
- `region`
For example:
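The following is a minimal sketch using Docker; the image path may differ from what our installation docs list, and every argument value below is a placeholder to be replaced with real values for your cluster:

```
# Illustrative only: replace the placeholder values with the real ones
# from your WarpStream Admin Console before running.
docker run public.ecr.aws/warpstream-labs/warpstream_agent \
    agent \
    -bucketURL "s3://my-warpstream-bucket?region=us-east-1" \
    -agentKey "aks_XXXXXXXXXX" \
    -defaultVirtualClusterID "vci_XXXXXXXXXX" \
    -region "us-east-1"
```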
The values of `agentKey`, `defaultVirtualClusterID`, and `region` can be obtained from the WarpStream Admin Console.
Note that the entrypoint for the WarpStream Docker image is a multi-command binary. For production usage, the subcommand that you want to run is just called `agent`, as shown above.
Depending on the tool you're using to deploy/run containers, it can sometimes be cumbersome to provide additional arguments beyond the `agent` subcommand. In that case, all of the required arguments can be passed as environment variables instead:
- `WARPSTREAM_BUCKET_URL`
- `WARPSTREAM_AGENT_KEY`
- `WARPSTREAM_DEFAULT_VIRTUAL_CLUSTER_ID`
- `WARPSTREAM_REGION`
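For example, a minimal sketch of the same invocation as above, expressed entirely through environment variables (all values are placeholders):

```
# Illustrative only: the environment variables replace the command-line
# flags, so the agent subcommand needs no additional arguments.
docker run \
    -e WARPSTREAM_BUCKET_URL="s3://my-warpstream-bucket?region=us-east-1" \
    -e WARPSTREAM_AGENT_KEY="aks_XXXXXXXXXX" \
    -e WARPSTREAM_DEFAULT_VIRTUAL_CLUSTER_ID="vci_XXXXXXXXXX" \
    -e WARPSTREAM_REGION="us-east-1" \
    public.ecr.aws/warpstream-labs/warpstream_agent \
    agent
```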
Object Storage
`bucketURL` is the URL of the object storage bucket that the WarpStream Agent should write to. See our documentation on how to construct a proper URL for the specific object storage implementation that you're using. The Deploy tab in the WarpStream UI for your BYOC cluster also has a utility to help you construct a well-formed URL.
In addition to constructing a well-formed `bucketURL`, you'll also need to create and configure a dedicated object storage bucket for the Agents, and ensure that the Agents have the appropriate permissions to access that bucket. See our documentation on how to do that correctly.
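To make the shape concrete, here is a hypothetical S3 example (the bucket name and region are placeholders; the exact scheme and query parameters for your object store are covered in the documentation linked above):

```
# Hypothetical S3 bucket URL; GCS and Azure Blob Storage use different schemes.
-bucketURL "s3://my-warpstream-bucket?region=us-east-1"
```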
Region
The `region` flag corresponds to the region that the WarpStream control plane is running in. It matches the value that was selected when the BYOC cluster was created and can be obtained from the WarpStream UI. It does not need to match the cloud region that the Agents are deployed in, but you should pick the region that is closest to where your Agents are deployed to minimize latency.
Currently supported regions for BYOC are:
| Cloud provider | Region |
| --- | --- |
| AWS | |
| AWS | |
| AWS | |
| AWS | |
| GCP | |
You can contact us to request a new region by sending an email to support@warpstreamlabs.com.
Permissions and Ports
The WarpStream Agents need permission to perform various different operations against the object storage bucket. Review our object storage permissions documentation for more details.
In addition to object storage access, the WarpStream Agent will also need permission to communicate with https://api.prod.$CLUSTER_REGION.warpstream.com in order to write/read Virtual Cluster metadata. Raw data flowing through your WarpStream cluster will never leave your cloud account; only the metadata required to order batches of data and perform remote consensus does. You can read more about what metadata leaves your cloud account in our security and privacy considerations documentation.
Finally, the WarpStream Agent requires two ports to be exposed. For simplicity, we recommend just ensuring that the WarpStream Agent can listen on ports `9092` and `8080` by default; however, the section below contains more details about how each port is used and how to override it if necessary.
Kafka Port

- Default: `9092`
- Override: `-kafkaPort $PORT`
- Disable: `-enableKafka false`

This is the port that exposes the Kafka TCP protocol to Kafka clients. Only disable it if you don't intend to use the Kafka protocol at all.
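A minimal sketch of overriding the Kafka port (illustrative; the required flags from earlier are omitted for brevity):

```
# Serve the Kafka protocol on 9093 instead of the default 9092.
docker run public.ecr.aws/warpstream-labs/warpstream_agent \
    agent \
    -kafkaPort 9093
```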
Service Discovery
The `advertiseHostnameStrategy` flag allows you to choose how the Agent will advertise itself in WarpStream service discovery (more details here). The default, `auto-ip4`, is a good choice for most cases in production.
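For illustration, a sketch of pinning the strategy explicitly (the value shown is just the default named above; the service discovery documentation lists the alternatives):

```
# Explicitly set the advertise strategy; required flags omitted for brevity.
docker run public.ecr.aws/warpstream-labs/warpstream_agent \
    agent \
    -advertiseHostnameStrategy auto-ip4
```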
GOMAXPROCS
The WarpStream Agent uses heuristics to automatically configure itself based on the available resources. The most important way this happens is by adjusting concurrency and cache sizes based on the number of available cores.
The Agent uses standard operating system APIs to determine how many cores are available, and it prints this value at startup.
This number is usually correct, but it can be wrong depending on how the Agent is deployed. For example, the Agent may determine the wrong value when running in AWS ECS.
In general, we recommend that you manually set the `GOMAXPROCS` environment variable to the number of cores that you've made available to the Agent in your environment. For example, if you've allocated 3 cores to the Agent's container, then we recommend adding `GOMAXPROCS=3` as an environment variable.
The value of `GOMAXPROCS` must be a whole number, not a fraction. We also recommend always assigning the Agent whole-number CPU quotas so that it never runs with a fractional quota. Fractional CPU quotas can result in throttling and increased latency, since the value of `GOMAXPROCS` and the number of whole cores available to the Agent won't match.
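As a sketch of how these two settings line up (the core count and image path are illustrative), with Docker you might pin both the CPU quota and `GOMAXPROCS` to the same whole number:

```
# Give the container exactly 3 whole cores and tell the Go runtime about it,
# so the Agent's concurrency and cache heuristics match the actual quota.
# Required flags omitted for brevity.
docker run \
    --cpus=3 \
    -e GOMAXPROCS=3 \
    public.ecr.aws/warpstream-labs/warpstream_agent \
    agent
```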
Instance Selection
While the WarpStream Agents don't store data on local disks, they do use the network heavily. Therefore, we recommend using network-optimized cloud instances that provide at least 4 GiB of RAM per vCPU. We also recommend using dedicated instances rather than bin-packing the Agent containers, to avoid noisy-neighbor issues where another container running on the same VM as the Agents causes network saturation.
In AWS, we think the `m5n` and `m6in` series are a great choice for running the Agents.
In GCP, the `n4` series is a great choice, with the `c4` series as a close second.
We recommend running the Agents with at least 2 vCPUs available and at least 4 GiB of RAM per vCPU, so the `m5n.large` and `m6in.large` are the minimum recommended instance sizes in AWS.
Using much larger instances is fine as well; just make sure to set the value of `GOMAXPROCS` to ensure the Agent can make use of all the available cores even when running in a containerized environment (our Helm chart does this automatically).
Network Optimized Instances
The Agent does a lot of networking to service Apache Kafka Produce and Fetch requests, as well as to perform background compaction. The Agent uses compression and intelligent caching to minimize this, but fundamentally, WarpStream is a data-intensive system that is even more networking-heavy than Apache Kafka due to its reliance on remote storage.
Debugging latency caused by networking bottlenecks and throttling is a nightmare in all cloud environments. None of the major clouds provide sufficient instrumentation or observability to understand whether, or why, your VM's network is being throttled. Some have dynamic throttling policies that allow long bursts but then suddenly degrade with no explanation.
For all of these reasons, we recommend running the WarpStream Agents on network-optimized instances, which allows the Agents to saturate their CPU before saturating the network interface. That situation is easier to understand, observe, and auto-scale on.
Auto-Scaling
When running the Agent on the appropriate instance type as described above, we recommend auto-scaling based on CPU usage with a target of 50% average usage. Our internal testing workload runs the Agent at more than 75% CPU usage with little latency degradation, but choosing an appropriate threshold requires balancing the concerns of cost efficiency and responsiveness to bursts of traffic that happen faster than your auto-scaler can react.
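As an illustrative sketch (the Deployment name and replica bounds are hypothetical), a CPU-based autoscaler on Kubernetes targeting 50% average utilization could be created like this:

```
# Scale a hypothetical warpstream-agent Deployment between 3 and 12 replicas,
# targeting 50% average CPU utilization across the fleet.
kubectl autoscale deployment warpstream-agent \
    --cpu-percent=50 \
    --min=3 \
    --max=12
```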
Automatic Availability Zone Detection
By default, the Agent will try to reach the cloud provider's metadata endpoint to detect which availability zone it is running in and advertise this to the WarpStream control plane. This currently works on AWS, GCP, and Azure.
If the Agent appears with a `warpstream-unset-az` availability zone when you look at your virtual cluster in the WarpStream console, then zone detection failed. You should see a log containing `error determining availability zone`, possibly with an explanation of what went wrong.
For instance, a known issue on AWS EKS is that the hop limit on older EKS node groups is 1, which causes the call to the AWS metadata service to fail. Raising it to 2 should fix the issue (see the AWS docs).
As a last resort, you can set the `WARPSTREAM_AVAILABILITY_ZONE` environment variable to declare the availability zone in which your Agent is running.
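A minimal sketch (the zone value and image path are placeholders) of declaring the zone explicitly:

```
# Bypass automatic zone detection by declaring the zone directly;
# required flags omitted for brevity.
docker run \
    -e WARPSTREAM_AVAILABILITY_ZONE="us-east-1a" \
    public.ecr.aws/warpstream-labs/warpstream_agent \
    agent
```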