Kubernetes Known Issues
When running in EKS, the Availability Zone is unset or wrong
Symptom
{"time":"2025-04-02T22:23:46.467567362Z","level":"ERROR","msg":"failed to determine availability zone","git_commit":"32d51900b2423718b692a0edd29b08b11b7dd74e","git_time":"2025-04-02T18:53:04Z","git_modified":false,"go_os":"linux","go_arch":"arm64","process_generation":"081c0596-25c3-4147-88d5-d4416cb6a998","hostname_fqdn":"warp-agent-default-67d9795854-wrwh8","hostname_short":"warp-agent-default-67d9795854-wrwh8","private_ips":["10.0.115.97"],"num_vcpus":3,"kafka_enabled":true,"virtual_cluster_id":"vci_bc62be92_d3ba_4b0c_90e8_4e7bc621a693","module":"agent_azloader","error":{"message":"awsECSErr: missing metadata uri in environment (ECS_CONTAINER_METADATA_URI_V4), likely not running in ECS\nawsEC2Err: error getting metadata: operation error ec2imds: GetMetadata, canceled, context deadline exceeded\ngcpErr: error getting availablity zone: \nazureErr: error getting location: \nk8sErr: unable to get node information: nodes \"i-025487767185742f1\" is forbidden: User \"system:serviceaccount:warpstream:warpstream0-agent\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"}}
Context
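The k8sErr in the log above shows the agent falling back to the Kubernetes API after the cloud metadata probes fail, and being denied: the service account system:serviceaccount:warpstream:warpstream0-agent is not allowed to get nodes at cluster scope. A minimal sketch of the missing RBAC grant is below; the namespace and service account names are taken from the log, while the ClusterRole name is hypothetical and standard Kubernetes RBAC is assumed (this is not necessarily the exact fix the options below describe):

```yaml
# Sketch: allow the agent's service account to read node objects, which
# carry the topology.kubernetes.io/zone label the agent needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: warpstream-agent-node-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: warpstream-agent-node-reader   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: warpstream-agent-node-reader
subjects:
  - kind: ServiceAccount
    name: warpstream0-agent            # from the log's k8sErr
    namespace: warpstream              # from the log's k8sErr
```

With this grant in place, the agent's node lookup (the last fallback in the error chain) can succeed even when EC2 instance metadata is unreachable from the pod.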
Problem
Solution
Option A
Option B
Option C
When running in Kubernetes, WarpStream pods end up in the same zone or node
Symptom
Context
Problem
Solution
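One common Kubernetes remedy for this symptom, sketched here as an assumption rather than the documented fix, is to add topologySpreadConstraints to the agent Deployment's pod template so the scheduler balances pods across zones and nodes. The app: warp-agent label is hypothetical; substitute the labels your agent pods actually carry:

```yaml
# Sketch: spread agent pods across zones (hard) and nodes (soft).
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule    # refuse to schedule rather than skew zones
    labelSelector:
      matchLabels:
        app: warp-agent                 # hypothetical pod label
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway   # prefer, but do not require, node spread
    labelSelector:
      matchLabels:
        app: warp-agent
```

This stanza goes under the pod spec (spec.template.spec in a Deployment); DoNotSchedule makes the zone constraint binding while ScheduleAnyway keeps node spread best-effort.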
When an IP is reused by another agent's pod
Symptom
Context
Solution