
benchmark-consumer

Command Info

This tool runs a Kafka consumer benchmark. For more information on benchmarking, see Benchmarking.

Command Usage

Usage of benchmark-consumer:
  -bootstrap-host string
    	kafka bootstrap host (default "localhost")
  -bootstrap-port int
    	kafka bootstrap port (default 9092)
  -client-id string
    	client-id to pass along to kafka (default "warpstream-cli")
  -consumer-group string
    	the consumer group to consume with; if unset (the default), no consumer group is used
  -enable-tls
    	dial with TLS or not
  -fetch-max-bytes int
    	the maximum number of bytes a broker will try to send during a fetch; corresponds to the Java fetch.max.bytes setting (default 50000000)
  -fetch-max-partition-bytes int
    	the maximum number of bytes that will be consumed for a single partition in a fetch request; corresponds to the Java max.partition.fetch.bytes setting (default 25000000)
  -from-beginning
    	start with the earliest message present in the topic partition rather than the latest; when enabled, e2e latency can't be calculated
  -kafka-log-level string
    	the log level to set on the kafka client, accepted values are DEBUG, INFO, WARN, ERROR (default "WARN")
  -num-clients int
    	number of kafka clients (default 3)
  -prometheus-port int
    	the port to serve Prometheus metrics on, -1 to disable (default 8082)
  -sasl-password string
    	password for SASL authentication
  -sasl-scram
    	uses sasl scram authentication (sasl plain by default)
  -sasl-username string
    	username for SASL authentication
  -tls-client-cert-file string
    	path to the X.509 certificate file in PEM format for the client
  -tls-client-key-file string
    	path to the X.509 private key file in PEM format for the client
  -tls-server-ca-cert-file string
    	path to the X.509 certificate file in PEM format for the server certificate authority. If not specified, the host's root certificate pool will be used for server certificate verification.
  -topic string
    	the topic to consume from
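The TLS and SASL flags can be combined to benchmark a remote cluster. A sketch of such an invocation is below; the hostname and the `WARPSTREAM_SASL_USERNAME` / `WARPSTREAM_SASL_PASSWORD` environment variables are placeholders, not values the tool defines:

```shell
warpstream cli-beta benchmark-consumer \
  -topic ws-benchmark \
  -bootstrap-host kafka.example.com \
  -bootstrap-port 9092 \
  -enable-tls \
  -sasl-username "$WARPSTREAM_SASL_USERNAME" \
  -sasl-password "$WARPSTREAM_SASL_PASSWORD" \
  -consumer-group ws-benchmark-cg \
  -num-clients 3
```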

Example

The example below runs the consumer benchmark tool against a local WarpStream playground cluster with a single client.

The consumer consumes data in real time as it is produced by the producer benchmark tool.

$ warpstream cli-beta benchmark-consumer -topic ws-benchmark -num-clients 1

45784 records consumed (436.00 MiB), 9156.80 records/sec (87.33 MiB/sec), 305.785ms min e2e latency, 423.749632ms avg e2e latency, 552.51ms max e2e latency.
49570 records consumed (472.00 MiB), 9914.00 records/sec (94.55 MiB/sec), 245.006ms min e2e latency, 437.278189ms avg e2e latency, 649.385ms max e2e latency.
49070 records consumed (467.00 MiB), 9814.00 records/sec (93.59 MiB/sec), 238.257ms min e2e latency, 428.520332ms avg e2e latency, 628.591ms max e2e latency.
49550 records consumed (472.00 MiB), 9910.00 records/sec (94.51 MiB/sec), 229.308ms min e2e latency, 445.467432ms avg e2e latency, 642.138ms max e2e latency.
49663 records consumed (473.00 MiB), 9932.60 records/sec (94.72 MiB/sec), 307.591ms min e2e latency, 422.433481ms avg e2e latency, 539.169ms max e2e latency.
49697 records consumed (473.00 MiB), 9939.40 records/sec (94.79 MiB/sec), 310.196ms min e2e latency, 425.985862ms avg e2e latency, 620.136ms max e2e latency.
49678 records consumed (473.00 MiB), 9935.60 records/sec (94.75 MiB/sec), 307.071ms min e2e latency, 435.650686ms avg e2e latency, 658.087ms max e2e latency.
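Each output line follows a fixed format, so a run can be summarized offline. A minimal sketch using awk, where `benchmark.log` is a hypothetical capture of lines like the ones above:

```shell
# benchmark.log: a hypothetical capture of the tool's stdout (three sample lines).
cat > benchmark.log <<'EOF'
45784 records consumed (436.00 MiB), 9156.80 records/sec (87.33 MiB/sec), 305.785ms min e2e latency, 423.749632ms avg e2e latency, 552.51ms max e2e latency.
49570 records consumed (472.00 MiB), 9914.00 records/sec (94.55 MiB/sec), 245.006ms min e2e latency, 437.278189ms avg e2e latency, 649.385ms max e2e latency.
49070 records consumed (467.00 MiB), 9814.00 records/sec (93.59 MiB/sec), 238.257ms min e2e latency, 428.520332ms avg e2e latency, 628.591ms max e2e latency.
EOF

# Split each line on parentheses; the 4th field is the "<n> MiB/sec" throughput figure.
awk -F'[()]' '{ split($4, a, " "); sum += a[1]; n++ }
              END { printf "%.2f MiB/sec avg\n", sum / n }' benchmark.log
# → 91.82 MiB/sec avg
```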
