ElastiFlow

ElastiFlow is a network flow and SNMP analytics/monitoring solution that enables insights into network performance and security using NetFlow, IPFIX, sFlow, and SNMP with data platforms like Kafka.


Last updated 10 months ago


Prerequisites

This guide assumes you are already familiar with (or are currently using) ElastiFlow, and covers only what you need to configure on the WarpStream side. You'll need:

  1. A WarpStream account - get access to WarpStream by registering here.

  2. A WarpStream cluster that is up and running.

Step 1: Retrieve the WarpStream Bootstrap Broker URL and SASL credentials

Obtain the Bootstrap Broker URL from the WarpStream console by navigating to your cluster and clicking the Connect tab. If you don't have SASL credentials yet, you can also create a set of credentials from the console.

Save these values, as you will need them in the next step:

export BOOTSTRAP_HOST=<YOUR_BOOTSTRAP_BROKER> \
SASL_USERNAME=<YOUR_SASL_USERNAME> \
SASL_PASSWORD=<YOUR_SASL_PASSWORD>;
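If you come back to this setup later, it is easy to lose track of which values are still exported. A small, hedged helper to confirm that all three variables from the export above are set before you edit any config (the variable names are the only assumption carried over from this guide):

```shell
# Verify the three connection values from Step 1 are exported before
# editing flowcoll.yml. Variable names match the export block above.
check_warpstream_env() {
  missing=0
  for v in BOOTSTRAP_HOST SASL_USERNAME SASL_PASSWORD; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return "$missing"
}

check_warpstream_env || echo "re-run the export from Step 1 first"
```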

Step 2: Update the ElastiFlow YAML config

Open the following file in your favorite editor, making sure you have the required permissions to edit it:

/etc/elastiflow/flowcoll.yml

Next, uncomment the following three fields at the top of the file:

EF_ACCOUNT_ID: "YOUR_ACCOUNT_ID"
#EF_FLOW_LICENSED_UNITS: 0
EF_FLOW_LICENSE_KEY: "<YOUR_LICENSE_KEY>"
EF_LICENSE_ACCEPTED: "true"

Fill in your ElastiFlow Account ID and License Key, and set EF_LICENSE_ACCEPTED to "true".

Next, search for the string "KAFKA" to find the Kafka output section of the file. The example below shows the available options; the uncommented fields are the minimum you need to fill in.

#EF_OUTPUT_KAFKA_ALLOWED_RECORD_TYPES: as_path_hop,flow_option,flow,ifa_hop,telemetry
EF_OUTPUT_KAFKA_BROKERS: <YOUR_BOOTSTRAP_BROKER:PORT>
#EF_OUTPUT_KAFKA_CLIENT_ID: elastiflow-flowcoll
#EF_OUTPUT_KAFKA_DROP_FIELDS: ""
#EF_OUTPUT_KAFKA_ECS_ENABLE: "false"
EF_OUTPUT_KAFKA_ENABLE: "true"
#EF_OUTPUT_KAFKA_FLAT_RECORD_ENABLE: "true"
#EF_OUTPUT_KAFKA_PARTITION_KEY: flow.export.ip.addr
#EF_OUTPUT_KAFKA_PRODUCER_COMPRESSION: 3
#EF_OUTPUT_KAFKA_PRODUCER_COMPRESSION_LEVEL: -1000
#EF_OUTPUT_KAFKA_PRODUCER_FLUSH_BYTES: 1000000
#EF_OUTPUT_KAFKA_PRODUCER_FLUSH_FREQUENCY: 1000
#EF_OUTPUT_KAFKA_PRODUCER_FLUSH_MAX_MESSAGES: 0
#EF_OUTPUT_KAFKA_PRODUCER_FLUSH_MESSAGES: 1024
#EF_OUTPUT_KAFKA_PRODUCER_MAX_MESSAGE_BYTES: 1000000
#EF_OUTPUT_KAFKA_PRODUCER_REQUIRED_ACKS: 1
#EF_OUTPUT_KAFKA_PRODUCER_RETRY_BACKOFF: 100
#EF_OUTPUT_KAFKA_PRODUCER_RETRY_MAX: 3
#EF_OUTPUT_KAFKA_PRODUCER_TIMEOUT: 10
#EF_OUTPUT_KAFKA_RACK_ID: ""
#EF_OUTPUT_KAFKA_RECORD_TYPE_TOPICS_ENABLE: "false"
EF_OUTPUT_KAFKA_SASL_ENABLE: "true"
EF_OUTPUT_KAFKA_SASL_PASSWORD: "<YOUR_SASL_PASSWORD>"
EF_OUTPUT_KAFKA_SASL_USERNAME: "<YOUR_SASL_USERNAME>"
#EF_OUTPUT_KAFKA_TIMEOUT: 30
#EF_OUTPUT_KAFKA_TIMESTAMP_SOURCE: collect
#EF_OUTPUT_KAFKA_TLS_CA_CERT_FILEPATH: ""
#EF_OUTPUT_KAFKA_TLS_CERT_FILEPATH: ""
#EF_OUTPUT_KAFKA_TLS_ENABLE: "false"
#EF_OUTPUT_KAFKA_TLS_KEY_FILEPATH: ""
#EF_OUTPUT_KAFKA_TLS_SKIP_VERIFICATION: "false"
EF_OUTPUT_KAFKA_TOPIC: elastiflow-flow-codex
#EF_OUTPUT_KAFKA_TOPIC_VERSION: 1.0
#EF_OUTPUT_KAFKA_VERSION: 1.0.0
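You can fill in the placeholders by hand, or script the substitution with the environment variables saved in Step 1. A hedged sketch using sed is below; it runs against a scratch file by default so nothing is overwritten by accident, and the sample lines and fallback values are illustrative only:

```shell
# Fill the Kafka placeholders with the Step 1 values. Runs against a scratch
# file by default so the real config is not clobbered; set
# CONF=/etc/elastiflow/flowcoll.yml (and run with sudo) to edit the real file.
# Values containing '|' would need a different sed delimiter.
if [ -z "${CONF:-}" ]; then
  CONF="$(mktemp)"
  cat > "$CONF" <<'EOF'
EF_OUTPUT_KAFKA_BROKERS: <YOUR_BOOTSTRAP_BROKER:PORT>
EF_OUTPUT_KAFKA_SASL_USERNAME: "<YOUR_SASL_USERNAME>"
EF_OUTPUT_KAFKA_SASL_PASSWORD: "<YOUR_SASL_PASSWORD>"
EOF
fi
sed -i \
  -e "s|<YOUR_BOOTSTRAP_BROKER:PORT>|${BOOTSTRAP_HOST:-localhost:9092}|" \
  -e "s|<YOUR_SASL_USERNAME>|${SASL_USERNAME:-demo-user}|" \
  -e "s|<YOUR_SASL_PASSWORD>|${SASL_PASSWORD:-demo-pass}|" \
  "$CONF"
cat "$CONF"
```

Note that GNU sed is assumed here (`sed -i` with no backup suffix); on other platforms the in-place flag differs.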

Step 3: Configure your consumers

Now that ElastiFlow is configured to produce to WarpStream, your downstream consumers must connect to the same broker and topic using the same security credentials.
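One quick way to confirm that records are flowing end to end is to read a few off the topic with a stock Kafka client. A hedged sketch using kcat is below; the SASL_SSL/PLAIN settings are assumptions, so match them to your cluster's listener configuration, and the topic name comes from the EF_OUTPUT_KAFKA_TOPIC value above:

```shell
# Read 5 records from the ElastiFlow topic and exit. Assumes a TLS listener
# with SASL/PLAIN; adjust security.protocol and sasl.mechanisms to match
# your WarpStream cluster. Uses the env vars exported in Step 1.
kcat -C \
  -b "$BOOTSTRAP_HOST" \
  -t elastiflow-flow-codex \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username="$SASL_USERNAME" \
  -X sasl.password="$SASL_PASSWORD" \
  -o beginning -c 5 -e
```

This is a command template rather than a runnable test, since it requires a live cluster with flow data already produced to it.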

Next steps

WarpStream is now part of your ElastiFlow environment and can be used as your Kafka-compatible pipe.
