InfluxDB

InfluxDB is an open-source time series database that is a perfect companion to WarpStream's Apache Kafka-compatible clusters.


A video walkthrough of this integration is also available.

Prerequisites

  1. WarpStream account - get access to WarpStream by registering here.
  2. InfluxDB account - get access to InfluxDB by registering here.
  3. A WarpStream cluster is up and running.
  4. InfluxDB Telegraf is installed locally.

Step 1: Get WarpStream data

Before you get started, collect and save the following pieces of information from WarpStream; you will need them in InfluxDB:

  • Bootstrap broker

  • SASL Username

  • SASL Password

  • Topic

The first three items can be obtained by creating a new credential for your cluster in the WarpStream web UI.
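
Before wiring up InfluxDB, you can sanity-check these values (and produce a few test records into the topic) with a standard Kafka client such as kcat. A minimal sketch, with hypothetical placeholder values in place of your broker and credentials:

kcat -b your-cluster.cloud.warpstream.com:9092 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanism=PLAIN \
  -X sasl.username="YOUR_SASL_USERNAME" \
  -X sasl.password="YOUR_SASL_PASSWORD" \
  -t sensors -P

Type a few lines of input and press Ctrl-D to produce them to the sensors topic.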

Step 2: Configure a Kafka source in InfluxDB

Starting in the InfluxDB UI, select the up arrow icon in the left navigation to see the available sources. There are many options, so type Kafka into the search box to narrow them down. Three results appear; for this walkthrough we want the middle one, labeled Kafka Consumer.

On the next screen, select Use this plugin and then Create a new configuration.

Give the configuration a name. For this example we will create a new bucket, so select that option.

Creating a bucket presents a few options. For this example, name the bucket sensors to match the topic feeding it.
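
If you prefer the command line, the same bucket can also be created with the influx CLI; a sketch, assuming the CLI is installed and authenticated against your InfluxDB instance:

influx bucket create --name sensors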

Step 3: Editing the Telegraf script

InfluxDB is tightly coupled with Telegraf, its data collection agent. After you select CREATE at the end of the previous step, you'll be presented with a Telegraf script to update to match your needs. There are several areas to modify. Starting at the top, replace the value of brokers with the Bootstrap broker you saved in Step 1, and replace the default telegraf value in topics with the Topic from Step 1.

Next, find the SASL section and uncomment sasl_username and sasl_password, replacing their values with the ones you saved in Step 1. Then uncomment sasl_mechanism and set its value to PLAIN.

Just after sasl_mechanism, add the following two lines:

enable_tls = true
version = "1.0.0"

Then select SAVE & TEST at the bottom.
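
Putting the edits together, the relevant portion of the kafka_consumer input should look roughly like the sketch below; the broker address, credentials, and topic are hypothetical placeholders, so copy your own values from Step 1:

[[inputs.kafka_consumer]]
  ## The Bootstrap broker saved in Step 1.
  brokers = ["your-cluster.cloud.warpstream.com:9092"]
  ## The Topic saved in Step 1 (replaces the default "telegraf").
  topics = ["sensors"]
  ## SASL credentials created in the WarpStream web UI.
  sasl_username = "YOUR_SASL_USERNAME"
  sasl_password = "YOUR_SASL_PASSWORD"
  sasl_mechanism = "PLAIN"
  ## The two lines added just after sasl_mechanism.
  enable_tls = true
  version = "1.0.0"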

Step 4: Run Telegraf

The final setup screen lists the commands needed to run Telegraf.

Save the commands shown in steps 2 and 3 of that screen, then paste them into a terminal where Telegraf is installed. The key thing to understand is that the API token authorizes Telegraf to fetch its configuration from the URL in the start command, which points at the script you just finished configuring. Because Telegraf loads the script from InfluxDB, editing it there later will alter Telegraf's behavior.
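
The commands follow this general shape; the token, host, and configuration ID below are placeholders, so copy the exact values from your own setup screen:

export INFLUX_TOKEN=<your API token>
telegraf --config https://<your-influxdb-host>/api/v2/telegrafs/<configuration-id>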

With Telegraf running, the data in your WarpStream topic will flow into InfluxDB, where you can work with it using InfluxDB's broad range of tools.

Next Steps

Congratulations! You've set up a Telegraf script to pipe data from WarpStream to InfluxDB.

Next, check out the WarpStream docs for configuring the WarpStream Agent, or review the InfluxDB docs to learn more about what is possible with WarpStream and InfluxDB!