# Integrations

- [Arroyo](https://docs.warpstream.com/warpstream/reference/integrations/arroyo.md): This page describes how to integrate WarpStream with Arroyo, a distributed stream processing engine written in Rust that is designed to efficiently perform stateful computations on streams of data.
- [AWS Lambda Triggers](https://docs.warpstream.com/warpstream/reference/integrations/aws-lambda-triggers.md): This page explains how to use WarpStream with AWS's self-managed Apache Kafka trigger for Lambda.
- [ClickHouse](https://docs.warpstream.com/warpstream/reference/integrations/clickhouse.md): This page describes how to integrate WarpStream with ClickHouse, ingest data from WarpStream into ClickHouse, and query the data in ClickHouse.
- [Debezium](https://docs.warpstream.com/warpstream/reference/integrations/use-warpstream-with-debezium.md): Instructions on how to use WarpStream with Debezium.
- [Decodable](https://docs.warpstream.com/warpstream/reference/integrations/decodable.md): This page describes how to integrate WarpStream with Decodable, ingest data into Decodable from WarpStream, and query the data in Decodable.
- [DeltaStream](https://docs.warpstream.com/warpstream/reference/integrations/deltastream.md): Learn how to connect DeltaStream and WarpStream.
- [docker-compose](https://docs.warpstream.com/warpstream/reference/integrations/use-the-agent-in-docker-compose.md): This page describes how to configure the WarpStream Agents so they run in docker-compose.
- [DuckDB](https://docs.warpstream.com/warpstream/reference/integrations/duckdb.md): DuckDB is an open-source, column-oriented, relational database management system (RDBMS) designed for analytical processing and interactive querying.
- [ElastiFlow](https://docs.warpstream.com/warpstream/reference/integrations/elastiflow.md): ElastiFlow is a network flow and SNMP analytics/monitoring solution that enables insights into network performance and security using NetFlow, IPFIX, sFlow, and SNMP with data platforms like Kafka.
- [Estuary](https://docs.warpstream.com/warpstream/reference/integrations/estuary.md): Estuary allows you to build real-time ETL/ELT data pipelines between various platforms supported by an array of connectors.
- [Fly.io](https://docs.warpstream.com/warpstream/reference/integrations/deploy-warpstream-to-fly.io.md): Instructions on how to deploy the WarpStream Agents to Fly.io.
- [Imply](https://docs.warpstream.com/warpstream/reference/integrations/imply.md): This page describes how to integrate WarpStream with Imply, ingest data into Imply from WarpStream, and query it. Imply is powered by Apache Druid, a real-time analytics database.
- [InfluxDB](https://docs.warpstream.com/warpstream/reference/integrations/influxdb.md): InfluxDB is an open-source time series database that is a perfect companion to WarpStream's Apache Kafka-compatible clusters.
- [Kafbat UI/kafka-ui](https://docs.warpstream.com/warpstream/reference/integrations/kafbat-ui-kafka-ui.md): Kafbat UI and kafka-ui are web UIs for managing and monitoring Kafka clusters.
- [Kestra](https://docs.warpstream.com/warpstream/reference/integrations/kestra.md): This page describes how to integrate WarpStream with Kestra. Kestra is an event-driven data orchestration platform with a UI and command-line interface.
- [Materialize](https://docs.warpstream.com/warpstream/reference/integrations/materialize.md): This page describes how to set up a connection between WarpStream and Materialize, ingest data into Materialize, and create a materialized view of this data.
- [MinIO](https://docs.warpstream.com/warpstream/reference/integrations/minio.md): Instructions on integrating WarpStream with MinIO.
- [MirrorMaker](https://docs.warpstream.com/warpstream/reference/integrations/mirrormaker.md): This page describes the settings you may need to tweak to use MirrorMaker to migrate to WarpStream.
- [MotherDuck](https://docs.warpstream.com/warpstream/reference/integrations/motherduck.md): MotherDuck provides cloud-based, serverless access to DuckDB.
- [ngrok](https://docs.warpstream.com/warpstream/reference/integrations/ngrok.md): How to leverage ngrok for testing WarpStream Agents running locally.
- [Ockam](https://docs.warpstream.com/warpstream/reference/integrations/ockam.md): Instructions on integrating WarpStream with Ockam.
- [OpenTelemetry Collector](https://docs.warpstream.com/warpstream/reference/integrations/opentelemetry-collector.md): This page describes how to connect the OpenTelemetry Collector to WarpStream using the Kafka exporter.
- [Parquet](https://docs.warpstream.com/warpstream/reference/integrations/sqlite.md): Apache Parquet is an open-source, column-oriented data file format designed for efficient data storage and retrieval. It forms the backbone of many datalake and table format systems.
- [Quix Streams](https://docs.warpstream.com/warpstream/reference/integrations/quix-streams.md): This page describes how to use the Quix Streams Python library to read and aggregate data from WarpStream and ingest the aggregations into a local DuckDB database for offline querying.
- [Railway](https://docs.warpstream.com/warpstream/reference/integrations/railway.md): Instructions on how to deploy the WarpStream Agents to Railway.
- [Redpanda Console](https://docs.warpstream.com/warpstream/reference/integrations/redpanda-console.md): Redpanda Console is a web application that helps you manage and debug your Kafka workloads, as well as Kafka protocol-compatible systems such as Redpanda and WarpStream.
- [RisingWave](https://docs.warpstream.com/warpstream/reference/integrations/use-warpstream-with-risingwave.md): Instructions on how to use WarpStream with RisingWave.
- [Rockset](https://docs.warpstream.com/warpstream/reference/integrations/rockset.md): This page describes how to integrate WarpStream with Rockset, ingest data into Rockset from WarpStream, and query the data in Rockset.
- [ShadowTraffic](https://docs.warpstream.com/warpstream/reference/integrations/shadowtraffic.md): ShadowTraffic is a containerized service for declaratively generating data, packed with knobs to mimic your production traffic to Kafka-compatible and other destinations.
- [SQLite](https://docs.warpstream.com/warpstream/reference/integrations/sqlite-1.md): SQLite is an open-source, embedded, serverless RDBMS that is popular for its small size and ease of use.
- [Streambased](https://docs.warpstream.com/warpstream/reference/integrations/streambased.md): Instructions on integrating WarpStream with Streambased.
- [Streamlit](https://docs.warpstream.com/warpstream/reference/integrations/streamlit.md): Streamlit is a Python library that enables the simple creation of web apps.
- [Timeplus](https://docs.warpstream.com/warpstream/reference/integrations/timeplus.md): This page describes how to integrate WarpStream with Timeplus and run SQL on a data stream for queries, transformations, and ETL.
- [Tinybird](https://docs.warpstream.com/warpstream/reference/integrations/tinybird.md): This page describes how to integrate WarpStream with Tinybird, ingest data into Tinybird from WarpStream, and then create an API endpoint for your applications to access the result set.
- [Upsolver](https://docs.warpstream.com/warpstream/reference/integrations/upsolver.md): This page describes how to integrate WarpStream with Upsolver, ingest data into Upsolver from WarpStream, then process and write the data to one of the available targets in Upsolver.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.warpstream.com/warpstream/reference/integrations.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
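The request described above can be issued from any HTTP client. A minimal sketch in Python using only the standard library (the question string below is just an illustrative example, not a required value):

```python
from urllib.parse import urlencode

# Base URL of the current documentation page.
BASE = "https://docs.warpstream.com/warpstream/reference/integrations.md"

def build_ask_url(question: str) -> str:
    # URL-encode the natural-language question into the `ask` query parameter.
    return f"{BASE}?{urlencode({'ask': question})}"

url = build_ask_url("How do I configure the WarpStream Agents in docker-compose?")
print(url)

# To actually perform the GET request (requires network access):
# import urllib.request
# with urllib.request.urlopen(url) as resp:
#     print(resp.read().decode())
```

Note that the question must be URL-encoded, which `urlencode` handles (spaces become `+`, `?` becomes `%3F`, and so on).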
