# Integrations

- [Arroyo](/warpstream/reference/integrations/arroyo.md): This page describes how to integrate WarpStream with Arroyo, a distributed stream processing engine written in Rust that is designed to efficiently perform stateful computations on streams of data.
- [AWS Lambda Triggers](/warpstream/reference/integrations/aws-lambda-triggers.md): This page explains how to use WarpStream with AWS's self-managed Apache Kafka trigger for Lambda.
- [ClickHouse](/warpstream/reference/integrations/clickhouse.md): This page describes how to integrate WarpStream with ClickHouse, ingest data from WarpStream into ClickHouse, and query the data in ClickHouse.
- [Debezium](/warpstream/reference/integrations/use-warpstream-with-debezium.md): Instructions on how to use WarpStream with Debezium.
- [Decodable](/warpstream/reference/integrations/decodable.md): This page describes how to integrate WarpStream with Decodable, ingest data into Decodable from WarpStream, and query the data in Decodable.
- [DeltaStream](/warpstream/reference/integrations/deltastream.md): Learn how to connect DeltaStream and WarpStream.
- [docker-compose](/warpstream/reference/integrations/use-the-agent-in-docker-compose.md): This page describes how to configure the WarpStream Agents so they run in docker-compose.
- [DuckDB](/warpstream/reference/integrations/duckdb.md): DuckDB is an open-source, column-oriented, relational database management system (RDBMS) designed for analytical processing and interactive querying.
- [ElastiFlow](/warpstream/reference/integrations/elastiflow.md): ElastiFlow is a network flow and SNMP analytics/monitoring solution that enables insights into network performance and security using NetFlow, IPFIX, sFlow, and SNMP with data platforms like Kafka.
- [Estuary](/warpstream/reference/integrations/estuary.md): Estuary allows you to build real-time ETL/ELT data pipelines between various platforms supported by an array of connectors.
- [Fly.io](/warpstream/reference/integrations/deploy-warpstream-to-fly.io.md): Instructions on how to deploy the WarpStream Agents to Fly.io.
- [Imply](/warpstream/reference/integrations/imply.md): This page describes how to integrate WarpStream with Imply, ingest data into Imply from WarpStream, and query the data in Imply. Imply is powered by Apache Druid, a real-time analytics database.
- [InfluxDB](/warpstream/reference/integrations/influxdb.md): InfluxDB is an open-source time series database that is a perfect companion to WarpStream's Apache Kafka-compatible clusters.
- [Kafbat UI/kafka-ui](/warpstream/reference/integrations/kafbat-ui-kafka-ui.md): Kafbat UI and kafka-ui are web UIs for managing and monitoring Kafka clusters.
- [Kestra](/warpstream/reference/integrations/kestra.md): This page describes how to integrate WarpStream with Kestra. Kestra is an event-driven data orchestration platform with a UI and command-line interface.
- [Materialize](/warpstream/reference/integrations/materialize.md): This page describes how to set up a connection between WarpStream and Materialize, ingest data into Materialize, and create a materialized view of this data.
- [MinIO](/warpstream/reference/integrations/minio.md): Instructions on integrating WarpStream with MinIO.
- [MirrorMaker](/warpstream/reference/integrations/mirrormaker.md): This page describes the settings you may need to tweak to use MirrorMaker to migrate to WarpStream.
- [MotherDuck](/warpstream/reference/integrations/motherduck.md): MotherDuck provides cloud-based, serverless access to DuckDB.
- [ngrok](/warpstream/reference/integrations/ngrok.md): How to leverage ngrok for testing WarpStream Agents running locally.
- [Ockam](/warpstream/reference/integrations/ockam.md): Instructions on integrating WarpStream with Ockam.
- [OpenTelemetry Collector](/warpstream/reference/integrations/opentelemetry-collector.md): This page describes how to connect the OpenTelemetry Collector to WarpStream using the Kafka exporter.
- [Parquet](/warpstream/reference/integrations/sqlite.md): Apache Parquet is an open-source, column-oriented data file format designed for efficient data storage and retrieval. It forms the backbone of many datalake and table format systems.
- [Quix Streams](/warpstream/reference/integrations/quix-streams.md): This page describes how to use the Quix Streams Python library to read and aggregate data from WarpStream and ingest the aggregations into a local DuckDB database for offline querying.
- [Railway](/warpstream/reference/integrations/railway.md): Instructions on how to deploy the WarpStream Agents to Railway.
- [Redpanda Console](/warpstream/reference/integrations/redpanda-console.md): Redpanda Console is a web application that helps you manage and debug your Kafka workloads, as well as Kafka protocol-compatible systems such as Redpanda and WarpStream.
- [RisingWave](/warpstream/reference/integrations/use-warpstream-with-risingwave.md): Instructions on how to use WarpStream with RisingWave.
- [Rockset](/warpstream/reference/integrations/rockset.md): This page describes how to integrate WarpStream with Rockset, ingest data into Rockset from WarpStream, and query the data in Rockset.
- [ShadowTraffic](/warpstream/reference/integrations/shadowtraffic.md): ShadowTraffic is a containerized service for declaratively generating data, packed with knobs to mimic your production traffic to Kafka-compatible and other destinations.
- [SQLite](/warpstream/reference/integrations/sqlite-1.md): SQLite is an open-source, embedded, serverless RDBMS that is popular for its small size and ease of use.
- [Streambased](/warpstream/reference/integrations/streambased.md): Instructions on integrating WarpStream with Streambased.
- [Streamlit](/warpstream/reference/integrations/streamlit.md): Streamlit is a Python library that enables the simple creation of web apps.
- [Timeplus](/warpstream/reference/integrations/timeplus.md): This page describes how to integrate WarpStream with Timeplus and run SQL on a data stream for queries, transformations, and ETL.
- [Tinybird](/warpstream/reference/integrations/tinybird.md): This page describes how to integrate WarpStream with Tinybird, ingest data into Tinybird from WarpStream, and then create an API endpoint for your applications to access the result set.
- [Upsolver](/warpstream/reference/integrations/upsolver.md): This page describes how to integrate WarpStream with Upsolver, ingest data into Upsolver from WarpStream, then process and write the data to one of the available targets in Upsolver.
