ParadeDB
ParadeDB is an Elasticsearch alternative built on Postgres for real-time search and analytics. It is supported on all officially supported Postgres versions and ships by default with Postgres 16.
A video walkthrough can be found below:
WarpStream is Apache Kafka compatible, and ParadeDB is a Postgres extension. There is no direct connection between Kafka and Postgres, but data can be moved between them with a number of third-party tools, some more complex and bloated than others. For this illustration, we will use the open-source pipeline tool Bento. Bento is written in Go and is simple to install and use as a single binary. Its pipeline scripts are written in YAML, which we will cover below.
Have Bento installed (covered below).
Have ParadeDB installed (covered below).
A WarpStream account - get access to WarpStream by registering here.
WarpStream credentials.
A Serverless WarpStream cluster up and running with a populated topic.
Bento is the open-source pipeline tool we will use to read from WarpStream and write to Postgres. It can be installed from source, as a binary, or as a Docker container. Visit GitHub for the instructions that best fit your situation.
ParadeDB can be installed and started with the following Docker command:
docker run --name paradedb -e POSTGRES_PASSWORD=password -p 5432:5432 paradedb/paradedb
This script expects that you have exported your WarpStream credentials as environment variables. WarpStream will provide this command when you create a credential:
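As a hedged sketch (the variable names below are placeholders chosen to match the pipeline example later in this walkthrough, not necessarily the names the WarpStream console emits), the exports look something like this:

# Placeholder values; use the exact export command shown in the WarpStream console
# when you create the credential.
export WARPSTREAM_USERNAME="<sasl-username>"
export WARPSTREAM_PASSWORD="<sasl-password>"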
The Bento script performs the following actions (a sketch of the full script is shown after this list):
Configure the connection information to WarpStream.
Set the consumer group to bento_kafka.
Read from the topic products.
The driver field tells Bento that the output database is Postgres.
The dsn field configures the connection information to Postgres. Modify the names and passwords as needed for your environment.
The script defines the table layout expected from the JSON messages in the topic, creates the table if it doesn't exist, and then writes the data. This could be further abstracted to dynamically derive the field names from the JSON and create them in the table.
There is a commented-out block that will print debugging information to the terminal.
Once run, the script will continue until stopped with ctrl+c.
Note: This script can easily be modified to handle multiple topics; just add each topic as a separate line after - products and add a matching - check entry for it under cases: in the output section.
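Here is a minimal sketch of such a pipeline (a sketch under stated assumptions, not the exact script). It assumes a Serverless WarpStream cluster reachable over TLS with SASL/PLAIN, credentials exported as WARPSTREAM_USERNAME and WARPSTREAM_PASSWORD, the ParadeDB container from above listening on localhost:5432, and products messages carrying name, description, and price fields; adjust the broker address, DSN, and columns to match your environment:

input:
  kafka_franz:
    seed_brokers:
      # Replace with the bootstrap URL of your WarpStream cluster.
      - "<your-cluster>.warpstream.com:9092"
    topics:
      - products
    consumer_group: bento_kafka
    tls:
      enabled: true
    sasl:
      - mechanism: PLAIN
        username: "${WARPSTREAM_USERNAME}"
        password: "${WARPSTREAM_PASSWORD}"

output:
  switch:
    cases:
      - check: 'metadata("kafka_topic") == "products"'
        output:
          sql_insert:
            driver: postgres
            dsn: postgres://postgres:password@localhost:5432/postgres?sslmode=disable
            table: products
            columns: [ name, description, price ]
            args_mapping: 'root = [ this.name, this.description, this.price ]'
            # Create the table on first connection; id is generated by Postgres
            # and is used later as the key_field of the BM25 index.
            init_statement: |
              CREATE TABLE IF NOT EXISTS products (
                id SERIAL PRIMARY KEY,
                name TEXT,
                description TEXT,
                price NUMERIC
              );
      # For debugging, temporarily replace the sql_insert output above with:
      #   stdout: {}

Save this as myscript.yaml, the name referenced in the run command below.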
From the command line, Bento can be run as follows:
bento -c myscript.yaml
Once you have data in ParadeDB, you can connect to it with the following Docker command:
docker exec -it paradedb psql -U postgres -p 5432
We'll include some basic commands here based on the sample data described above. For thorough documentation on ParadeDB, refer to their website.
First, create a BM25 index. BM25 significantly improves the ranking capabilities of native Postgres full-text search, which does not maintain statistics for word frequencies across the entire word corpus.
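As a sketch, assuming the products table from the pipeline above (with an id key column) and a recent ParadeDB release (older releases used a paradedb.create_bm25() procedure instead, so check the ParadeDB docs for your version), the index and a test query might look like this; the index name and the search term 'keyboard' are illustrative:

-- Build a BM25 index over the searchable columns; key_field must be a unique column.
CREATE INDEX products_search_idx ON products
USING bm25 (id, name, description)
WITH (key_field = 'id');

-- A simple full-text query using ParadeDB's @@@ search operator.
SELECT name, description, price
FROM products
WHERE description @@@ 'keyboard'
LIMIT 5;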
If this simple query works, then it means the index was created successfully.
Congratulations! You now know how to use Bento to create powerful pipelines from a WarpStream cluster to ParadeDB and make use of its advanced indexing.