# Buster Warehouse

## Overview
In working with our customers, we found that Snowflake, BigQuery, and other warehouse solutions were prohibitively expensive or too slow for deploying AI-powered analytics at scale.

Additionally, we found that a close integration between the data warehouse and our AI-native BI tool allows for a better and more reliable data experience.
## Key Features
- **Built on StarRocks:** We felt that StarRocks was the best query engine by default for our use case. The main thing that pushed us toward it is that it performs predicate pushdown on Iceberg tables, whereas ClickHouse and DuckDB do not (see the example after this list). We were also impressed by StarRocks' performance, caching system, and flexibility.
- **Built on Apache Iceberg:** Some of the top companies in the world use Apache Iceberg for storing and interacting with their data. We wanted a table format that not only brings tremendous benefits, but one that companies won't outgrow.
- **Bring Your Own Storage:** We felt that customers should own their data and not be locked into a particular storage engine.
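As a rough illustration of why predicate pushdown matters, consider a filtered query against the `nyc_taxi` Iceberg table used in the quickstart below. With pushdown, StarRocks hands the filter to Iceberg and reads only the data files whose partition and column statistics can match it, rather than scanning the whole table:

```sql
-- Only Iceberg data files whose metadata can satisfy the filter are read.
SELECT COUNT(*)
FROM public.nyc_taxi
WHERE tpep_pickup_datetime >= '2022-03-01';
```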
## Quickstart
1. Dependencies:
   - Make sure that you have Docker Engine installed.
   - Install Python if you haven't already.
   - Install a MySQL client on your system.
   - An AWS account with S3 access.
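   You can sanity-check the first three with (assuming `python3` and the stock `mysql` client are on your PATH):

   ```sh
   docker --version
   python3 --version
   mysql --version
   ```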
2. Clone the repository:

   ```sh
   git clone https://github.com/buster-so/warehouse.git
   ```
3. Run the warehouse:

   ```sh
   docker compose up -d
   ```
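   To confirm the containers came up cleanly:

   ```sh
   docker compose ps
   ```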
4. Populate the `.env` file with AWS credentials provisioned for S3 access. Note: you can use any S3-compatible storage; you may just need to tweak some of the configs. Feel free to look at the StarRocks docs or PyIceberg docs for more information.
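   A minimal sketch of the shape of that file; the variable names here are illustrative, so check `docker-compose.yml` for the exact keys it expects:

   ```sh
   # Illustrative names only -- match them to what docker-compose.yml reads.
   AWS_ACCESS_KEY_ID=<ACCESS_KEY>
   AWS_SECRET_ACCESS_KEY=<SECRET_KEY>
   AWS_REGION=<REGION>
   ```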
5. Connect to the warehouse with any MySQL client.
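   For example, with the stock `mysql` CLI, assuming `docker-compose.yml` exposes the StarRocks FE query port (9030 by default) on localhost and the default passwordless `root` user:

   ```sh
   mysql -h 127.0.0.1 -P 9030 -u root
   ```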
6. Create the external catalog:

   ```sql
   CREATE EXTERNAL CATALOG 'public'
   PROPERTIES
   (
       "type" = "iceberg",
       "iceberg.catalog.type" = "rest",
       "iceberg.catalog.uri" = "http://iceberg-rest:8181",
       "iceberg.catalog.warehouse" = "<BUCKET_NAME>",
       "aws.s3.access_key" = "<ACCESS_KEY>",
       "aws.s3.secret_key" = "<SECRET_KEY>",
       "aws.s3.region" = "<REGION>",
       "aws.s3.enable_path_style_access" = "true",
       "client.factory" = "com.starrocks.connector.iceberg.IcebergAwsClientFactory"
   );
   ```
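   To verify the catalog was registered, this should list `public` alongside the built-in `default_catalog`:

   ```sql
   SHOW CATALOGS;
   ```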
7. Seed the data. If you want to populate a table with 75m records, you can run the notebook found here.
8. Set the catalog:

   ```sql
   SET CATALOG 'public';
   ```
9. Set the database:

   ```sql
   USE DATABASE 'public';
   ```
10. Run a query:

    ```sql
    SELECT COUNT(*) FROM public.nyc_taxi;
    ```
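    If the seed notebook ran to completion, this should return roughly the 75m rows it loaded.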
## Optimizations
For data that you think will be accessed frequently, you can cache it on disk for faster access with:
```sql
CACHE SELECT * FROM public.nyc_taxi WHERE tpep_pickup_datetime > '2022-03-01';
```
## Deployment on AWS
WIP
## Shoutouts
The documentation from the StarRocks, Iceberg, and PyIceberg teams has been very helpful in building this project.