Buster is an open-source platform for deploying AI data analysts


The Buster Platform

A modern analytics platform for AI-powered data applications


What is Buster?

Buster is a modern analytics platform built from the ground up with AI in mind.

We've spent the last two years working with companies to help them implement Large Language Models in their data stacks. This work has mainly revolved around truly self-serve experiences powered by those models. Along the way, we've noticed a few pain points with the tools available today:

  1. Slapping an AI copilot on top of an existing BI tool often results in a subpar experience for users. To deliver a powerful analytics experience, we believe the entire app needs to be built from the ground up with AI in mind.
  2. Most organizations can't deploy ad-hoc, self-serve experiences for their users because their warehousing costs and performance are prohibitive. We believe that new storage formats like Apache Iceberg and query engines like StarRocks and DuckDB have the potential to change data warehousing and make it accessible for the kinds of workloads that come with AI-powered analytics experiences (see the sketch after this list).
  3. The CI/CD process for most analytics stacks struggles to keep up with changes, often resulting in broken dashboards, slow queries, and other issues. Introducing hundreds, if not thousands, of user queries generated by Large Language Models exacerbates these problems and can make the stack nearly impossible to maintain. We believe there is a huge opportunity to rethink how Large Language Models can improve this process, with workflows around self-healing, model suggestions, and more.
  4. Current tools lack workflows built around augmenting data teams. They are designed to let analysts keep working as they always have, rather than helping them build powerful data experiences for their users. We believe that instead of spending hours building out unfulfilling dashboards, data teams should be empowered to build powerful, self-serve experiences for their users.
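
The snippet below is an illustrative sketch, not part of Buster itself, of the kind of cheap, ad-hoc exploration that an in-process engine like DuckDB enables over open storage formats. The file path and column names are hypothetical; the only assumption is the standard DuckDB Python API.

```python
# Hypothetical example: answering an ad-hoc question by scanning open-format
# files directly with DuckDB, no warehouse cluster required. The path and
# columns below are made up for illustration and are not part of Buster.
import duckdb

result = duckdb.sql(
    """
    SELECT product_category,
           SUM(revenue) AS total_revenue
    FROM 'sales/*.parquet'  -- DuckDB scans the Parquet files in place
    GROUP BY product_category
    ORDER BY total_revenue DESC
    LIMIT 10
    """
).fetchall()

print(result)
```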

Ultimately, we believe that the future of AI analytics is about helping data teams build powerful, self-serve experiences for their users. We think that requires a new approach to the analytics stack: one that allows for deep integrations between products and lets data teams truly own their entire experience.

Roadmap

Currently, we are in the process of open-sourcing the platform.

After that, we will release an official roadmap.

How We Plan to Make Money

Currently, we offer a few commercial products:

  • Cloud-Hosted Versions
    • Warehouse
      • Cluster
      • Serverless
    • BI Platform
  • Managed Self-Hosted Version of the Warehouse product.

Support and feedback

You can contact us through either:

License

This repository is MIT licensed, except for the ee folders. See LICENSE for more details.