# The Buster Platform
A modern analytics platform for AI-powered data applications
## What is Buster?
Buster is a modern analytics platform built from the ground up with AI in mind.
We've spent the last two years working with companies to help them implement Large Language Models in their data stack. This work has mainly revolved around truly self-serve experiences powered by Large Language Models. We've noticed a few pain points with the tools available today:
- Slapping an AI copilot on top of existing BI tools often results in a subpar experience for users. To deploy a powerful analytics experience, we believe the entire app needs to be built from the ground up with AI in mind.
- Most organizations can't deploy ad-hoc, self-serve experiences for their users because their warehousing costs and performance are prohibitive. We believe that new storage formats like Apache Iceberg and query engines like StarRocks and DuckDB have the potential to change data warehousing and make it accessible for the kinds of workloads that come with AI-powered analytics experiences.
- The current CI/CD process for most analytics stacks struggles to keep up with changes, often resulting in broken dashboards, slow query performance, and other issues. Introducing hundreds, if not thousands, of user queries generated by Large Language Models can exacerbate these issues and make the stack nearly impossible to maintain. We believe there is a huge opportunity to rethink how Large Language Models can improve this process, with workflows around self-healing, model suggestions, and more.
- Current tools lack tooling and workflows built around augmenting data teams. They are designed to let analysts keep working as they did before, rather than helping them build powerful data experiences for their users. We believe that instead of spending hours building unfulfilling dashboards, data teams should be empowered to build powerful, self-serve experiences for their users.
Ultimately, we believe that the future of AI analytics is about helping data teams build powerful, self-serve experiences for their users. We think that requires a new approach to the analytics stack: one that allows deep integrations between products and lets data teams truly own their entire experience.
## Roadmap
Currently, we are in the process of open-sourcing the platform. This includes:
- Warehouse ✅
- BI platform ⏰
After that, we will release an official roadmap.
## How We Plan to Make Money
Currently, we offer a few commercial products:
- Cloud-Hosted Versions
  - Warehouse
    - Cluster
    - Serverless
  - BI Platform
- Warehouse
  - Managed Self-Hosted Version of the Warehouse product
## Support and feedback
You can contact us through either:
- GitHub Discussions
- Email: founders at buster dot com
## License
This repository is MIT licensed, except for the `ee` folders. See LICENSE for more details.