Deployment (#397)

* Mastra braintrust (#391)

* type fixes

* biome clean on ai

* add user to flag chat

* attempt to get vercel deployed

* Update tsup.config.ts

* Update pnpm-lock.yaml

* Add @buster/server2 Hono API app with Vercel deployment configuration

* slack oauth integration

* mainly some clean up and biome formatting

* slack oauth

* slack migration + snapshot

* remove unused files

* finalized docker image for porter

* Create porter_app_buster-server_3155.yml file

* Add integration tests for Slack handler and refactor Slack OAuth service

- Introduced integration tests for the Slack handler, covering OAuth initiation, callback handling, and integration status retrieval.
- Refactored Slack OAuth service to improve error handling and ensure proper integration state management.
- Updated token storage implementation to use a database vault instead of Supabase.
- Enhanced existing tests for better coverage and reliability, including cleanup of test data.
- Added new utility functions for managing vault secrets in the database.
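As a rough illustration of the vault-backed token storage described above, helpers along these lines could write and read secrets through the database. This is a sketch only, assuming a postgres-js client and the Supabase Vault SQL functions (`vault.create_secret`, `vault.decrypted_secrets`); the helper names and schema are assumptions, not the actual implementation.

```typescript
import postgres from 'postgres';

// Assumes DATABASE_URL points at a Postgres database with the Supabase Vault
// extension installed; vault.create_secret and vault.decrypted_secrets are the
// Vault defaults, but the surrounding helper names are made up for this sketch.
const sql = postgres(process.env.DATABASE_URL ?? '');

// Store a secret in the database vault and return its id.
export async function createVaultSecret(name: string, secret: string): Promise<string> {
  const rows = await sql<{ id: string }[]>`
    select vault.create_secret(${secret}, ${name}) as id
  `;
  return rows[0].id;
}

// Read a secret back by name from the decrypted view.
export async function getVaultSecret(name: string): Promise<string | null> {
  const rows = await sql<{ decrypted_secret: string }[]>`
    select decrypted_secret
    from vault.decrypted_secrets
    where name = ${name}
    limit 1
  `;
  return rows[0]?.decrypted_secret ?? null;
}
```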

* docker image update

* new prompts

* individual tests and a schema fix

* server build

* final working dockerfile

* Update Dockerfile

* new messages to slack messages (#369)

* Update dockerfile

* Update validate-env.js

* update build pipeline

* Update the dockerfile flow

* finalize logging for pino

* stable base

* Update cors middleware logger

* Update cors.ts

* update docker to be more informative

* Update index.ts

* Update auth.ts

* Update cors.ts

* Update cors.ts

* Update logger.ts

* remove logs

* more cors updates

* build server shared

* Refactor PostgreSQL credentials handling and remove unused memory storage. Update package dependencies. (#370)

* tons of file parsing errors (#371)

* Refactor PostgreSQL credentials handling and remove unused memory storage. Update package dependencies.

* tons of file parsing errors

* Dev mode updates

* more stable electric handler

* Dal/agent-self-healing-fixes (#372)

* change to 6 min

* optimizations around saving and non-blocking actions.

* stream optimizations

* Dal/agent-self-healing-fixes (#373)

* change to 6 min

* optimizations around saving and non-blocking actions.

* stream optimizations

* change porter staging deploy to mastra-braintrust.

* new path for porter deploy

* deploy to staging fix

* Create porter_app_mastra-braintrust-api_3155.yml file (#375)

Co-authored-by: porter-deployment-app[bot] <87230664+porter-deployment-app[bot]@users.noreply.github.com>

* Update sizing and opacity

* soup up the instance for mastra

* environment staging

* ssl script

* copy path

* Update list padding

* no throttle and the anthropic cached

* move select to the top

* Update margin inline start

* shrink reasoning vertical space to 2px

* semi bold font for headers

* update animation timing

* haiku

* Add createTodoList tool and integrate into create-todos-step

* chat helper on post chat

* only trigger cicd when change made

* Started creating streaming text components

* Refactor analyst agent task to initialize Braintrust logging asynchronously and parallelize database queries for improved performance. Adjusted cleanup timeout for Braintrust traces to reduce delays.
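A minimal sketch of the pattern described above, with stub lookups standing in for the real Braintrust and database calls (all names are illustrative):

```typescript
// Stub lookups standing in for the real Braintrust/database calls.
const initBraintrustLogger = async () => ({ traceId: 'stub-trace' });
const getChatById = async (id: string) => ({ id, title: 'stub chat' });
const getMessageById = async (id: string) => ({ id, content: 'stub message' });

async function prepareAnalystRun(chatId: string, messageId: string) {
  // Fire-and-forget logger init: a failure is recorded but never blocks the run.
  const loggerPromise = initBraintrustLogger().catch((err) => {
    console.error('braintrust init failed', err);
    return null;
  });

  // Independent lookups execute concurrently instead of one after another.
  const [chat, message] = await Promise.all([getChatById(chatId), getMessageById(messageId)]);

  return { loggerPromise, chat, message };
}

prepareAnalystRun('chat-1', 'msg-1').then((run) => console.log(run.chat.title));
```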

* fixed "Reasoned for X" so that it rounds down to the minute

* Update users page

* update build pipeline for new web

* document title update

* Named chats for page

* Datasets titles

* Refactor visualization tools and enhance error handling in retryable agent stream. Removed unused metricValueLabel from metrics file tool, updated metric configuration schemas, and improved healing mechanism for tool errors during streaming.

* analyst

* document title updates

* Update useDocumentTitle.tsx

* Refactor tool choice configuration in create-todos-step to use structured object. Remove exponential backoff logic from retryable agent stream for healable errors. Introduce new test for real-world healing scenarios in retryable agent stream.
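Roughly, the healable-error retry path described above could look like the following; the names are hypothetical and the real stream wrapper is more involved:

```typescript
type HealableError = { healable: true; healingMessage: string };

const isHealable = (err: unknown): err is HealableError =>
  typeof err === 'object' && err !== null && (err as HealableError).healable === true;

// Retry a streaming call. Healable tool errors are retried immediately (no
// exponential backoff) with a healing message carried into the next attempt;
// anything else is rethrown.
async function retryableStream<T>(
  run: (healingMessages: string[]) => Promise<T>,
  maxRetries = 3
): Promise<T> {
  const healingMessages: string[] = [];
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await run(healingMessages);
    } catch (err) {
      if (isHealable(err) && attempt < maxRetries) {
        healingMessages.push(err.healingMessage);
        continue;
      }
      throw err;
    }
  }
  throw new Error('unreachable');
}

// Example: the first attempt throws a healable error, the retry succeeds.
retryableStream(async (messages) => {
  if (messages.length === 0) throw { healable: true, healingMessage: 'fix tool args' };
  return `recovered after: ${messages.join(', ')}`;
}).then(console.log);
```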

* Refactor SQL validation logic in modify-metrics-file-tool to skip unnecessary checks when SQL has not changed. Enhance error handling and update validation messages. Clean up code formatting for improved readability.

* update collapse for filecard

* chevron collapse

* Jacob prompt changes (#376)

* prompt changes to improve filtering logic and handle priv/sec errors

* prompt changes to make aggregation better and improved filter best practices

* Update packages/ai/src/steps/create-todos-step.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update packages/ai/src/agents/think-and-prep-agent/think-and-prep-instructions.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update packages/ai/src/steps/create-todos-step.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>
Co-authored-by: dal <dallin@buster.so>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* think and prep

* change header and strong fonts weights

* Update get collection

* combo chart x axis update

* Create a chart schemas as types

* schema types

* simple unit tests for line chart props

* fix the response file ordering with active selection.

* copy around reasoning messages taken care of

* fix nullable user message and file processing and such.

* update ticks for chart config

* fix todo parsing.

* app markdown update

* Update splitter to use border instead of width

* change ml

* If no file is found we should auto redirect

* Refactor database connection handling to support SSL modes. Introduced functions to extract SSL parameters and manage connections based on SSL requirements, including a custom verifier for unverified connections.
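A hedged sketch of how SSL parameters might be extracted from a connection string and mapped to TLS options, including a permissive path for unverified connections; the exact mode-to-option mapping is an assumption:

```typescript
import type { ConnectionOptions } from 'node:tls';

// Pull sslmode out of a Postgres connection string and translate it into the
// TLS options a driver would accept; "require"/"prefer" get an encrypted but
// unverified connection, mirroring a custom no-verify path.
function extractSslConfig(connectionString: string): ConnectionOptions | false {
  const url = new URL(connectionString);
  const sslmode = url.searchParams.get('sslmode') ?? 'prefer';

  switch (sslmode) {
    case 'disable':
      return false;
    case 'verify-ca':
    case 'verify-full':
      return { rejectUnauthorized: true };
    default:
      // require, prefer, allow: encrypt but skip certificate verification.
      return { rejectUnauthorized: false };
  }
}

console.log(extractSslConfig('postgres://user:pass@host:5432/db?sslmode=require'));
```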

* black box message update

* chat title updates

* optimizations for trigger.

* some keepalive logic on the anthropic cached

* keep title empty until new one

* no duplicate messages

* null user message on asset pull

* posthog error handling

* 20 sec idle timeout on anthropic

* null req message

* fixed missing modification names

* Refactor tool call handling to support new content array format in asset messages and context loaders

* cache most recent file from workflow

* Enhance date and number detection in createDataMetadata function to improve data type handling for metrics files

* group hover effect for message

* logging for chat

* Add messageId handling and file association tracking in dashboard and metrics tools

- Updated runtime context to include messageId in create and modify dashboard and metrics file tools.
- Implemented file association tracking based on messageId in create and modify functions for both dashboards and metrics.
- Ensured type consistency by using AnalystRuntimeContext in runtime context parameters.
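Illustrative only: the messageId-aware runtime context and file-association tracking could be shaped roughly like this, with an in-memory array standing in for the database write:

```typescript
interface AnalystRuntimeContext {
  userId: string;
  chatId: string;
  messageId: string; // threaded through so tools can attribute created files
}

interface FileAssociation {
  messageId: string;
  fileId: string;
  fileType: 'metric' | 'dashboard';
}

// In-memory stand-in for the real messages-to-files database write.
const associations: FileAssociation[] = [];

// Called by the create/modify metric and dashboard tools after a file is
// written, linking the file back to the message that produced it.
function trackFileAssociation(
  ctx: AnalystRuntimeContext,
  fileId: string,
  fileType: FileAssociation['fileType']
): void {
  associations.push({ messageId: ctx.messageId, fileId, fileType });
}

trackFileAssociation({ userId: 'u-1', chatId: 'c-1', messageId: 'm-1' }, 'metric-123', 'metric');
console.log(associations);
```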

* logging for chat

* message type update

* Route to first file instead

* trigger moved to catalog

* Enhance file selection logic to support YAML parsing and improve logging

- Updated `extractMetricIdsFromDashboard` to first attempt JSON parsing, falling back to a regex-based YAML parsing for metric IDs.
- Added detailed debug logging in `selectFilesForResponse` to track file selection process, including metrics and dashboards involved.
- Introduced tests for various scenarios in `file-selection.test.ts` to ensure correct behavior with dashboard context and edge cases.
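A simplified sketch of the JSON-first, regex-fallback parsing described above; the dashboard shape and the UUID pattern are assumptions:

```typescript
// Extract metric ids from a dashboard file: parse as JSON first, then fall
// back to scanning the raw YAML text for UUID-shaped ids if parsing fails.
const UUID_RE = /\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b/gi;

interface DashboardRow {
  items?: Array<{ id?: string }>;
}

function extractMetricIdsFromDashboard(content: string): string[] {
  try {
    const parsed = JSON.parse(content) as { rows?: DashboardRow[] };
    return (parsed.rows ?? [])
      .flatMap((row) => (row.items ?? []).map((item) => item.id ?? ''))
      .filter((id) => id.length > 0);
  } catch {
    // Not valid JSON: take any UUIDs found in the YAML text, deduplicated.
    return Array.from(new Set(content.match(UUID_RE) ?? []));
  }
}

const yamlDashboard = 'rows:\n  - items:\n      - id: 123e4567-e89b-12d3-a456-426614174000';
console.log(extractMetricIdsFromDashboard(yamlDashboard));
```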

* trigger dev v4-beta

* Retry + Self Healing (#381)

* Refactor retry logic in analyst and think-and-prep steps

Co-authored-by: dallin <dallin@buster.so>

* some fixes

* console log error

* self healing

* todos retry

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* remove lots of logs

* Remove chat streaming

* Remove chat streaming

* timeout

* Change to updated at field

* link to home

* Update timeout settings for HTTP and HTTPS agents from 20 seconds to 10 seconds for improved responsiveness.

* Add utils module and integrate message conversion in post_chat_handler

* Implement error handling for extract values (#382)

* Remove chat streaming

* Improve error handling and logging in extract values and chat title steps

Co-authored-by: dallin <dallin@buster.so>

---------

Co-authored-by: Nate Kelley <nate@buster.so>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* loading icon for buster avatar

* finalize tooltip cache

* upgrade mastra

* increase retries

* Add redo functionality for chat messages

- Introduced `redoFromMessageId` parameter in `handleExistingChat` to allow users to specify a message to redo from.
- Implemented validation to ensure the specified message belongs to the current chat.
- Added `softDeleteMessagesFromPoint` function to soft delete a message and all subsequent messages in the same chat, facilitating the redo feature.
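A rough sketch of what `softDeleteMessagesFromPoint` might do, assuming a postgres-js client and a `messages` table with `deleted_at`/`created_at` columns (table and column names are assumptions):

```typescript
import postgres from 'postgres';

const sql = postgres(process.env.DATABASE_URL ?? '');

// Soft delete a message and every later message in the same chat, so the
// conversation can be redone from that point onward.
export async function softDeleteMessagesFromPoint(chatId: string, messageId: string): Promise<number> {
  const result = await sql`
    update messages
    set deleted_at = now()
    where chat_id = ${chatId}
      and deleted_at is null
      and created_at >= (select created_at from messages where id = ${messageId})
  `;
  return result.count;
}
```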

* fix electric potential memory leak

* tooltip cache and chart cleanup

* Update bullet to be more indented

* latest version number

* add support endpoint to new server

* Fix jank in combo bar charts

* index check for dashboard

* Collapse only if there are metrics

* Is finished reasoning back

* Update dependencies and enhance chat message handling

- Upgraded `@mastra/core` to version 0.10.8 and added `node-sql-parser` at version 5.3.10 in the lock file.
- Improved integration tests for chat message redo functionality, ensuring correct behavior when deriving `chat_id` from `message_id`.
- Enhanced error handling and validation in the `initializeChat` function to manage cases where `chat_id` is not provided.

* Update pnpm-lock and enhance chat message integration tests

- Added `node-sql-parser` version 5.3.10 to dependencies and updated the lock file.
- Improved integration tests for chat message redo functionality, ensuring accurate deletion and retrieval of messages.
- Enhanced the `initializeChat` function to derive `chat_id` from `message_id` when not provided, improving error handling and validation.
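The `chat_id` derivation described above could look roughly like this, with a stub lookup in place of the real query:

```typescript
// Stub for the real lookup that reads chat_id off the messages table.
const getChatIdForMessage = async (messageId: string): Promise<string | null> =>
  messageId === 'known-message' ? 'chat-42' : null;

async function resolveChatId(opts: { chatId?: string; messageId?: string }): Promise<string> {
  if (opts.chatId) return opts.chatId;
  if (opts.messageId) {
    const derived = await getChatIdForMessage(opts.messageId);
    if (derived) return derived;
    throw new Error(`No chat found for message ${opts.messageId}`);
  }
  throw new Error('Either chatId or messageId must be provided');
}

resolveChatId({ messageId: 'known-message' }).then(console.log); // "chat-42"
```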

* remove .env import breaking build

* add updated at to the get chat handler

* small runtime error fix

* permission tests passing

* return updated at on the get chat handler now

* sql parser fixes

* Implement chat access control logic and add comprehensive tests

- Developed the `canUserAccessChat` function to determine user access to chats based on direct permissions, collection permissions, creator status, and organizational roles.
- Introduced helper functions for checking permissions and retrieving chat information.
- Added integration tests to validate access control logic, covering various scenarios including direct permissions, collection permissions, and user roles.
- Created unit tests to ensure the correctness of the access control function with mocked database interactions.
- Included simple integration tests to verify functionality with existing database data.
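In outline, `canUserAccessChat` cascades through the checks listed above; the sketch below uses stub permission queries and is not the actual implementation:

```typescript
interface AccessCheckInput {
  userId: string;
  chatId: string;
}

// Stubs standing in for the real permission queries.
const hasDirectPermission = async (_input: AccessCheckInput) => false;
const hasCollectionPermission = async (_input: AccessCheckInput) => false;
const isChatCreator = async (input: AccessCheckInput) => input.userId === 'creator';
const isWorkspaceOrDataAdmin = async (userId: string) => userId === 'admin';

// Access is granted by the first check that passes: a direct permission, a
// permission inherited from a containing collection, being the chat's creator,
// or holding an admin-level organizational role.
export async function canUserAccessChat(input: AccessCheckInput): Promise<boolean> {
  if (await hasDirectPermission(input)) return true;
  if (await hasCollectionPermission(input)) return true;
  if (await isChatCreator(input)) return true;
  return isWorkspaceOrDataAdmin(input.userId);
}

canUserAccessChat({ userId: 'creator', chatId: 'c-1' }).then(console.log); // true
```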

* sql parser and int tests working.

* fix test and lint issues

* comment to kick off deployment lol

* access controls on datasets

* electric context bug fix with sql helpers.

* permission and read only

* Add lru-cache dependency and export cache management functions

- Added `lru-cache` as a dependency in the access-controls package.
- Exported new cache management functions from `chats-cached` module, including `canUserAccessChatCached`, `getCacheStats`, `resetCacheStats`, `clearCache`, `invalidateAccess`, `invalidateUserAccess`, and `invalidateChatAccess`.
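A minimal sketch of the cached wrapper built on `lru-cache`; the cache size, TTL, key format, and stat bookkeeping are assumptions:

```typescript
import { LRUCache } from 'lru-cache';

// In-memory cache in front of the access check.
const cache = new LRUCache<string, boolean>({ max: 10_000, ttl: 60_000 });
let hits = 0;
let misses = 0;

// Stub for the uncached check exported from the same package.
const canUserAccessChat = async (userId: string, _chatId: string) => userId !== 'blocked';

export async function canUserAccessChatCached(userId: string, chatId: string): Promise<boolean> {
  const key = `${userId}:${chatId}`;
  const cached = cache.get(key);
  if (cached !== undefined) {
    hits += 1;
    return cached;
  }
  misses += 1;
  const allowed = await canUserAccessChat(userId, chatId);
  cache.set(key, allowed);
  return allowed;
}

export const getCacheStats = () => ({ hits, misses, size: cache.size });
export const clearCache = () => cache.clear();
export const invalidateChatAccess = (chatId: string) => {
  // Snapshot keys before deleting so iteration stays stable.
  for (const key of [...cache.keys()]) {
    if (key.endsWith(`:${chatId}`)) cache.delete(key);
  }
};
```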

* packages deploy as well

* wrong workflow lol

* Update AppVerticalCodeSplitter.tsx

* Add error handling for query run and SQL save operations

Co-authored-by: natemkelley <natemkelley@gmail.com>

* Trim whitespace from input values before sending chat prompts

Co-authored-by: natemkelley <natemkelley@gmail.com>

* type in think-and-prep

* use the cached access chat

* update package version

* new asset import message

* Error fallback for login

* Update BusterChart.BarChart.stories.tsx

* Staging changes to fix number card titles, combo chart axis, and using dynamic filters (#386)

Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>

* db init command pass through

* combo chart fixes (#387)

Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>

* clarifying question and connection logic

* pino pretty error fix

* clarifying is a finishing tool

* change update latest version logic

* Update support endpoint

* fixes for horizontal bar charts and added the combo chart logic to update metrics (#388)

Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>

* permission fix on dashboard metric handlers for workspace and data admin

* Add more try catches

* Hide avatar is no more

* Horizontal bar fixes (#389)

* fixes for horizontal bar charts and added the combo chart logic to update metrics

* hopefully fixed horizontal bar charts

---------

Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>

* reasoning shimmer update

* Make the embed flow work with versions

* new account warning update

* Move support modal

* compact number for pie label

* Add final reasoning message tracking and workflow start time to chunk processor and related steps

- Introduced `finalReasoningMessage` to schemas in `analyst-step`, `mark-message-complete-step`, and `create-todos-step`.
- Updated `ChunkProcessor` to calculate and store the final reasoning message based on workflow duration.
- Enhanced various steps to utilize the new `workflowStartTime` for better tracking of execution duration.
- Improved database update logic to include `finalReasoningMessage` when applicable.
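For illustration, the final reasoning message could be derived from `workflowStartTime` like this, rounding down to whole minutes per the earlier fix (exact wording assumed):

```typescript
// Build the "Reasoned for ..." summary from the workflow start time, rounding
// down to whole minutes; runs shorter than a minute report seconds instead.
function buildFinalReasoningMessage(workflowStartTime: number, now = Date.now()): string {
  const elapsedMs = Math.max(0, now - workflowStartTime);
  const minutes = Math.floor(elapsedMs / 60_000);
  if (minutes < 1) {
    return `Reasoned for ${Math.floor(elapsedMs / 1000)} seconds`;
  }
  return `Reasoned for ${minutes} minute${minutes === 1 ? '' : 's'}`;
}

console.log(buildFinalReasoningMessage(Date.now() - 150_000)); // "Reasoned for 2 minutes"
```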

* 9 digit cutoff for pie

* trigger update

* test on mastra braintrust

* test deployment

* testing

* pnpm install

* pnpm

* node 22

* pnpm version

* trigger main

* get initial chat file

* hono main deployment

* clear timeouts

* Remove console logs

* migration test to staging

* db url

* try again

* k get rid of tls var

* hmmm, let's try this

* mark migrations

* fix migration file?

* drizzle-kit upgrade

* tweaks to the github actions

---------

Co-authored-by: Nate Kelley <nate@buster.so>
Co-authored-by: porter-deployment-app[bot] <87230664+porter-deployment-app[bot]@users.noreply.github.com>
Co-authored-by: Nate Kelley <133379588+nate-kelley-buster@users.noreply.github.com>
Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>
Co-authored-by: jacob-buster <jacob@buster.so>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: natemkelley <natemkelley@gmail.com>

* cert location copy moved (#392)

* clean up workflows db migration

* deploy to staging (#393)

* Mastra braintrust (#394)

* Prism highlighting update

* Fix merge conflicts

---------

Co-authored-by: dal <dallin@buster.so>
Co-authored-by: porter-deployment-app[bot] <87230664+porter-deployment-app[bot]@users.noreply.github.com>
Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>
Co-authored-by: jacob-buster <jacob@buster.so>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: natemkelley <natemkelley@gmail.com>

* Remove logs

* Remove logs (#395)

* Fix broken web unit tests

* Remove useless test

* Update helpers.test.ts

* Create tsconfig.json

---------

Co-authored-by: Nate Kelley <nate@buster.so>
Co-authored-by: porter-deployment-app[bot] <87230664+porter-deployment-app[bot]@users.noreply.github.com>
Co-authored-by: Nate Kelley <133379588+nate-kelley-buster@users.noreply.github.com>
Co-authored-by: Jacob Anderson <jacobanderson@Jacobs-MacBook-Air.local>
Co-authored-by: jacob-buster <jacob@buster.so>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: natemkelley <natemkelley@gmail.com>
Committed by dal on 2025-07-03 06:32:18 -07:00 (via GitHub)
parent 6d90f585cf
commit 3db2cf72b2
10401 changed files with 155777 additions and 31818 deletions

.DS_Store (vendored): binary file, not shown

View File

@ -12,6 +12,7 @@ SUPABASE_SERVICE_ROLE_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.ey AgCiAgICAicm9
POSTHOG_TELEMETRY_KEY="phc_zZraCicSTfeXX5b9wWQv2rWG8QB4Z3xlotOT7gFtoNi"
TELEMETRY_ENABLED="true"
MAX_RECURSION="15"
SUPABASE_ANON_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.ey AgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE"
# AI VARS
RERANK_API_KEY="your_rerank_api_key"
@ -27,3 +28,12 @@ NEXT_PUBLIC_SUPABASE_URL="http://kong:8000" # External URL for Supabase (Kong pr
NEXT_PUBLIC_WS_URL="ws://localhost:3001"
NEXT_PUBLIC_SUPABASE_ANON_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.ey AgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE"
NEXT_PRIVATE_SUPABASE_SERVICE_ROLE_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.ey AgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q"
# TS SERVER
SERVER_PORT=3002
# ELECTRIC
ELECTRIC_PROXY_URL=http://localhost:3003
ELECTRIC_PORT=3003
ELECTRIC_INSECURE=false
ELECTRIC_SECRET=my-little-buttercup-has-the-sweetest-smile

View File

@ -27,17 +27,7 @@ updates:
include: "scope"
- package-ecosystem: "npm"
directory: "/api"
schedule:
interval: "weekly"
source-branch: "main"
target-branch: "staging"
open-pull-requests-limit: 10
labels:
- "dependencies"
- package-ecosystem: "npm"
directory: "/web"
directory: "/"
schedule:
interval: "weekly"
source-branch: "main"

.github/workflows/biome-lint.yml (new file, 156 lines)
View File

@ -0,0 +1,156 @@
name: Biome Lint Check
on:
push:
branches: [main]
pull_request:
branches: [main]
paths:
- '**/*.ts'
- '**/*.tsx'
- '**/*.js'
- '**/*.jsx'
- '**/*.json'
- 'biome.json'
- '.github/workflows/biome-lint.yml'
jobs:
biome-lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.9.0'
cache: 'pnpm'
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run Biome lint check
run: pnpm run check
- name: Upload node_modules
uses: actions/upload-artifact@v4
with:
name: node_modules
path: node_modules/
retention-days: 1
build-database:
runs-on: ubuntu-latest
needs: biome-lint
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.9.0'
- name: Download node_modules
uses: actions/download-artifact@v4
with:
name: node_modules
path: node_modules/
- name: Build database package
run: pnpm run build
working-directory: packages/database
build-access-controls:
runs-on: ubuntu-latest
needs: [biome-lint, build-database]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.9.0'
- name: Download node_modules
uses: actions/download-artifact@v4
with:
name: node_modules
path: node_modules/
- name: Build access-controls package
run: pnpm run build
working-directory: packages/access-controls
build-data-source:
runs-on: ubuntu-latest
needs: [biome-lint, build-database]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.9.0'
- name: Download node_modules
uses: actions/download-artifact@v4
with:
name: node_modules
path: node_modules/
- name: Build data-source package
run: pnpm run build
working-directory: packages/data-source
build-web-tools:
runs-on: ubuntu-latest
needs: [biome-lint, build-database]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.9.0'
- name: Download node_modules
uses: actions/download-artifact@v4
with:
name: node_modules
path: node_modules/
- name: Build web-tools package
run: pnpm run build
working-directory: packages/web-tools
build-test-utils:
runs-on: ubuntu-latest
needs: [biome-lint, build-database]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.9.0'
- name: Download node_modules
uses: actions/download-artifact@v4
with:
name: node_modules
path: node_modules/
- name: Build test-utils package
run: pnpm run build
working-directory: packages/test-utils

View File

@ -0,0 +1,44 @@
name: Database Migrations
on:
push:
branches: [main, staging]
jobs:
migrate:
runs-on: blacksmith-8vcpu-ubuntu-2204
environment: ${{ github.ref_name }}
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.x'
- name: Install pnpm
uses: pnpm/action-setup@v2
with:
version: 9.15.0
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v3
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run migrations
run: pnpm run db:migrate
env:
DATABASE_URL: ${{ secrets.DB_URL }}
NODE_TLS_REJECT_UNAUTHORIZED: '0'

View File

@ -67,6 +67,7 @@ jobs:
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'pnpm'
- name: Set up Rust toolchain
uses: actions-rs/toolchain@v1
@ -165,12 +166,12 @@ jobs:
echo "Bumping Web version using spec: $VERSION_SPEC..."
cd web
OLD_WEB_VERSION=$(jq -r .version package.json)
npm version "$VERSION_SPEC" --no-git-tag-version --allow-same-version
pnpm version "$VERSION_SPEC" --no-git-tag-version --allow-same-version
NEW_WEB_VERSION=$(jq -r .version package.json)
echo "Web: $OLD_WEB_VERSION -> $NEW_WEB_VERSION"
cd ..
if [[ "$OLD_WEB_VERSION" != "$NEW_WEB_VERSION" ]]; then
git add web/package.json web/package-lock.json
git add web/package.json pnpm-lock.yaml
COMMIT_MESSAGE_PREFIX="$COMMIT_MESSAGE_PREFIX bump web to v$NEW_WEB_VERSION;"
COMMIT_CHANGES=true
echo "new_web_version=$NEW_WEB_VERSION" >> $GITHUB_OUTPUT

View File

@ -0,0 +1,32 @@
"on":
push:
branches:
- mastra-braintrust
paths:
- apps/server/**
- packages/**
name: Deploy to buster-server
jobs:
porter-deploy:
runs-on: blacksmith-32vcpu-ubuntu-2204
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set Github tag
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Setup porter
uses: porter-dev/setup-porter@v0.1.0
- name: Deploy stack
timeout-minutes: 30
run: exec porter apply
env:
PORTER_APP_NAME: buster-server
PORTER_CLUSTER: "3155"
PORTER_DEPLOYMENT_TARGET_ID: 7f44813f-4b0c-4be7-add0-94ebb61256bf
PORTER_HOST: https://dashboard.porter.run
PORTER_PR_NUMBER: ${{ github.event.number }}
PORTER_PROJECT: "9309"
PORTER_REPO_NAME: ${{ github.event.repository.name }}
PORTER_TAG: ${{ steps.vars.outputs.sha_short }}
PORTER_TOKEN: ${{ secrets.PORTER_APP_9309_3155 }}

View File

@ -0,0 +1,30 @@
on:
push:
branches:
- main
paths:
- apps/server/**
- packages/**
name: Deploy to Porter
jobs:
porter-deploy:
runs-on: blacksmith-32vcpu-ubuntu-2204
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set Github tag
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Setup porter
uses: porter-dev/setup-porter@v0.1.0
- name: Deploy stack
timeout-minutes: 30
run: exec porter apply
env:
PORTER_CLUSTER: 3155
PORTER_HOST: https://dashboard.porter.run
PORTER_PROJECT: 9309
PORTER_APP_NAME: main-hono-server
PORTER_TAG: ${{ steps.vars.outputs.sha_short }}
PORTER_TOKEN: ${{ secrets.PORTER_APP_9309_3155 }}
PORTER_DEPLOYMENT_TARGET_ID: 7f44813f-4b0c-4be7-add0-94ebb61256bf

View File

@ -3,48 +3,10 @@
branches:
- main
paths:
- api/**
- apps/api/**
- .github/workflows/porter_app_main_3155.yml
name: Deploy to main
jobs:
database-deploy:
runs-on: blacksmith-16vcpu-ubuntu-2204
environment: main
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
override: true
- name: Cache Rust dependencies
uses: Swatinem/rust-cache@v2
- name: Install Diesel CLI
run: cargo install diesel_cli --no-default-features --features postgres
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Download Postgres certificate from S3
run: |
mkdir -p ~/.postgresql
aws s3 cp ${{ secrets.CERT_S3_URL }} ~/.postgresql/root.crt
- name: Run migrations
working-directory: ./api
run: diesel migration run
env:
DATABASE_URL: ${{ secrets.DB_URL }}
PGSSLMODE: verify-full
porter-deploy:
runs-on: blacksmith-32vcpu-ubuntu-2204
environment: main
@ -59,7 +21,7 @@ jobs:
aws-region: ${{ secrets.AWS_REGION }}
- name: Download SSL certificate from S3
run: |
aws s3 cp ${{ secrets.CERT_S3_URL }} ./api/cert.pem
aws s3 cp ${{ secrets.CERT_S3_URL }} ./apps/api/cert.pem
- name: Set Github tag
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

View File

@ -0,0 +1,41 @@
"on":
push:
branches:
- mastra-braintrust
paths:
- apps/api/**
name: Deploy to mastra-braintrust-api
jobs:
porter-deploy:
runs-on: blacksmith-16vcpu-ubuntu-2204
environment: staging
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Download SSL certificate from S3
run: |
aws s3 cp ${{ secrets.CERT_S3_URL }} ./apps/api/cert.pem
- name: Set Github tag
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Setup porter
uses: porter-dev/setup-porter@v0.1.0
- name: Deploy stack
timeout-minutes: 30
run: exec porter apply
env:
PORTER_APP_NAME: mastra-braintrust-api
PORTER_CLUSTER: "3155"
PORTER_DEPLOYMENT_TARGET_ID: 7f44813f-4b0c-4be7-add0-94ebb61256bf
PORTER_HOST: https://dashboard.porter.run
PORTER_PR_NUMBER: ${{ github.event.number }}
PORTER_PROJECT: "9309"
PORTER_REPO_NAME: ${{ github.event.repository.name }}
PORTER_TAG: ${{ steps.vars.outputs.sha_short }}
PORTER_TOKEN: ${{ secrets.PORTER_APP_9309_3155 }}

View File

@ -3,48 +3,9 @@
branches:
- staging
paths:
- api/**
- apps/api/**
name: Deploy to staging
jobs:
database-deploy:
runs-on: blacksmith-16vcpu-ubuntu-2204
environment: staging
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
override: true
- name: Cache Rust dependencies
uses: Swatinem/rust-cache@v2
- name: Install Diesel CLI
run: cargo install diesel_cli --no-default-features --features postgres
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Download Postgres certificate from S3
run: |
mkdir -p ~/.postgresql
aws s3 cp ${{ secrets.CERT_S3_URL }} ~/.postgresql/root.crt
- name: Run migrations
working-directory: ./api
run: diesel migration run
env:
DATABASE_URL: ${{ secrets.DB_URL }}
PGSSLMODE: verify-full
porter-deploy:
runs-on: blacksmith-16vcpu-ubuntu-2204
environment: staging
@ -59,7 +20,7 @@ jobs:
aws-region: ${{ secrets.AWS_REGION }}
- name: Download SSL certificate from S3
run: |
aws s3 cp ${{ secrets.CERT_S3_URL }} ./api/cert.pem
aws s3 cp ${{ secrets.CERT_S3_URL }} ./apps/api/cert.pem
- name: Set Github tag
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT


@ -0,0 +1,47 @@
name: Deploy Trigger.dev Tasks - Production
on:
push:
branches:
- main
jobs:
deploy-production:
runs-on: blacksmith-8vcpu-ubuntu-2204
environment: production
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.x'
- name: Install pnpm
uses: pnpm/action-setup@v2
with:
version: 9.15.0
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v3
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: 🚀 Deploy to Production
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
cd apps/trigger
pnpm dlx trigger.dev@v4-beta deploy --env production


@ -0,0 +1,47 @@
name: Deploy Trigger.dev Tasks - Staging
on:
push:
branches:
- staging
jobs:
deploy-staging:
runs-on: blacksmith-8vcpu-ubuntu-2204
environment: staging
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22.x'
- name: Install pnpm
uses: pnpm/action-setup@v2
with:
version: 9.15.0
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v3
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: 🚀 Deploy to Staging
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
cd apps/trigger
pnpm dlx trigger.dev@v4-beta deploy --env staging


@ -24,19 +24,24 @@ jobs:
with:
node-version: "22"
- name: Mount NPM Cache
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 9
- name: Mount PNPM Cache
uses: useblacksmith/stickydisk@v1
with:
key: frontend-npm-${{ github.sha }}
path: ~/.npm
key: frontend-pnpm-${{ github.sha }}
path: ~/.pnpm-store
- name: Install Frontend Dependencies
working-directory: ./web
run: npm install
run: pnpm install --frozen-lockfile
- name: Build Frontend
working-directory: ./web
run: npm run build
run: pnpm run build
env:
NEXT_PUBLIC_API_URL: http://localhost:3001
NEXT_PUBLIC_URL: http://localhost:3000
@ -104,11 +109,16 @@ jobs:
with:
node-version: "22"
- name: Mount NPM Cache
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 9
- name: Mount PNPM Cache
uses: useblacksmith/stickydisk@v1
with:
key: tests-npm-${{ github.sha }}
path: ~/.npm
key: tests-pnpm-${{ github.sha }}
path: ~/.pnpm-store
- name: Download Frontend Build
uses: actions/download-artifact@v4
@ -123,8 +133,8 @@ jobs:
- name: Install Dependencies for Testing
working-directory: ./web
run: |
npm install
npx playwright install --with-deps
pnpm install --frozen-lockfile
pnpm exec playwright install --with-deps
- name: Download API Image
uses: actions/download-artifact@v4
@ -175,7 +185,7 @@ jobs:
- name: Run Playwright Tests
working-directory: ./web
run: |
npx playwright test
pnpm exec playwright test
env:
CI: "true"
DEBUG: "pw:api"


@ -21,13 +21,16 @@ jobs:
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
cache: "npm"
cache-dependency-path: "web/package-lock.json"
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 9
- name: Install dependencies
run: npm ci
run: pnpm install --frozen-lockfile
working-directory: ./web
- name: Run linting
run: npm run lint:ci
run: pnpm run lint:ci
working-directory: ./web


@ -15,17 +15,17 @@ jobs:
working-directory: web
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v3
uses: actions/setup-node@v4
with:
node-version: "22"
cache: "npm"
cache-dependency-path: web/package-lock.json
cache: "pnpm"
cache-dependency-path: pnpm-lock.yaml
- name: Install dependencies
run: npm ci
run: pnpm install --frozen-lockfile
- name: Run Jest tests
run: npm run test
run: pnpm run test

.gitignore

@ -11,6 +11,8 @@ crash.*.log
.fastembed_cache/
target/
# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# password, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
@ -40,12 +42,8 @@ terraform.rc
.env
# Generated by Cargo
# will have compiled files and executables
api/debug/
api/target/
api/build/
api/dist/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
@ -63,6 +61,9 @@ Cargo.lock
# Node.js dependencies
node_modules/
# Turborepo
.turbo
.secrets
/prds
@ -72,4 +73,17 @@ web/playwright-tests/auth-utils/auth.json
**/.claude/settings.local.json
**/*.private.*
.trigger
.claude
# Drizzle specific
drizzle/.env
drizzle/.env.*
drizzle/*.log
drizzle/meta/
**/evals/**/*.eval.private.ts
*.tsbuildinfo
/packages/aTest/.mastra
**/*.private.*

.pnpmrc

@ -0,0 +1,25 @@
# Enable workspace for monorepo
link-workspace-packages=true
prefer-workspace-packages=true
# Store directory for caching (uncommented for Turborepo)
store-dir=~/.pnpm-store
# Selective hoisting for shared tools
hoist=true
hoist-pattern[]=*eslint*
hoist-pattern[]=*typescript*
# Peer dependency handling
auto-install-peers=true
strict-peer-dependencies=false
# Lockfile and compatibility
lockfile-include-tarball-url=false
shamefully-hoist=false
# Registry (keep default unless using private registry)
registry=https://registry.npmjs.org/
# Enable side effects caching for performance
side-effects-cache=true

.vscode/buster.code-workspace

@ -0,0 +1,19 @@
{
"folders": [
{ "path": ".." },
{ "path": "../apps/web" }
],
"settings": {
"editor.defaultFormatter": "biomejs.biome",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"quickfix.biome": "explicit",
"source.organizeImports.biome": "explicit"
},
"typescript.preferences.importModuleSpecifier": "relative",
"typescript.suggest.autoImports": true,
"typescript.updateImportsOnFileMove.enabled": "always",
"vitest.maximumConfigs": 25
}
}

.vscode/settings.json

@ -0,0 +1,37 @@
{
"editor.defaultFormatter": "biomejs.biome",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"quickfix.biome": "explicit",
"source.organizeImports.biome": "explicit"
},
"typescript.preferences.importModuleSpecifier": "relative",
"typescript.suggest.autoImports": true,
"typescript.updateImportsOnFileMove.enabled": "always",
// Default Biome formatting for all file types
"[typescript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
},
// Resource-specific settings for apps/web directory
"[{apps/web/**/*.ts,apps/web/**/*.tsx,apps/web/**/*.js,apps/web/**/*.jsx,apps/web/**/*.json}]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit"
}
}
}

CLAUDE.md

@ -0,0 +1,612 @@
# CLAUDE.md
This file provides guidance to Claude Code when working with code in this monorepo.
## Monorepo Structure
This is a pnpm-based monorepo using Turborepo with the following structure:
### Apps (`@buster-app/*`)
- `apps/web` - Next.js frontend application
- `apps/server` - Node.js/Hono backend server
- `apps/trigger` - Background job processing with Trigger.dev v3
- `apps/electric-server` - Electric SQL sync server
- `apps/api` - Rust backend API (legacy)
- `apps/cli` - Command-line tools (Rust)
### Packages (`@buster/*`)
- `packages/ai` - AI agents, tools, and workflows using Mastra framework
- `packages/database` - Database schema, migrations, and utilities (Drizzle ORM)
- `packages/data-source` - Data source adapters (PostgreSQL, MySQL, BigQuery, Snowflake, etc.)
- `packages/access-controls` - Permission and access control logic
- `packages/stored-values` - Stored values management
- `packages/rerank` - Document reranking functionality
- `packages/server-shared` - Shared server types and utilities
- `packages/test-utils` - Shared testing utilities
- `packages/vitest-config` - Shared Vitest configuration
- `packages/typescript-config` - Shared TypeScript configuration
- `packages/web-tools` - Web scraping and research tools
- `packages/slack` - Standalone Slack integration (OAuth, messaging, channels)
- `packages/supabase` - Supabase setup and configuration
## Development Workflow
When writing code, follow this workflow to ensure code quality:
### 1. Write Modular, Testable Functions
- Create small, focused functions with single responsibilities
- Design functions to be easily testable with clear inputs/outputs
- Use dependency injection for external dependencies
- Follow existing patterns in the codebase
### 2. Build Features by Composing Functions
- Combine modular functions to create complete features
- Keep business logic separate from infrastructure concerns
- Use proper error handling at each level
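For illustration, a minimal sketch of the pattern described in steps 1 and 2 above (the `UserRepository`, `getDisplayName`, and `buildGreeting` names are hypothetical, not from the codebase):
```typescript
// A small, focused function with its dependency injected for testability.
interface UserRepository {
  findById(id: string): Promise<{ id: string; name: string } | null>;
}

export async function getDisplayName(repo: UserRepository, userId: string): Promise<string> {
  const user = await repo.findById(userId);
  if (!user) {
    throw new Error(`User not found: ${userId}`);
  }
  return user.name;
}

// A feature composed from the smaller function; infrastructure (the repo)
// stays injected, so the business logic remains easy to unit test.
export async function buildGreeting(repo: UserRepository, userId: string): Promise<string> {
  const name = await getDisplayName(repo, userId);
  return `Hello, ${name}!`;
}
```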
### 3. Ensure Type Safety
```bash
# Build entire monorepo to check types
turbo run build
# Build specific package/app
turbo run build --filter=@buster/ai
turbo run build --filter=@buster-app/web
# Type check without building
turbo run typecheck
turbo run typecheck --filter=@buster/database
```
### 4. Run Biome for Linting & Formatting
```bash
# Check files with Biome
pnpm run check path/to/file.ts
pnpm run check packages/ai
# Auto-fix linting, formatting, and import organization
pnpm run check:fix path/to/file.ts
pnpm run check:fix packages/ai
```
### 5. Run Tests with Vitest
```bash
# Run all tests
pnpm run test
# Run tests for specific package
turbo run test --filter=@buster/ai
# Run specific test file
pnpm run test path/to/file.test.ts
# Watch mode for development
pnpm run test:watch
```
## Code Quality Standards
### TypeScript Configuration
- **Strict mode enabled** - All strict checks are on
- **No implicit any** - Always use specific types
- **Strict null checks** - Handle null/undefined explicitly
- **No implicit returns** - All code paths must return
- **Consistent file casing** - Enforced by TypeScript
### Biome Rules (Key Enforcements)
- **`useImportType: "warn"`** - Use type-only imports when possible
- **`noExplicitAny: "error"`** - Never use `any` type
- **`noUnusedVariables: "error"`** - Remove unused code
- **`noNonNullAssertion: "error"`** - No `!` assertions
- **`noConsoleLog: "warn"`** - Avoid console.log in production
- **`useNodejsImportProtocol: "error"`** - Use `node:` prefix for Node.js imports
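A short illustrative snippet that satisfies these rules (the function shown is hypothetical):
```typescript
// Uses the node: protocol, a type-only import, and no `any` or `!` assertions.
import { readFile } from 'node:fs/promises';
import type { ZodSchema } from 'zod';

// Untrusted input is narrowed with a schema instead of being cast to `any`.
export async function readAndValidate<T>(path: string, schema: ZodSchema<T>): Promise<T> {
  const raw = await readFile(path, 'utf8');
  return schema.parse(JSON.parse(raw));
}
```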
### Testing Practices
#### Test File Naming & Location
- **Unit tests**: `filename.test.ts` (alongside the source file)
- **Integration tests**: `filename.int.test.ts` (alongside the source file)
- Never separate tests into their own folders - keep them with the code they test
#### Testing Strategy
1. **Prioritize mocking** for unit tests after understanding API/DB structure
2. **Integration tests** should focus on single connection confirmations
3. **Mock external dependencies** appropriately
4. **Use descriptive test names** that explain the behavior
5. **Write tests alongside implementation** for better coverage
#### Example Test Structure
```typescript
// user-service.ts
export function getUserById(id: string) { /* ... */ }
// user-service.test.ts (same directory)
import { describe, it, expect, vi } from 'vitest';
import { getUserById } from './user-service';
describe('getUserById', () => {
it('should return user when found', async () => {
// Test implementation
});
});
// user-service.int.test.ts (integration test)
import { describe, it, expect } from 'vitest';
import { getUserById } from './user-service';
describe('getUserById integration', () => {
it('should connect to database successfully', async () => {
// Single connection test
});
});
```
## Code Style Preferences
### Type Safety
- **Zod-First Approach** - Use Zod schemas as the single source of truth for both validation and types
- **Use `z.infer<typeof schema>` for types** - Prefer inferred types over separate interfaces
- **Never use `any`** - Biome enforces this with `noExplicitAny: "error"`
- **Avoid `unknown` unless necessary** - Prefer specific types or properly typed unions
- **Handle null/undefined explicitly** - TypeScript strict mode enforces this
- **Safe array access** - Use validation helpers when needed
- **Type-only imports** - Use `import type` for better performance
#### Zod-First Type Safety Pattern
```typescript
// ✅ Good: Zod schema as single source of truth
const userSchema = z.object({
id: z.string().min(1),
email: z.string().email(),
role: z.enum(['admin', 'user']),
});
type User = z.infer<typeof userSchema>; // Inferred type
// ✅ Good: Safe runtime validation
const validatedUser = userSchema.parse(rawData);
// ✅ Good: Safe array access when needed
import { validateArrayAccess } from '@buster/ai/utils/validation-helpers';
const firstItem = validateArrayAccess(array, 0, 'user processing');
// ❌ Avoid: Separate interface + unsafe access
interface User {
id: string;
email: string;
}
const user = rawData as User; // Unsafe type assertion
const firstItem = array[0]!; // Non-null assertion not allowed
```
### Import Organization
- Use **type-only imports** when importing only types: `import type { SomeType } from './types'`
- Biome automatically organizes imports with `pnpm run check:fix`
- Use Node.js protocol: `import { readFile } from 'node:fs'`
- Follow path aliases defined in each package's tsconfig.json
### String Handling
- **Prefer template literals** over string concatenation for better readability
- Use template literals for multi-line strings and string interpolation
#### String Handling Patterns
```typescript
// ✅ Good: Template literals
const message = `User ${userId} not found`;
const multiLine = `This is a
multi-line string`;
const path = `${baseUrl}/api/users/${userId}`;
// ❌ Avoid: String concatenation
const message = 'User ' + userId + ' not found';
const path = baseUrl + '/api/users/' + userId;
```
### Error Handling
- **Always use try-catch blocks** for async operations and external calls
- **Never use `any` in catch blocks** - Biome enforces this
- **Validate external data** with Zod schemas before processing
- **Provide meaningful error messages** with context for debugging
- **Handle errors at appropriate levels** - don't let errors bubble up uncaught
- **Use structured logging** for error tracking
#### Error Handling Patterns
```typescript
// ✅ Good: Comprehensive error handling
async function processUserData(userId: string) {
try {
const user = await getUserById(userId);
if (!user) {
throw new Error(`User not found: ${userId}`);
}
const validatedData = UserSchema.parse(user);
return await processData(validatedData);
} catch (error) {
logger.error('Failed to process user data', {
userId,
error: error instanceof Error ? error.message : 'Unknown error',
stack: error instanceof Error ? error.stack : undefined
});
throw new Error(`User data processing failed: ${error instanceof Error ? error.message : 'Unknown error'}`);
}
}
// ✅ Good: Database operations with error handling
async function createResource(data: CreateResourceInput) {
try {
const validatedData = CreateResourceSchema.parse(data);
return await db.transaction(async (tx) => {
const resource = await tx.insert(resources).values(validatedData).returning();
await tx.insert(resourceAudit).values({
resourceId: resource[0].id,
action: 'created',
createdAt: new Date()
});
return resource[0];
});
} catch (error) {
if (error instanceof ZodError) {
throw new Error(`Invalid resource data: ${error.errors.map(e => e.message).join(', ')}`);
}
logger.error('Database error creating resource', { data, error });
throw new Error('Failed to create resource');
}
}
// ❌ Avoid: Unhandled async operations
async function badExample(userId: string) {
const user = await getUserById(userId); // No error handling
return user.data; // Could fail if user is null
}
```
## Test Utilities
The `@buster/test-utils` package provides shared testing utilities:
### Environment Helpers
```typescript
import { setupTestEnvironment, withTestEnv } from '@buster/test-utils/env-helpers';
// Manual setup/teardown
beforeAll(() => setupTestEnvironment());
afterAll(() => cleanupTestEnvironment());
// Or use the wrapper
await withTestEnv(async () => {
// Your test code here
});
```
### Mock Helpers
```typescript
import { createMockFunction, mockConsole, createMockDate } from '@buster/test-utils/mock-helpers';
// Create vitest mock functions
const mockFn = createMockFunction<(arg: string) => void>();
// Mock console methods (allowed in tests)
const consoleMock = mockConsole();
// Test code that logs...
consoleMock.restore();
// Mock dates for time-sensitive tests
const dateMock = createMockDate(new Date('2024-01-01'));
// Test code...
dateMock.restore();
```
### Database Test Helpers
```typescript
import { createTestChat, cleanupTestChats } from '@buster/test-utils/database/chats';
import { createTestMessage, cleanupTestMessages } from '@buster/test-utils/database/messages';
// Create test data
const chat = await createTestChat({
userId: 'test-user',
title: 'Test Chat'
});
const message = await createTestMessage({
chatId: chat.id,
role: 'user',
content: 'Test message'
});
// Cleanup after tests
await cleanupTestMessages(chat.id);
await cleanupTestChats('test-user');
```
## Quick Command Reference
### Building & Type Checking
```bash
# Build all packages
turbo run build
# Build specific package/app
turbo run build --filter=@buster/ai
turbo run build --filter=@buster-app/web
# Type check only
turbo run typecheck
turbo run typecheck --filter=@buster/database
```
### Linting & Formatting
```bash
# Check and auto-fix with Biome
pnpm run check:fix path/to/file.ts
pnpm run check:fix packages/ai
# Check only (no fixes)
pnpm run check path/to/file.ts
```
### Testing
```bash
# Run all tests
pnpm run test
# Run tests for specific package
turbo run test --filter=@buster/ai
# Run specific test file
pnpm run test path/to/file.test.ts
# Watch mode
pnpm run test:watch
```
### Database Commands
```bash
pnpm run db:generate # Generate types from schema
pnpm run db:migrate # Run migrations
pnpm run db:push # Push schema changes
pnpm run db:studio # Open Drizzle Studio
```
## Helper Organization Pattern
When building helper functions, follow this organizational pattern:
### Database Helpers (in `packages/database/`)
```
packages/database/src/helpers/
├── index.ts # Export all helpers
├── messages.ts # Message-related helpers
├── users.ts # User-related helpers
├── chats.ts # Chat-related helpers
└── {entity}.ts # Entity-specific helpers
```
### Package-Specific Utilities
```
packages/{package}/src/utils/
├── index.ts # Export all utilities
├── {domain}/ # Domain-specific utilities
│ ├── index.ts
│ └── helpers.ts
└── helpers.ts # General helpers
```
### Key Principles
- **Co-locate helpers** with the schema/types they operate on
- **Group by entity** (one file per database table/domain object)
- **Export from package root** for easy importing
- **Use TypeScript** with proper types (no `any`)
- **Follow naming conventions** that clearly indicate purpose
### Example Usage
```typescript
// ✅ Good: Clear, typed helpers exported from package root
import { getRawLlmMessages, getMessagesForChat } from '@buster/database';
// ❌ Avoid: Direct database queries scattered throughout codebase
import { db, messages, eq } from '@buster/database';
const result = await db.select().from(messages).where(eq(messages.chatId, chatId));
```
## Background Job Processing (Trigger.dev)
The `apps/trigger` package provides background job processing using **Trigger.dev v3**.
### 🚨 CRITICAL: Always Use v3 Patterns
```typescript
// ✅ CORRECT - Always use this pattern
import { task } from '@trigger.dev/sdk/v3';
export const myTask = task({
id: 'my-task',
run: async (payload: InputType): Promise<OutputType> => {
// Task implementation
},
});
```
### Essential Requirements
1. **MUST export every task** from the file
2. **MUST use unique task IDs** within the project
3. **MUST import from** `@trigger.dev/sdk/v3`
4. **Use Zod schemas** for payload validation
### Common Task Patterns
#### Schema-Validated Task (Recommended)
```typescript
import { schemaTask } from '@trigger.dev/sdk/v3';
import { z } from 'zod';
// Define schema for type safety
export const TaskInputSchema = z.object({
userId: z.string(),
data: z.record(z.unknown()),
});
export type TaskInput = z.infer<typeof TaskInputSchema>;
export const processUserTask = schemaTask({
id: 'process-user',
schema: TaskInputSchema,
maxDuration: 300, // 5 minutes
run: async (payload) => {
// Payload is validated and typed
return { success: true };
},
});
```
#### Triggering Tasks
```typescript
import { tasks } from '@trigger.dev/sdk/v3';
import type { processUserTask } from '@buster-app/trigger/tasks';
// Trigger from API routes
const handle = await tasks.trigger<typeof processUserTask>('process-user', {
userId: 'user123',
data: {}
});
```
### Development Commands
```bash
# Development server
pnpm run trigger:dev
# Run tests
pnpm run trigger:test
# Deploy
pnpm run trigger:deploy
```
**See `apps/trigger/CLAUDE.md` for complete Trigger.dev guidelines.**
## Key Dependencies
- **Turborepo** - Monorepo orchestration and caching
- **pnpm** - Fast, disk space efficient package manager
- **Biome** - Fast linting and formatting (replaces ESLint/Prettier)
- **TypeScript** - Strict type checking across all packages
- **Vitest** - Fast unit testing framework
- **Zod** - Runtime validation and type inference
- **Mastra** - AI agent framework for LLM workflows
- **Trigger.dev v3** - Background job processing
- **Drizzle ORM** - Type-safe database toolkit
- **Braintrust** - LLM observability and evaluation
## Complete Development Workflow Example
When implementing a new feature:
```bash
# 1. Write your modular, testable functions
# 2. Compose them into the feature
# 3. Write tests alongside the code
# 4. Ensure type safety
turbo run build --filter=@buster/ai
# or for all packages:
turbo run build
# 5. Fix linting and formatting
pnpm run check:fix packages/ai
# 6. Run tests
turbo run test --filter=@buster/ai
# or specific test:
pnpm run test packages/ai/src/feature.test.ts
# 7. If all passes, commit your changes
git add .
git commit -m "feat: add new feature"
```
## Slack Package (@buster/slack)
The `@buster/slack` package is a **standalone Slack integration** with no database dependencies. It provides:
### Features
- **OAuth 2.0 Authentication** - Complete OAuth flow with state management
- **Channel Management** - List, validate, join/leave channels
- **Messaging** - Send messages, replies, updates with retry logic
- **Message Tracking** - Interface for threading support
- **Type Safety** - Zod validation throughout
### Architecture
The package uses **interface-based design** where consuming applications must implement:
- `ISlackTokenStorage` - For token persistence
- `ISlackOAuthStateStorage` - For OAuth state management
- `ISlackMessageTracking` - For message threading (optional)
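A rough sketch of how a consuming application might implement token storage. The interface name comes from the list above, but the method names (`getToken`/`setToken`/`deleteToken`) are assumptions; check the package's exported types for the actual contract:
```typescript
import type { ISlackTokenStorage } from '@buster/slack';

// Hypothetical in-memory implementation for local development or tests.
// Method names are assumed, not confirmed from the package.
class InMemoryTokenStorage implements ISlackTokenStorage {
  private tokens = new Map<string, string>();

  async getToken(key: string): Promise<string | null> {
    return this.tokens.get(key) ?? null;
  }

  async setToken(key: string, token: string): Promise<void> {
    this.tokens.set(key, token);
  }

  async deleteToken(key: string): Promise<void> {
    this.tokens.delete(key);
  }
}
```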
### Usage Pattern
```typescript
// All functions accept tokens as parameters
const channels = await channelService.getAvailableChannels(accessToken);
const result = await messagingService.sendMessage(accessToken, channelId, message);
```
### Testing
```bash
# Run tests
turbo run test --filter=@buster/slack
# Build
turbo run build --filter=@buster/slack
# Type check
turbo run typecheck --filter=@buster/slack
```
### Key Principles
- **No database dependencies** - Uses interfaces for storage
- **Token-based** - All functions accept tokens as parameters
- **Framework-agnostic** - Works with any Node.js application
- **Comprehensive error handling** - Typed errors with retry logic
## Important Notes
- **Never use `any`** - Biome will error on this
- **Always handle errors** properly with try-catch
- **Write tests alongside code** - not in separate folders
- **Use Zod for validation** - single source of truth
- **Run type checks** before committing
- **Follow existing patterns** in the codebase
This ensures high code quality and maintainability across the monorepo.
## Common Biome Overrides
Test files have relaxed rules to allow:
- `console.log` for debugging tests
- Non-null assertions (`!`) in test scenarios
- `any` type when mocking (though prefer proper types)
Database package allows `any` for Drizzle ORM compatibility.
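For illustration, the kind of test-only code these overrides tolerate (a sketch, not taken from the repo):
```typescript
// example.test.ts — patterns allowed in test files but flagged in production code.
import { describe, expect, it } from 'vitest';
import { getUserById } from './user-service';

describe('getUserById', () => {
  it('returns the requested user', async () => {
    const user = await getUserById('user-1');
    console.log('fetched user for debugging', user); // console.log permitted in tests
    expect(user!.id).toBe('user-1'); // non-null assertion permitted in tests
  });
});
```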
## Environment Variables
The monorepo uses a strict environment mode. Key variables include:
- Database connections (Supabase, PostgreSQL, etc.)
- API keys (OpenAI, Anthropic, etc.)
- Service URLs and configurations
See `.env.example` files in each package for required variables.
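A common pattern for the strict environment mode is to validate `process.env` at startup; here is a minimal sketch assuming Zod-based validation (the variable names are examples only):
```typescript
import { z } from 'zod';

// Validate required environment variables once at startup so missing values fail fast.
const envSchema = z.object({
  DATABASE_URL: z.string().url(),
  OPENAI_API_KEY: z.string().min(1),
  ANTHROPIC_API_KEY: z.string().min(1),
});

export const env = envSchema.parse(process.env);
```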
# important-instruction-reminders
Do what has been asked; nothing more, nothing less.
NEVER create files unless they're absolutely necessary for achieving your goal.
ALWAYS prefer editing an existing file to creating a new one.
NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User.
## Biome Linting Instructions
### Linting Rules
- Always use `pnpm run check` or `pnpm run check:fix`
- **Rule: `I don't want Claude to ever run a biome lint fix, only biome lint`**
- This means ONLY use `pnpm run check` (linting without auto-fixing)
- Do NOT use `pnpm run check:fix`
- Claude should understand to ONLY run lint checks, never auto-fix


@ -1,19 +0,0 @@
ok I need you to implement the prds/active/$ARGUMENTS.md prd following it in
order and accomplishing the tasks while referencing the prd, its notes, and recommendations.
make sure to mark off completed tasks as you go.
you should follow the best practices described in documentation/ for database migrations, testing.mdc, handlers.mdc,
etc. Please analyze them before you modify a file.
Particularly you should always reference: documentation/testing.mdc before writing tests.
please analyze all files before proceeding with any implementations.
feel free to explore the codebase while implementing the prd.
you should think hard about your implementation and then implement carefully.
you are not done until the tests for your specific file are finished and run successfully and a cargo check runs successfully.
please reference the prd frequently to ensure you are on track with the work.


@ -1,11 +0,0 @@
I need you to create an integration testing plan for $ARGUMENTS
These are integration tests and I want them to be inline in rust fashion.
If the code is difficult to test, you should suggest refactoring to make it easier to test.
Think really hard about the code, the tests, and the refactoring (if applicable).
Will you come up with test cases and let me review before you write the tests?
Feel free to ask clarifying questions.


@ -1,11 +0,0 @@
I need you to create a unit testing plan for $ARGUMENTS
These are unit tests and I want them to be inline in rust fashion.
If the code is difficult to test, you should suggest refactoring to make it easier to test.
Think really hard about the code, the tests, and the refactoring (if applicable).
Will you come up with test cases and let me review before you write the tests?
Feel free to ask clarifying questions.


@ -1,513 +0,0 @@
use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use agents::{Agent, AgentMessage};
use anyhow::Result;
use async_trait::async_trait;
use database::schema::metric_files;
use database::{
models::{DashboardFile, MetricFile},
pool::get_pg_pool,
schema::{chats, dashboard_files, messages},
};
use diesel::prelude::*;
use diesel_async::RunQueryDsl;
use litellm::{FunctionCall, ToolCall, MessageProgress};
use middleware::AuthenticatedUser;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use uuid::Uuid;
use super::ContextLoader;
// --- Structs for Simulated Tool Call (handling multiple files) ---
#[derive(Serialize, Deserialize, Debug, Clone)]
struct UserManuallyModifiedFileParams {
asset_ids: Vec<Uuid>, // Changed to Vec
}
#[derive(Serialize, Deserialize, Debug, Clone)]
struct ModifiedFileInfo {
asset_id: Uuid,
version_number: i32,
yml_content: String,
}
#[derive(Serialize, Deserialize, Debug, Clone)]
struct UserManuallyModifiedFileOutput {
updated_files: Vec<ModifiedFileInfo>, // Contains details for all updated files
}
// Add a struct to deserialize the search_data_catalog output
#[derive(Deserialize, Debug)]
struct SearchDataCatalogToolOutput {
data_source_id: Option<Uuid>,
// Include other fields if needed for future context, but only data_source_id is required now
}
// --- End Structs ---
pub struct ChatContextLoader {
pub chat_id: Uuid,
}
impl ChatContextLoader {
pub fn new(chat_id: Uuid) -> Self {
Self { chat_id }
}
// Helper function to check for tool usage and set appropriate context
async fn update_context_from_tool_calls(agent: &Arc<Agent>, message: &AgentMessage) {
// Handle tool calls from assistant messages
if let AgentMessage::Assistant {
tool_calls: Some(tool_calls),
..
} = message
{
for tool_call in tool_calls {
match tool_call.function.name.as_str() {
"search_data_catalog" => {
// We will set data_context based on the *response* now,
// but keep this for potential future use or broader context setting.
// agent
// .set_state_value(String::from("data_context"), Value::Bool(true))
// .await;
}
"create_metrics" | "update_metrics" => {
agent
.set_state_value(String::from("metrics_available"), Value::Bool(true))
.await;
}
"create_dashboards" | "update_dashboards" => {
agent
.set_state_value(
String::from("dashboards_available"),
Value::Bool(true),
)
.await;
}
"import_assets" => {
// When we see import_assets, we need to check the content in the corresponding tool response
// This will be handled separately when processing tool messages
}
name if name.contains("file")
|| name.contains("read")
|| name.contains("write")
|| name.contains("edit") =>
{
agent
.set_state_value(String::from("files_available"), Value::Bool(true))
.await;
}
_ => {}
}
}
}
// Handle tool responses - important for import_assets
if let AgentMessage::Tool {
name: Some(tool_name),
content,
..
} = message
{
if tool_name == "import_assets" {
// Parse the tool response to see what was imported
if let Ok(import_result) = serde_json::from_str::<serde_json::Value>(content) {
// Check for files array
if let Some(files) = import_result.get("files").and_then(|f| f.as_array()) {
if !files.is_empty() {
// Set files_available for any imported files
agent
.set_state_value(String::from("files_available"), Value::Bool(true))
.await;
// Check each file to determine its type
let mut has_metrics = false;
let mut has_dashboards = false;
let mut has_datasets = false;
for file in files {
// Check file_type/asset_type to determine what kind of asset this is
let file_type = file
.get("file_type")
.and_then(|ft| ft.as_str())
.or_else(|| file.get("asset_type").and_then(|at| at.as_str()));
tracing::debug!(
"Processing imported file with type: {:?}",
file_type
);
match file_type {
Some("metric") => {
has_metrics = true;
// Check if the metric has dataset references
if let Some(yml_content) =
file.get("yml_content").and_then(|y| y.as_str())
{
if yml_content.contains("dataset")
|| yml_content.contains("datasetIds")
{
has_datasets = true;
}
}
}
Some("dashboard") => {
has_dashboards = true;
// Dashboards often reference metrics too
has_metrics = true;
// Check if the dashboard has dataset references via metrics
if let Some(yml_content) =
file.get("yml_content").and_then(|y| y.as_str())
{
if yml_content.contains("dataset")
|| yml_content.contains("datasetIds")
{
has_datasets = true;
}
}
}
_ => {
tracing::debug!(
"Unknown file type in import_assets: {:?}",
file_type
);
}
}
}
// Set appropriate state values based on what we found
if has_metrics {
tracing::debug!("Setting metrics_available state to true");
agent
.set_state_value(
String::from("metrics_available"),
Value::Bool(true),
)
.await;
}
if has_dashboards {
tracing::debug!("Setting dashboards_available state to true");
agent
.set_state_value(
String::from("dashboards_available"),
Value::Bool(true),
)
.await;
}
if has_datasets {
tracing::debug!("Setting data_context state to true");
agent
.set_state_value(
String::from("data_context"),
Value::Bool(true),
)
.await;
}
}
}
}
}
// NEW: Check for search_data_catalog response and extract data_source_id
if tool_name == "search_data_catalog" {
match serde_json::from_str::<SearchDataCatalogToolOutput>(content) {
Ok(output) => {
if let Some(ds_id) = output.data_source_id {
tracing::debug!(data_source_id = %ds_id, "Found data_source_id in search_data_catalog tool history, caching in agent state.");
// Cache the data_source_id
agent.set_state_value(
"data_source_id".to_string(),
Value::String(ds_id.to_string())
).await;
// Also set data_context flag to true since we found the ID
agent.set_state_value("data_context".to_string(), Value::Bool(true)).await;
} else {
// If the tool ran but didn't return an ID (e.g., no datasets found)
tracing::debug!("search_data_catalog tool ran in history but did not return a data_source_id.");
// Optionally clear or set to null if needed, or just leave as is
// agent.set_state_value("data_source_id".to_string(), Value::Null).await;
}
}
Err(e) => {
tracing::warn!(
error = %e,
content = %content,
"Failed to parse search_data_catalog tool output from chat history."
);
}
}
}
}
}
// Helper function to check if assets modified by tools in history were updated externally
// Returns a list of simulated AgentMessages representing the updates.
async fn check_external_asset_updates(
agent: &Arc<Agent>,
messages: &[AgentMessage],
) -> Result<Vec<AgentMessage>> {
let mut tool_history_versions: HashMap<Uuid, i32> = HashMap::new(); // asset_id -> latest version seen in tool history
// First pass: Find the latest version mentioned for each asset in tool history
for message in messages {
if let AgentMessage::Tool {
name: Some(tool_name),
content,
..
} = message
{
if tool_name == "update_metrics"
|| tool_name == "update_dashboards"
|| tool_name == "create_metrics"
|| tool_name == "create_dashboards"
{
// ASSUMPTION: Content is JSON with "files": [{ "id": "...", "version_number": ... }] or similar
// We need to handle both single object responses and array responses
if let Ok(response_val) = serde_json::from_str::<Value>(content) {
let files_to_process = if let Some(files_array) =
response_val.get("files").and_then(|f| f.as_array())
{
// Handle array of files (like create/update tools)
files_array.clone()
} else if response_val.get("id").is_some()
&& response_val.get("version_number").is_some()
{
// Handle single file object (potential alternative response format?)
vec![response_val]
} else {
// No recognizable file data
vec![]
};
for file_data in files_to_process {
if let (Some(id_val), Some(version_val)) =
(file_data.get("id"), file_data.get("version_number"))
// Look for version_number
{
if let (Some(id_str), Some(version_num)) =
(id_val.as_str(), version_val.as_i64())
{
if let Ok(asset_id) = Uuid::parse_str(id_str) {
let entry =
tool_history_versions.entry(asset_id).or_insert(0);
*entry = (*entry).max(version_num as i32);
}
}
}
}
}
}
}
}
if tool_history_versions.is_empty() {
return Ok(vec![]); // No assets modified by tools in history, nothing to check
}
let mut simulated_messages = Vec::new();
let pool = get_pg_pool();
let mut conn = pool.get().await?;
let asset_ids: Vec<Uuid> = tool_history_versions.keys().cloned().collect();
// Query current full records from DB to get version and content
let current_metrics = metric_files::table
.filter(metric_files::id.eq_any(&asset_ids))
.load::<MetricFile>(&mut conn) // Load full MetricFile
.await?;
let current_dashboards = dashboard_files::table
.filter(dashboard_files::id.eq_any(&asset_ids))
.load::<DashboardFile>(&mut conn) // Load full DashboardFile
.await?;
// Combine results for easier iteration
let all_current_assets: HashMap<Uuid, (i32, String)> = current_metrics
.into_iter()
.map(|mf| {
let version = mf.version_history.get_version_number();
let yml = serde_yaml::to_string(&mf.content).unwrap_or_default();
(mf.id, (version, yml))
})
.chain(current_dashboards.into_iter().map(|df| {
let version = df.version_history.get_version_number();
let yml = serde_yaml::to_string(&df.content).unwrap_or_default();
(df.id, (version, yml))
}))
.collect();
// --- Refactored Logic: Collect all modified assets first ---
let mut modified_assets_info: Vec<ModifiedFileInfo> = Vec::new();
for (asset_id, tool_version) in &tool_history_versions {
if let Some((db_version, db_yml_content)) = all_current_assets.get(asset_id) {
// Compare DB version with the latest version seen in tool history
if *db_version > *tool_version {
tracing::warn!(
asset_id = %asset_id,
db_version = %db_version,
tool_version = %tool_version,
"Asset updated externally since last tool call in chat history. Adding to simulated update."
);
modified_assets_info.push(ModifiedFileInfo {
asset_id: *asset_id,
version_number: *db_version,
yml_content: db_yml_content.clone(),
});
}
}
}
// --- If any assets were modified, create ONE simulated call/response pair ---
if !modified_assets_info.is_empty() {
let tool_name = "user_manually_modified_file".to_string();
let modified_ids: Vec<Uuid> = modified_assets_info.iter().map(|i| i.asset_id).collect();
// --- Generate Deterministic, LLM-like IDs ---
// Create a namespace UUID (can be any constant UUID)
let namespace_uuid = Uuid::parse_str("6ba7b810-9dad-11d1-80b4-00c04fd430c8").unwrap();
// Generate UUID v5 based on asset ID and version for determinism
let call_seed = format!("{}-{}", modified_assets_info[0].asset_id, modified_assets_info[0].version_number);
let deterministic_uuid = Uuid::new_v5(&namespace_uuid, call_seed.as_bytes());
// Use the first part of the UUID for the ID string
let id_suffix = deterministic_uuid.simple().to_string()[..27].to_string(); // Adjust length as needed
// 1. ID for the ToolCall (and Assistant message)
let tool_call_id = format!("call_{}", id_suffix);
// 2. ID for the Tool response message itself (make it slightly different)
let tool_response_msg_id = format!("tool_{}", id_suffix);
// --- End ID Generation ---
// --- Create Simulated Tool Call (Params) ---
let params = UserManuallyModifiedFileParams { asset_ids: modified_ids };
let params_json = serde_json::to_string(&params)?;
let assistant_message = AgentMessage::Assistant {
id: Some(tool_call_id.clone()), // Use ToolCall ID for Assistant Message ID
content: None,
tool_calls: Some(vec![ToolCall {
id: tool_call_id.clone(), // Use ID #1 for the ToolCall's ID
call_type: "function".to_string(),
function: FunctionCall {
name: tool_name.clone(),
arguments: params_json,
},
code_interpreter: None,
retrieval: None,
}]),
name: None,
progress: MessageProgress::Complete,
initial: false,
};
simulated_messages.push(assistant_message);
// --- Create Simulated Tool Response (Output) ---
let output = UserManuallyModifiedFileOutput { updated_files: modified_assets_info };
let output_json = serde_json::to_string(&output)?;
let tool_message = AgentMessage::Tool {
tool_call_id: tool_call_id, // Use ID #1 for the ToolCall
name: Some(tool_name),
content: output_json,
id: Some(tool_response_msg_id), // Use ID #2 for the Tool message's ID
progress: MessageProgress::Complete,
};
simulated_messages.push(tool_message);
}
Ok(simulated_messages)
}
}
#[async_trait]
impl ContextLoader for ChatContextLoader {
async fn load_context(
&self,
user: &AuthenticatedUser,
agent: &Arc<Agent>,
) -> Result<Vec<AgentMessage>> {
let mut conn = get_pg_pool().get().await?;
// First verify the chat exists and user has access
let chat = chats::table
.filter(chats::id.eq(self.chat_id))
.filter(chats::created_by.eq(&user.id))
.filter(chats::deleted_at.is_null())
.first::<database::models::Chat>(&mut conn)
.await?;
// Get only the most recent message for the chat
let message = match messages::table
.filter(messages::chat_id.eq(chat.id))
.filter(messages::deleted_at.is_null())
.order_by(messages::created_at.desc())
.first::<database::models::Message>(&mut conn)
.await
{
Ok(message) => message,
Err(diesel::NotFound) => return Ok(vec![]),
Err(e) => return Err(anyhow::anyhow!("Failed to get message: {}", e)),
};
// Convert the single message's history
let mut agent_messages = Vec::new();
let raw_messages =
match serde_json::from_value::<Vec<AgentMessage>>(message.raw_llm_messages) {
Ok(messages) => messages,
Err(e) => {
tracing::error!(
"Failed to parse raw LLM messages for chat {}: {}",
chat.id,
e
);
Vec::new() // Return empty if parsing fails
}
};
// Track seen message IDs to avoid duplicates from potential re-parsing/saving issues
let mut seen_ids: HashSet<String> = HashSet::new();
// Process messages to update context flags and collect unique messages
for agent_message in &raw_messages {
Self::update_context_from_tool_calls(agent, agent_message).await;
if let Some(id) = agent_message.get_id() {
if seen_ids.insert(id.to_string()) {
agent_messages.push(agent_message.clone());
}
} else {
agent_messages.push(agent_message.clone());
}
}
// Check for external updates and get simulated messages
let simulated_update_messages =
match Self::check_external_asset_updates(agent, &raw_messages).await {
Ok(sim_messages) => sim_messages,
Err(e) => {
tracing::error!("Failed to check for external asset updates: {}", e);
Vec::new() // Don't fail, just log and return no simulated messages
}
};
// Append simulated messages, ensuring they haven't been seen before
for sim_message in simulated_update_messages {
if let Some(id) = sim_message.get_id() {
if seen_ids.insert(id.to_string()) {
agent_messages.push(sim_message);
}
} else {
// Should not happen for our simulated messages, but handle defensively
agent_messages.push(sim_message);
}
}
Ok(agent_messages)
}
}


@ -45,6 +45,13 @@ api/target/
api/build/
api/dist/
# Generated by Cargo
# will have compiled files and executables
debug/
target/
build/
dist/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock

Some files were not shown because too many files have changed in this diff.