updated cursor and claude

commit b8fd636740
parent 7d4aff5802
dal, 2025-04-01 12:13:40 -06:00
GPG Key ID: 16F4B0E1E9F61122
9 changed files with 2153 additions and 319 deletions


@@ -0,0 +1,357 @@
---
description:
globs:
alwaysApply: false
---
# Database Migrations Guide
This document provides a comprehensive guide on how to create and manage database migrations in our project.
## Overview
Database migrations are a way to evolve your database schema over time. Each migration represents a specific change to the database schema, such as creating a table, adding a column, or modifying an enum type. Migrations are version-controlled and can be applied or reverted as needed.
In our project, we use [Diesel](mdc:https:/diesel.rs) for handling database migrations. Diesel is an ORM and query builder for Rust that helps us manage our database schema changes in a safe and consistent way.
## Migration Workflow
### 1. Creating a New Migration
To create a new migration, use the Diesel CLI:
```bash
diesel migration generate name_of_migration
```
This command creates a new directory in the `migrations` folder with a timestamp prefix (e.g., `2025-03-06-232923_name_of_migration`). Inside this directory, two files are created:
- `up.sql`: Contains SQL statements to apply the migration
- `down.sql`: Contains SQL statements to revert the migration
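For example, running the command above would produce a layout like this (timestamp taken from the example name):

```
migrations/
└── 2025-03-06-232923_name_of_migration/
    ├── up.sql
    └── down.sql
```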
### 2. Writing Migration SQL
#### Up Migration
The `up.sql` file should contain all the SQL statements needed to apply your changes to the database. For example:
```sql
-- Create a new table
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR NOT NULL,
    email VARCHAR NOT NULL UNIQUE,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
-- Add a column to an existing table
ALTER TABLE organizations
ADD COLUMN description TEXT;
-- Create an enum type
CREATE TYPE user_role_enum AS ENUM ('admin', 'member', 'guest');
```
#### Down Migration
The `down.sql` file should contain SQL statements that revert the changes made in `up.sql`. It should be written in the reverse order of the operations in `up.sql`:
```sql
-- Remove the enum type
DROP TYPE user_role_enum;
-- Remove the column
ALTER TABLE organizations
DROP COLUMN description;
-- Drop the table
DROP TABLE users;
```
### 3. Running Migrations
To apply all pending migrations:
```bash
diesel migration run
```
This command:
1. Executes the SQL in the `up.sql` files of all pending migrations
2. Updates the `__diesel_schema_migrations` table to track which migrations have been applied
3. Regenerates the `schema.rs` file to reflect the current database schema
### 4. Reverting Migrations
To revert the most recent migration:
```bash
diesel migration revert
```
This executes the SQL in the `down.sql` file of the most recently applied migration.
### 5. Checking Migration Status
To see which migrations have been applied and which are pending:
```bash
diesel migration list
```
## Working with Enums
We prefer using enums when possible for fields with a fixed set of values. Here's how to work with enums in our project:
### 1. Creating an Enum in SQL Migration
```sql
-- In up.sql
CREATE TYPE asset_type_enum AS ENUM ('dashboard', 'dataset', 'metric');
-- In down.sql
DROP TYPE asset_type_enum;
```
### 2. Adding Values to an Existing Enum
```sql
-- In up.sql
ALTER TYPE asset_type_enum ADD VALUE IF NOT EXISTS 'chat';
-- In down.sql
DELETE FROM pg_enum
WHERE enumlabel = 'chat'
AND enumtypid = (SELECT oid FROM pg_type WHERE typname = 'asset_type_enum');
```
### 3. Implementing the Enum in Rust
After running the migration, you need to update the `enums.rs` file to reflect the changes:
```rust
#[derive(
    Serialize,
    Deserialize,
    Debug,
    Clone,
    Copy,
    PartialEq,
    Eq,
    diesel::AsExpression,
    diesel::FromSqlRow,
)]
#[diesel(sql_type = sql_types::AssetTypeEnum)]
#[serde(rename_all = "camelCase")]
pub enum AssetType {
    Dashboard,
    Dataset,
    Metric,
    Chat,
}

impl ToSql<sql_types::AssetTypeEnum, Pg> for AssetType {
    fn to_sql<'b>(&'b self, out: &mut Output<'b, '_, Pg>) -> serialize::Result {
        match *self {
            AssetType::Dashboard => out.write_all(b"dashboard")?,
            AssetType::Dataset => out.write_all(b"dataset")?,
            AssetType::Metric => out.write_all(b"metric")?,
            AssetType::Chat => out.write_all(b"chat")?,
        }
        Ok(IsNull::No)
    }
}

impl FromSql<sql_types::AssetTypeEnum, Pg> for AssetType {
    fn from_sql(bytes: PgValue<'_>) -> deserialize::Result<Self> {
        match bytes.as_bytes() {
            b"dashboard" => Ok(AssetType::Dashboard),
            b"dataset" => Ok(AssetType::Dataset),
            b"metric" => Ok(AssetType::Metric),
            b"chat" => Ok(AssetType::Chat),
            _ => Err("Unrecognized enum variant".into()),
        }
    }
}
```
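The two impls above boil down to a bijective mapping between Rust variants and SQL labels. Here is a dependency-free sketch of just that mapping logic, useful for unit-testing the round trip (names mirror the example above; this is illustrative, not the project's actual code):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum AssetType {
    Dashboard,
    Dataset,
    Metric,
    Chat,
}

impl AssetType {
    /// The label stored in the Postgres enum, mirroring `ToSql` above.
    fn as_sql_label(self) -> &'static str {
        match self {
            AssetType::Dashboard => "dashboard",
            AssetType::Dataset => "dataset",
            AssetType::Metric => "metric",
            AssetType::Chat => "chat",
        }
    }

    /// The inverse mapping, mirroring `FromSql` above.
    fn from_sql_label(label: &str) -> Result<Self, String> {
        match label {
            "dashboard" => Ok(AssetType::Dashboard),
            "dataset" => Ok(AssetType::Dataset),
            "metric" => Ok(AssetType::Metric),
            "chat" => Ok(AssetType::Chat),
            other => Err(format!("Unrecognized enum variant: {other}")),
        }
    }
}

fn main() {
    // Round-trip every variant to catch a label that only maps one way.
    for v in [AssetType::Dashboard, AssetType::Dataset, AssetType::Metric, AssetType::Chat] {
        assert_eq!(AssetType::from_sql_label(v.as_sql_label()), Ok(v));
    }
    assert!(AssetType::from_sql_label("unknown").is_err());
}
```

Keeping both arms in one place makes it easy to check that every variant round-trips whenever a new value is added to the enum.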
## Working with JSON Types
When working with JSON data in the database, we map it to Rust structs. Here's how:
### 1. Adding a JSON Column in Migration
```sql
-- In up.sql
ALTER TABLE metric_files
ADD COLUMN version_history JSONB NOT NULL DEFAULT '{}'::jsonb;
-- In down.sql
ALTER TABLE metric_files
DROP COLUMN version_history;
```
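Once the column exists, the JSON document can also be inspected and patched directly with Postgres's jsonb operators. An illustrative pair of queries against the `metric_files` table (the `version` key and the UUID are made up for the example):

```sql
-- Read a single key out of the JSON document
SELECT version_history ->> 'version' AS version
FROM metric_files
WHERE id = '00000000-0000-0000-0000-000000000000';

-- Patch one key in place without rewriting the whole document
UPDATE metric_files
SET version_history = jsonb_set(version_history, '{version}', '"2"')
WHERE id = '00000000-0000-0000-0000-000000000000';
```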
### 2. Creating a Type for the JSON Data
Create a new file in the `libs/database/src/types` directory or update an existing one:
```rust
// In libs/database/src/types/version_history.rs
use std::io::Write;

use diesel::{
    deserialize::FromSql,
    pg::Pg,
    serialize::{IsNull, Output, ToSql},
    sql_types::Jsonb,
    AsExpression, FromSqlRow,
};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, FromSqlRow, AsExpression, Clone)]
#[diesel(sql_type = Jsonb)]
pub struct VersionHistory {
    pub version: String,
    pub updated_at: String,
    pub content: serde_json::Value,
}

impl FromSql<Jsonb, Pg> for VersionHistory {
    fn from_sql(bytes: diesel::pg::PgValue) -> diesel::deserialize::Result<Self> {
        // Deserialize the raw jsonb value into a serde_json::Value first,
        // then into the concrete struct.
        let json = <serde_json::Value as FromSql<Jsonb, Pg>>::from_sql(bytes)?;
        Ok(serde_json::from_value(json)?)
    }
}

impl ToSql<Jsonb, Pg> for VersionHistory {
    fn to_sql<'b>(&'b self, out: &mut Output<'b, '_, Pg>) -> diesel::serialize::Result {
        // Serialize through serde_json::Value and delegate to its ToSql impl;
        // reborrow() is needed because `json` is a local value.
        let json = serde_json::to_value(self)?;
        <serde_json::Value as ToSql<Jsonb, Pg>>::to_sql(&json, &mut out.reborrow())
    }
}
```
### 3. Updating the `mod.rs` File
Make sure to export your new type in the `libs/database/src/types/mod.rs` file:
```rust
pub mod version_history;
pub use version_history::*;
```
### 4. Using the Type in Models
Update the corresponding model in `models.rs` to use your new type:
```rust
#[derive(Queryable, Insertable, Identifiable, Debug, Clone, Serialize)]
#[diesel(table_name = metric_files)]
pub struct MetricFile {
    pub id: Uuid,
    pub name: String,
    pub content: String,
    pub organization_id: Uuid,
    pub created_by: Uuid,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
    pub deleted_at: Option<DateTime<Utc>>,
    pub version_history: VersionHistory,
}
```
## Best Practices
1. **Keep migrations small and focused**: Each migration should do one logical change to the schema.
2. **Test migrations before applying to production**: Always test migrations in a development or staging environment first.
3. **Always provide a down migration**: Make sure your `down.sql` properly reverts all changes made in `up.sql`.
4. **Use transactions**: Wrap complex migrations in transactions to ensure atomicity.
5. **Be careful with data migrations**: If you need to migrate data (not just schema), consider using separate migrations or Rust code.
6. **Document your migrations**: Add comments to your SQL files explaining what the migration does and why.
7. **Version control your migrations**: Always commit your migrations to version control.
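For point 4, here is a sketch of an explicitly transactional migration. Diesel already runs each migration inside a transaction on PostgreSQL by default, so explicit `BEGIN`/`COMMIT` mainly matters for SQL run outside the migration runner; the `last_login` column is illustrative:

```sql
BEGIN;

-- Schema change and backfill succeed or fail together
ALTER TABLE users
ADD COLUMN last_login TIMESTAMP WITH TIME ZONE;

UPDATE users
SET last_login = updated_at;

COMMIT;
```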
## Common Migration Patterns
### Adding a New Table
```sql
-- up.sql
CREATE TABLE new_table (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
-- down.sql
DROP TABLE new_table;
```
### Adding a Column
```sql
-- up.sql
ALTER TABLE existing_table
ADD COLUMN new_column VARCHAR;
-- down.sql
ALTER TABLE existing_table
DROP COLUMN new_column;
```
### Creating a Join Table
```sql
-- up.sql
CREATE TABLE table_a_to_table_b (
    table_a_id UUID NOT NULL REFERENCES table_a(id),
    table_b_id UUID NOT NULL REFERENCES table_b(id),
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    PRIMARY KEY (table_a_id, table_b_id)
);
-- down.sql
DROP TABLE table_a_to_table_b;
```
### Working with Constraints
```sql
-- up.sql
ALTER TABLE users
ADD CONSTRAINT unique_email UNIQUE (email);
-- down.sql
ALTER TABLE users
DROP CONSTRAINT unique_email;
```
## Troubleshooting
### Migration Failed to Apply
If a migration fails to apply, Diesel will stop and not apply any further migrations. You'll need to fix the issue and try again.
### Schema Drift
If your `schema.rs` doesn't match the actual database schema, you can regenerate it:
```bash
diesel print-schema > libs/database/src/schema.rs
```
### Fixing a Bad Migration
If you've applied a migration that has errors:
1. Fix the issues in your `up.sql` file
2. Run `diesel migration revert` to undo the migration
3. Run `diesel migration run` to apply the fixed migration
## Conclusion
Following these guidelines will help maintain a clean and consistent database schema evolution process. Remember that migrations are part of your codebase and should be treated with the same care as any other code.


@@ -1,143 +1,70 @@
 ---
-description: These are global rules and recommendations for the rust server.
+description:
 globs:
 alwaysApply: true
 ---
-# Global Rules and Project Structure
+# Buster API Repository Navigation Guide
+## Row Limit Implementation Notes
+All database query functions in the query_engine library have been updated to respect a 5000 row limit by default. The limit can be overridden by passing an explicit limit value. This is implemented in the libs/query_engine directory.
-## Project Overview
-This is a Rust web server project built with Axum, focusing on high performance, safety, and maintainability.
+## Documentation
+The project's detailed documentation is in the `/documentation` directory:
+- `handlers.mdc` - Handler patterns
+- `libs.mdc` - Library construction guidelines
+- `rest.mdc` - REST API formatting
+- `testing.mdc` - Testing standards
+- `tools.mdc` - Tools documentation
+- `websockets.mdc` - WebSocket patterns
+While these files contain best practices for writing tests, REST patterns, etc., **each subdirectory should have its own README.md or CLAUDE.md** that should be referenced first when working in that specific area. These subdirectory-specific guides often contain implementation details and patterns specific to that component.
-## Project Structure
-- `src/`
-  - `routes/`
-    - `rest/` - REST API endpoints using Axum
-      - `routes/` - Individual route modules
-    - `ws/` - WebSocket handlers and related functionality
-  - `database/` - Database models, schema, and connection management
-  - `main.rs` - Application entry point and server setup
-## Implementation
-When working with prds, you should always mark your progress off in them as you build.
+## Repository Structure
+- `src/` - Main server code
+  - `routes/` - API endpoints (REST, WebSocket)
+  - `utils/` - Shared utilities
+  - `types/` - Common type definitions
+- `libs/` - Shared libraries
+  - Each lib has its own Cargo.toml and docs
+- `migrations/` - Database migrations
+- `tests/` - Integration tests
+- `documentation/` - Detailed docs
+- `prds/` - Product requirements
-## Database Connectivity
-- The primary database connection is managed through `get_pg_pool()`, which returns a lazy static `PgPool`
-- Always use this pool for database connections to ensure proper connection management
-- Example usage:
+## Build Commands
+- `make dev` - Start development
+- `make stop` - Stop development
+- `cargo test -- --test-threads=1 --nocapture` - Run tests
+- `cargo clippy` - Run linter
+- `cargo build` - Build project
+## Core Guidelines
+- Use `anyhow::Result` for error handling
+- Group imports (std lib, external, internal)
+- Put shared types in `types/`, route-specific types in route files
+- Use snake_case for variables/functions, CamelCase for types
+- Never log secrets or sensitive data
+- All dependencies inherit from workspace using `{ workspace = true }`
+- Use database connection pool from `get_pg_pool().get().await?`
+- Write tests with `tokio::test` for async tests
+## Common Database Pattern
 ```rust
-let mut conn = get_pg_pool().get().await?;
+let pool = get_pg_pool();
+let mut conn = pool.get().await?;
+diesel::update(table)
+    .filter(conditions)
+    .set(values)
+    .execute(&mut conn)
+    .await?
 ```
-## Code Style and Best Practices
-### References and Memory Management
-- Prefer references over owned values when possible
-- Avoid unnecessary `.clone()` calls
-- Use `&str` instead of `String` for function parameters when the string doesn't need to be owned
-### Importing packages/crates
-- Please make the dependency as short as possible in the actual logic by importing the crate/package.
-### Database Operations
-- Use Diesel for database migrations and query building
-- Migrations are stored in the `migrations/` directory
-### Concurrency Guidelines
-- Prioritize concurrent operations, especially for:
-  - API requests
-  - File operations
-- Optimize database connection usage:
-  - Batch operations where possible
-  - Build queries/parameters before executing database operations
-  - Use bulk inserts/updates instead of individual operations
+## Common Concurrency Pattern
 ```rust
-// Preferred: Bulk operation
-let items: Vec<_> = prepare_items();
-diesel::insert_into(table)
-    .values(&items)
-    .execute(conn)?;
-
-// Avoid: Individual operations in a loop
-for item in items {
-    diesel::insert_into(table)
-        .values(&item)
-        .execute(conn)?;
-}
-```
-### Error Handling
-- Never use `.unwrap()` or `.expect()` in production code
-- Always handle errors appropriately using:
-  - The `?` operator for error propagation
-  - `match` statements when specific error cases need different handling
-- Use `anyhow` for error handling:
-  - Prefer `anyhow::Result<T>` as the return type for functions that can fail
-  - Use `anyhow::Error` for error types
-  - Use `anyhow!` macro for creating custom errors
-```rust
-use anyhow::{Result, anyhow};
-
-// Example of proper error handling
-pub async fn process_data(input: &str) -> Result<Data> {
-    // Use ? for error propagation
-    let parsed = parse_input(input)?;
-
-    // Use match when specific error cases need different handling
-    match validate_data(&parsed) {
-        Ok(valid_data) => Ok(valid_data),
-        Err(e) => Err(anyhow!("Data validation failed: {}", e))
-    }
-}
-
-// Avoid this:
-// let data = parse_input(input).unwrap(); // ❌ Never use unwrap
-```
-### API Design
-- REST endpoints should be in `routes/rest/routes/`
-- WebSocket handlers should be in `routes/ws/`
-- Use proper HTTP status codes
-- Implement proper validation for incoming requests
-### Testing
-- Write unit tests for critical functionality
-- Use integration tests for API endpoints
-- Mock external dependencies when appropriate
-## Common Patterns
-### Database Queries
-```rust
-use diesel::prelude::*;
-
-// Example of a typical database query
-pub async fn get_item(id: i32) -> Result<Item> {
-    let pool = get_pg_pool();
-    let conn = pool.get().await?;
-
-    items::table
-        .filter(items::id.eq(id))
-        .first(&conn)
-        .map_err(Into::into)
-}
-```
-### Concurrent Operations
-```rust
-use futures::future::try_join_all;
-
-// Example of concurrent processing
 let futures: Vec<_> = items
     .into_iter()
     .map(|item| process_item(item))
     .collect();
 let results = try_join_all(futures).await?;
 ```
+Remember to always consider:
+1. Connection pool limits when designing concurrent operations
+2. Transaction boundaries for data consistency
+3. Error propagation and cleanup
+4. Memory usage and ownership
+5. Please use comments to help document your code and make it more readable.


@@ -1,5 +1,5 @@
 ---
-description: This is helpul docs for buildng hanlders in the project.l
+description: This is helpful documentation for building handlers in the project.
 globs: libs/handlers/**/*.rs
 alwaysApply: false
 ---
@@ -22,7 +22,6 @@ Handlers are the core business logic components that implement functionality use
 - Handler functions should follow the same pattern: `[action]_[resource]_handler`
   - Example: `get_chat_handler()`, `delete_message_handler()`
 - Type definitions should be clear and descriptive
-  - Request types: `[Action][Resource]Request`
   - Response types: `[Action][Resource]Response`

 ## Handler Implementation Guidelines
@@ -30,15 +29,22 @@
 ### Function Signatures
 ```rust
 pub async fn action_resource_handler(
-    // Parameters typically include:
-    request: ActionResourceRequest, // For REST/WS request data
-    user: User,                     // For authenticated user context
+    // Parameters should be decoupled from request types:
+    resource_id: Uuid,     // Individual parameters instead of request objects
+    options: Vec<String>,  // Specific data needed for the operation
+    user: User,            // For authenticated user context
     // Other contextual parameters as needed
 ) -> Result<ActionResourceResponse> {
     // Implementation
 }
 ```

+### Decoupling from Request Types
+- Handlers should NOT take request types as inputs
+- Instead, use individual parameters that represent the exact data needed
+- This keeps handlers flexible and reusable across different contexts
+- The return type can be a specific response type, as this is what the handler produces

 ### Error Handling
 - Use `anyhow::Result<T>` for return types
 - Provide descriptive error messages with context
@@ -59,6 +65,7 @@ match operation() {
 ### Database Operations
 - Use the connection pool: `get_pg_pool().get().await?`
 - Run concurrent operations when possible
+- For related operations, use sequential operations with error handling
 - Handle database-specific errors appropriately
 - Example:
 ```rust
@@ -72,6 +79,24 @@ diesel::update(table)
     .await?
 ```

+Example with related operations:
+```rust
+let pool = get_pg_pool();
+let mut conn = pool.get().await?;
+
+// First operation
+diesel::insert_into(table1)
+    .values(&values1)
+    .execute(&mut conn)
+    .await?;
+
+// Second related operation
+diesel::update(table2)
+    .filter(conditions)
+    .set(values2)
+    .execute(&mut conn)
+    .await?;
+```

 ### Concurrency
 - Use `tokio::spawn` for concurrent operations
 - Use `futures::try_join_all` for parallel processing
@@ -109,10 +134,9 @@ tracing::info!(
 - Example:
 ```rust
 #[derive(Debug, Serialize, Deserialize)]
-pub struct ResourceRequest {
+pub struct ResourceResponse {
     pub id: Uuid,
     pub name: String,
-    #[serde(default)]
     pub options: Vec<String>,
 }
 ```
@@ -125,34 +149,46 @@ pub struct ResourceRequest {
 ```rust
 // In REST route
 pub async fn rest_endpoint(
-    Json(payload): Json<HandlerRequest>,
+    Json(payload): Json<RestRequest>,
     user: User,
 ) -> Result<Json<HandlerResponse>, AppError> {
-    let result = handler::action_resource_handler(payload, user).await?;
+    // Extract specific parameters from the request
+    let result = handler::action_resource_handler(
+        payload.id,
+        payload.options,
+        user
+    ).await?;
     Ok(Json(result))
 }

 // In WebSocket handler
 async fn ws_message_handler(message: WsMessage, user: User) -> Result<WsResponse> {
-    let payload: HandlerRequest = serde_json::from_str(&message.payload)?;
-    let result = handler::action_resource_handler(payload, user).await?;
+    let payload: WsRequest = serde_json::from_str(&message.payload)?;
+    // Extract specific parameters from the request
+    let result = handler::action_resource_handler(
+        payload.id,
+        payload.options,
+        user
+    ).await?;
     Ok(WsResponse::new(result))
 }
 ```

 ## CLI Integration
-- Handler types should be reusable in CLI commands
+- CLI commands should extract specific parameters from arguments
 - CLI commands should use the same handlers as the API when possible
 - Example:
 ```rust
 // In CLI command
 pub fn cli_command(args: &ArgMatches) -> Result<()> {
-    let request = HandlerRequest {
-        // Parse from args
-    };
+    // Extract parameters from args
+    let id = Uuid::parse_str(args.value_of("id").unwrap())?;
+    let options = args.values_of("options")
+        .map(|vals| vals.map(String::from).collect())
+        .unwrap_or_default();
     let result = tokio::runtime::Runtime::new()?.block_on(async {
-        handler::action_resource_handler(request, mock_user()).await
+        handler::action_resource_handler(id, options, mock_user()).await
     })?;
     println!("{}", serde_json::to_string_pretty(&result)?);
@@ -169,11 +205,12 @@ pub fn cli_command(args: &ArgMatches) -> Result<()> {
 #[tokio::test]
 async fn test_action_resource_handler() {
     // Setup test data
-    let request = HandlerRequest { /* ... */ };
+    let id = Uuid::new_v4();
+    let options = vec!["option1".to_string(), "option2".to_string()];
     let user = mock_user();

     // Call handler
-    let result = action_resource_handler(request, user).await;
+    let result = action_resource_handler(id, options, user).await;

     // Assert expectations
     assert!(result.is_ok());


@@ -1,6 +1,6 @@
 ---
 description: This is helpful for building libs for our web server to interact with.
-globs: libs/**/*.{rs,toml}
+globs: libs/**/*.{rs
 alwaysApply: false
 ---
@@ -13,7 +13,7 @@ libs/
 │   ├── Cargo.toml      # Library-specific manifest
 │   ├── src/
 │   │   ├── lib.rs      # Library root
-│   │   ├── types.rs/   # Data structures and types
+│   │   ├── types.rs    # Data structures and types
 │   │   ├── utils/      # Utility functions
 │   │   └── errors.rs   # Custom error types
 │   └── tests/          # Integration tests


@@ -1,5 +1,5 @@
 ---
-description: This is helpful for building and designing prds for our application and how to write them. Refe
+description: This is helpful for building and designing PRDs for our application and how to write them.
 globs: prds/**/*.md
 alwaysApply: false
 ---
@@ -16,29 +16,107 @@ All PRDs should be stored in the `/prds` directory with the following structure:
 /prds
 ├── template.md    # The master template for all PRDs
 ├── active/        # Active/In-progress PRDs
-│   ├── feature_auth.md
-│   └── api_deployment.md
+│   ├── project_feature_name.md      # Project-level PRD
+│   ├── api_feature_component1.md    # Sub-PRD for component 1
+│   └── api_feature_component2.md    # Sub-PRD for component 2
 ├── completed/     # Completed PRDs that have been shipped
-│   ├── feature_user_auth.md
-│   └── api_deployment.md
+│   ├── project_completed_feature.md
+│   └── api_completed_component.md
 └── archived/      # Archived/Deprecated PRDs
 ```

 ### Naming Convention
 - Use snake_case for file names
 - Include a prefix for the type of change:
+  - `project_` for project-level PRDs that contain multiple sub-PRDs
   - `feature_` for new features
   - `enhancement_` for improvements
   - `fix_` for bug fixes
   - `refactor_` for code refactoring
   - `api_` for API changes

+## Project PRDs and Sub-PRDs
+
+### Project PRD Structure
+Project PRDs serve as the main document for large features that require multiple components or endpoints. They should:
+1. Provide a high-level overview of the entire feature
+2. Break down the implementation into logical components
+3. Reference individual sub-PRDs for each component
+4. Track the status of each sub-PRD
+5. Define dependencies between sub-PRDs
+
+Example project PRD sections:
+```markdown
+## Implementation Plan
+The implementation will be broken down into six separate PRDs, each focusing on a specific endpoint:
+1. [Add Dashboard to Collections REST Endpoint](mdc:api_add_dashboards_to_collection.md)
+2. [Remove Dashboard from Collections REST Endpoint](mdc:api_remove_dashboards_from_collection.md)
+3. [Add Metric to Collections REST Endpoint](mdc:api_add_metrics_to_collection.md)
+4. [Remove Metric from Collections REST Endpoint](mdc:api_remove_metrics_from_collection.md)
+5. [Add Assets to Collection REST Endpoint](mdc:api_add_assets_to_collection.md)
+6. [Remove Assets from Collection REST Endpoint](mdc:api_remove_assets_from_collection.md)
+```
+
+### Sub-PRD Structure
+Sub-PRDs focus on specific components of the larger project. They should:
+1. Reference the parent project PRD
+2. Focus on detailed implementation of a specific component
+3. Include all technical details required for implementation
+4. Be independently implementable (when possible)
+5. Follow the standard PRD template
+
+### Enabling Concurrent Development
+The project PRD and sub-PRD structure is designed to enable efficient concurrent development by:
+1. **Clear Component Boundaries**: Each sub-PRD should have well-defined boundaries that minimize overlap with other components.
+2. **Explicit Dependencies**: The project PRD should clearly state which sub-PRDs depend on others, allowing teams to plan their work accordingly.
+3. **Interface Definitions**: Each sub-PRD should define clear interfaces for how other components interact with it, reducing the risk of integration issues.
+4. **Conflict Identification**: The project PRD should identify potential areas of conflict between concurrently developed components and provide strategies to mitigate them.
+5. **Integration Strategy**: The project PRD should define how and when components will be integrated, including any feature flag strategies to allow incomplete features to be merged without affecting production.
+
+### Example Workflow
+1. **Project Planning**:
+   - Create the project PRD with a clear breakdown of components
+   - Define dependencies and development order
+   - Identify which components can be developed concurrently
+2. **Development Kickoff**:
+   - Begin work on foundation components that others depend on
+   - Once foundation is complete, start concurrent development of independent components
+   - Regularly update the project PRD with status changes
+3. **Integration**:
+   - Follow the integration strategy defined in the project PRD
+   - Address any conflicts that arise during integration
+   - Update the project PRD with lessons learned
+4. **Completion**:
+   - Move completed PRDs to the `/prds/completed` directory
+   - Update the project PRD to reflect completion
+   - Document any deviations from the original plan
+
 ## Using the Template

 ### Getting Started
-1. Copy [template.md](mdc:prds/template.md) to create a new PRD
-2. Place it in the `/prds/active` directory
-3. Fill out each section following the template's comments and guidelines
+1. For a new project with multiple components:
+   - Create a project-level PRD using [project_template.md](mdc:prds/project_template.md)
+   - Place it in the `/prds/active` directory with prefix `project_`
+   - Create sub-PRDs for each component using [sub_prd_template.md](mdc:prds/sub_prd_template.md) with appropriate prefixes
+2. For a standalone feature:
+   - Copy [template.md](mdc:prds/template.md) to create a new PRD
+   - Place it in the `/prds/active` directory
+   - Fill out each section following the template's guidelines

 ### Key Sections to Focus On
 The template [template.md](mdc:prds/template.md) provides comprehensive sections. Pay special attention to:
@ -59,6 +137,7 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
- Include clear success criteria - Include clear success criteria
- List dependencies between phases - List dependencies between phases
- Provide testing strategy for each phase - Provide testing strategy for each phase
- For project PRDs, reference all sub-PRDs with their status
4. **Testing Strategy** 4. **Testing Strategy**
- Unit test requirements - Unit test requirements
@@ -66,6 +145,26 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
 ## Best Practices

+### Project PRD Best Practices
+1. Keep the project PRD focused on high-level architecture and component relationships
+2. Clearly define the scope of each sub-PRD
+3. Maintain a status indicator for each sub-PRD (✅ Complete, ⏳ In Progress, 🔜 Upcoming)
+4. Update the project PRD when sub-PRDs are completed
+5. Include a visual representation of component relationships when possible
+6. Define clear interfaces between components
+7. **Explicitly define the order in which sub-PRDs should be implemented**
+8. **Identify which sub-PRDs can be developed concurrently without conflicts**
+9. **Document dependencies between sub-PRDs to prevent blocking issues**
+10. **Provide strategies for avoiding conflicts during concurrent development**
+11. **Establish clear integration points for components developed in parallel**
+
+### Sub-PRD Best Practices
+1. Always reference the parent project PRD
+2. Focus on detailed implementation of a specific component
+3. Include all technical details required for implementation
+4. Ensure consistency with other sub-PRDs in the same project
+5. Follow the standard PRD template structure
+
 ### Documentation
 1. Use clear, concise language
 2. Include code examples where relevant
@@ -80,10 +179,10 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
    - Deprecated PRDs → `/prds/archived`
 2. Update status section regularly:
    - Completed items
    - In Progress items
-   - 🔜 Upcoming items
+   - Upcoming items
    - Known Issues

 ### Review Process
 1. Technical review
@ -106,17 +205,26 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
5. No rollback plan 5. No rollback plan
6. Missing security considerations 6. Missing security considerations
7. Undefined monitoring metrics 7. Undefined monitoring metrics
8. Inconsistencies between project PRD and sub-PRDs
9. Overlapping responsibilities between sub-PRDs
10. Missing dependencies between sub-PRDs
## Example PRDs
Reference these example PRDs for guidance:
- Project PRD: [Collections REST Endpoints](mdc:prds/active/project_collections_rest_endpoints.md)
- Sub-PRD: [Add Metrics to Collection](mdc:prds/active/api_add_metrics_to_collection.md)
- Project Template: [project_template.md](mdc:prds/project_template.md)
- Sub-PRD Template: [sub_prd_template.md](mdc:prds/sub_prd_template.md)
- Standard Template: [template.md](mdc:prds/template.md)
## Checklist Before Submission
- [ ] All template sections completed
- [ ] Technical design is detailed and complete
- [ ] File changes are documented
- [ ] Implementation phases are clear (can be as many as you need)
- [ ] Testing strategy is defined
- [ ] Security considerations addressed
- [ ] Dependencies and Files listed
- [ ] File References included
- [ ] For project PRDs: all sub-PRDs are referenced with status
- [ ] For sub-PRDs: parent project PRD is referenced

---
description: This rule is helpful for understanding how to build our REST functions. Structure, common patterns, where to look for types, etc.
globs: src/routes/rest/**/*.rs
alwaysApply: false
---

File diff suppressed because it is too large

The project's detailed documentation is in the `/documentation` directory:
- `tools.mdc` - Tools documentation
- `websockets.mdc` - WebSocket patterns
While these files contain best practices for writing tests, REST patterns, etc., **each subdirectory should have its own README.md or CLAUDE.md** that should be referenced first when working in that specific area. These subdirectory-specific guides often contain implementation details and patterns specific to that component.
## Repository Structure
- `src/` - Main server code
  - `routes/` - API endpoints (REST, WebSocket)


---
description: Helpful when making migrations with diesel.rs
globs:
alwaysApply: false
---
# Database Migrations Guide

This document provides a comprehensive guide on how to create and manage database migrations in our project.
Database migrations are a way to evolve your database schema over time. Each migration represents a specific change to the database schema, such as creating a table, adding a column, or modifying an enum type. Migrations are version-controlled and can be applied or reverted as needed.

In our project, we use [Diesel](mdc:https:/diesel.rs) for handling database migrations. Diesel is an ORM and query builder for Rust that helps us manage our database schema changes in a safe and consistent way.
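As a quick orientation before the workflow details, the apply/revert cycle described above can be sketched with the Diesel CLI (assuming it is installed; the migration name is illustrative):

```bash
# Create a new, empty migration (generates up.sql and down.sql)
diesel migration generate add_description_to_organizations

# Apply all pending migrations
diesel migration run

# Roll back the most recent migration (runs its down.sql)
diesel migration revert

# Revert and re-apply the latest migration to verify down.sql and up.sql agree
diesel migration redo
```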
## Migration Workflow