mirror of https://github.com/buster-so/buster.git
updated cursor and claude
parent 7d4aff5802
commit b8fd636740
---
description:
globs:
alwaysApply: false
---
# Database Migrations Guide

This document provides a comprehensive guide on how to create and manage database migrations in our project.

## Overview

Database migrations are a way to evolve your database schema over time. Each migration represents a specific change to the database schema, such as creating a table, adding a column, or modifying an enum type. Migrations are version-controlled and can be applied or reverted as needed.

In our project, we use [Diesel](https://diesel.rs) for handling database migrations. Diesel is an ORM and query builder for Rust that helps us manage our database schema changes in a safe and consistent way.
## Migration Workflow

### 1. Creating a New Migration

To create a new migration, use the Diesel CLI:

```bash
diesel migration generate name_of_migration
```

This command creates a new directory in the `migrations` folder with a timestamp prefix (e.g., `2025-03-06-232923_name_of_migration`). Inside this directory, two files are created:

- `up.sql`: Contains SQL statements to apply the migration
- `down.sql`: Contains SQL statements to revert the migration
### 2. Writing Migration SQL

#### Up Migration

The `up.sql` file should contain all the SQL statements needed to apply your changes to the database. For example:

```sql
-- Create a new table
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR NOT NULL,
    email VARCHAR NOT NULL UNIQUE,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);

-- Add a column to an existing table
ALTER TABLE organizations
ADD COLUMN description TEXT;

-- Create an enum type
CREATE TYPE user_role_enum AS ENUM ('admin', 'member', 'guest');
```
#### Down Migration

The `down.sql` file should contain SQL statements that revert the changes made in `up.sql`. It should be written in the reverse order of the operations in `up.sql`:

```sql
-- Remove the enum type
DROP TYPE user_role_enum;

-- Remove the column
ALTER TABLE organizations
DROP COLUMN description;

-- Drop the table
DROP TABLE users;
```
### 3. Running Migrations

To apply all pending migrations:

```bash
diesel migration run
```

This command:
1. Executes the SQL in the `up.sql` files of all pending migrations
2. Updates the `__diesel_schema_migrations` table to track which migrations have been applied
3. Regenerates the `schema.rs` file to reflect the current database schema
### 4. Reverting Migrations

To revert the most recent migration:

```bash
diesel migration revert
```

This executes the SQL in the `down.sql` file of the most recently applied migration.

### 5. Checking Migration Status

To see which migrations have been applied and which are pending:

```bash
diesel migration list
```
## Working with Enums

We prefer using enums when possible for fields with a fixed set of values. Here's how to work with enums in our project:

### 1. Creating an Enum in SQL Migration

```sql
-- In up.sql
CREATE TYPE asset_type_enum AS ENUM ('dashboard', 'dataset', 'metric');

-- In down.sql
DROP TYPE asset_type_enum;
```
### 2. Adding Values to an Existing Enum

```sql
-- In up.sql
ALTER TYPE asset_type_enum ADD VALUE IF NOT EXISTS 'chat';

-- In down.sql
DELETE FROM pg_enum
WHERE enumlabel = 'chat'
AND enumtypid = (SELECT oid FROM pg_type WHERE typname = 'asset_type_enum');
```

Note that PostgreSQL has no `ALTER TYPE ... DROP VALUE`, so the `down.sql` deletes the label from `pg_enum` directly. This is only safe while no row still stores the removed value.
### 3. Implementing the Enum in Rust

After running the migration, you need to update the `enums.rs` file to reflect the changes:

```rust
use std::io::Write;

use diesel::deserialize::{self, FromSql};
use diesel::pg::{Pg, PgValue};
use diesel::serialize::{self, IsNull, Output, ToSql};
use serde::{Deserialize, Serialize};

// `sql_types` refers to the module where Diesel generated the custom
// Postgres types (adjust the path to match this crate's layout).
use crate::schema::sql_types;

#[derive(
    Serialize,
    Deserialize,
    Debug,
    Clone,
    Copy,
    PartialEq,
    Eq,
    diesel::AsExpression,
    diesel::FromSqlRow,
)]
#[diesel(sql_type = sql_types::AssetTypeEnum)]
#[serde(rename_all = "camelCase")]
pub enum AssetType {
    Dashboard,
    Dataset,
    Metric,
    Chat,
}

impl ToSql<sql_types::AssetTypeEnum, Pg> for AssetType {
    fn to_sql<'b>(&'b self, out: &mut Output<'b, '_, Pg>) -> serialize::Result {
        match *self {
            AssetType::Dashboard => out.write_all(b"dashboard")?,
            AssetType::Dataset => out.write_all(b"dataset")?,
            AssetType::Metric => out.write_all(b"metric")?,
            AssetType::Chat => out.write_all(b"chat")?,
        }
        Ok(IsNull::No)
    }
}

impl FromSql<sql_types::AssetTypeEnum, Pg> for AssetType {
    fn from_sql(bytes: PgValue<'_>) -> deserialize::Result<Self> {
        match bytes.as_bytes() {
            b"dashboard" => Ok(AssetType::Dashboard),
            b"dataset" => Ok(AssetType::Dataset),
            b"metric" => Ok(AssetType::Metric),
            b"chat" => Ok(AssetType::Chat),
            _ => Err("Unrecognized enum variant".into()),
        }
    }
}
```
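Independently of Diesel, the heart of these impls is a bijection between the Rust variants and the Postgres labels. A std-only sketch of that mapping, which can be unit-tested without a database (the helper names `as_label`/`from_label` are illustrative, not part of the codebase):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum AssetType { Dashboard, Dataset, Metric, Chat }

impl AssetType {
    // The label written to Postgres by ToSql.
    fn as_label(self) -> &'static str {
        match self {
            AssetType::Dashboard => "dashboard",
            AssetType::Dataset => "dataset",
            AssetType::Metric => "metric",
            AssetType::Chat => "chat",
        }
    }

    // The inverse mapping used by FromSql.
    fn from_label(s: &str) -> Result<Self, String> {
        match s {
            "dashboard" => Ok(AssetType::Dashboard),
            "dataset" => Ok(AssetType::Dataset),
            "metric" => Ok(AssetType::Metric),
            "chat" => Ok(AssetType::Chat),
            other => Err(format!("Unrecognized enum variant: {other}")),
        }
    }
}

fn main() {
    // Every variant must round-trip through its Postgres label.
    for v in [AssetType::Dashboard, AssetType::Dataset, AssetType::Metric, AssetType::Chat] {
        assert_eq!(AssetType::from_label(v.as_label()), Ok(v));
    }
}
```

Keeping the mapping exhaustive (no `_` arm in `as_label`) means the compiler flags any new variant that is missing a label.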
## Working with JSON Types

When working with JSON data in the database, we map it to Rust structs. Here's how:

### 1. Adding a JSON Column in Migration

```sql
-- In up.sql
ALTER TABLE metric_files
ADD COLUMN version_history JSONB NOT NULL DEFAULT '{}'::jsonb;

-- In down.sql
ALTER TABLE metric_files
DROP COLUMN version_history;
```
### 2. Creating a Type for the JSON Data

Create a new file in the `libs/database/src/types` directory or update an existing one:

```rust
// In libs/database/src/types/version_history.rs
use diesel::{
    deserialize::FromSql,
    pg::Pg,
    serialize::{Output, ToSql},
    sql_types::Jsonb,
    AsExpression, FromSqlRow,
};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, FromSqlRow, AsExpression, Clone)]
#[diesel(sql_type = Jsonb)]
pub struct VersionHistory {
    pub version: String,
    pub updated_at: String,
    pub content: serde_json::Value,
}

impl FromSql<Jsonb, Pg> for VersionHistory {
    fn from_sql(bytes: diesel::pg::PgValue) -> diesel::deserialize::Result<Self> {
        // Delegate to the serde_json::Value impl, then deserialize the struct
        let json = <serde_json::Value as FromSql<Jsonb, Pg>>::from_sql(bytes)?;
        Ok(serde_json::from_value(json)?)
    }
}

impl ToSql<Jsonb, Pg> for VersionHistory {
    fn to_sql<'b>(&'b self, out: &mut Output<'b, '_, Pg>) -> diesel::serialize::Result {
        let json = serde_json::to_value(self)?;
        // `reborrow` lets us serialize the temporary `json` value
        <serde_json::Value as ToSql<Jsonb, Pg>>::to_sql(&json, &mut out.reborrow())
    }
}
```
### 3. Updating the `mod.rs` File

Make sure to export your new type in the `libs/database/src/types/mod.rs` file:

```rust
pub mod version_history;
pub use version_history::*;
```
### 4. Using the Type in Models

Update the corresponding model in `models.rs` to use your new type:

```rust
#[derive(Queryable, Insertable, Identifiable, Debug, Clone, Serialize)]
#[diesel(table_name = metric_files)]
pub struct MetricFile {
    pub id: Uuid,
    pub name: String,
    pub content: String,
    pub organization_id: Uuid,
    pub created_by: Uuid,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
    pub deleted_at: Option<DateTime<Utc>>,
    pub version_history: VersionHistory,
}
```
## Best Practices

1. **Keep migrations small and focused**: Each migration should make one logical change to the schema.

2. **Test migrations before applying to production**: Always test migrations in a development or staging environment first.

3. **Always provide a down migration**: Make sure your `down.sql` properly reverts all changes made in `up.sql`.

4. **Use transactions**: Wrap complex migrations in transactions to ensure atomicity.

5. **Be careful with data migrations**: If you need to migrate data (not just schema), consider using separate migrations or Rust code.

6. **Document your migrations**: Add comments to your SQL files explaining what the migration does and why.

7. **Version control your migrations**: Always commit your migrations to version control.
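As a sketch of point 4, reusing statements from the earlier examples: on PostgreSQL, Diesel already runs each migration inside a transaction by default, so an explicit `BEGIN`/`COMMIT` mainly documents intent, or matters when that default has been disabled.

```sql
-- up.sql: both changes apply atomically, or neither does
BEGIN;

ALTER TABLE organizations
ADD COLUMN description TEXT;

CREATE TYPE user_role_enum AS ENUM ('admin', 'member', 'guest');

COMMIT;
```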
## Common Migration Patterns

### Adding a New Table

```sql
-- up.sql
CREATE TABLE new_table (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);

-- down.sql
DROP TABLE new_table;
```

### Adding a Column

```sql
-- up.sql
ALTER TABLE existing_table
ADD COLUMN new_column VARCHAR;

-- down.sql
ALTER TABLE existing_table
DROP COLUMN new_column;
```

### Creating a Join Table

```sql
-- up.sql
CREATE TABLE table_a_to_table_b (
    table_a_id UUID NOT NULL REFERENCES table_a(id),
    table_b_id UUID NOT NULL REFERENCES table_b(id),
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    PRIMARY KEY (table_a_id, table_b_id)
);

-- down.sql
DROP TABLE table_a_to_table_b;
```

### Working with Constraints

```sql
-- up.sql
ALTER TABLE users
ADD CONSTRAINT unique_email UNIQUE (email);

-- down.sql
ALTER TABLE users
DROP CONSTRAINT unique_email;
```
## Troubleshooting

### Migration Failed to Apply

If a migration fails to apply, Diesel will stop and not apply any further migrations. You'll need to fix the issue and try again.

### Schema Drift

If your `schema.rs` doesn't match the actual database schema, you can regenerate it:

```bash
diesel print-schema > libs/database/src/schema.rs
```

### Fixing a Bad Migration

If you've applied a migration that has errors:

1. Fix the issues in your `up.sql` file
2. Run `diesel migration revert` to undo the migration
3. Run `diesel migration run` to apply the fixed migration
## Conclusion

Following these guidelines will help maintain a clean and consistent database schema evolution process. Remember that migrations are part of your codebase and should be treated with the same care as any other code.
---
description: These are global rules and recommendations for the rust server.
globs:
alwaysApply: true
---
# Buster API Repository Navigation Guide

## Row Limit Implementation Notes
All database query functions in the query_engine library respect a 5000 row limit by default. The limit can be overridden by passing an explicit limit value. This is implemented in the libs/query_engine directory.
## Project Overview
This is a Rust web server project built with Axum, focusing on high performance, safety, and maintainability.

## Documentation
The project's detailed documentation is in the `/documentation` directory:
- `handlers.mdc` - Handler patterns
- `libs.mdc` - Library construction guidelines
- `rest.mdc` - REST API formatting
- `testing.mdc` - Testing standards
- `tools.mdc` - Tools documentation
- `websockets.mdc` - WebSocket patterns
## Project Structure
- `src/`
  - `routes/`
    - `rest/` - REST API endpoints using Axum
      - `routes/` - Individual route modules
    - `ws/` - WebSocket handlers and related functionality
  - `database/` - Database models, schema, and connection management
  - `main.rs` - Application entry point and server setup

While these files contain best practices for writing tests, REST patterns, etc., **each subdirectory should have its own README.md or CLAUDE.md** that should be referenced first when working in that specific area. These subdirectory-specific guides often contain implementation details and patterns specific to that component.

## Implementation
When working with PRDs, you should always mark your progress off in them as you build.
## Repository Structure
- `src/` - Main server code
  - `routes/` - API endpoints (REST, WebSocket)
  - `utils/` - Shared utilities
  - `types/` - Common type definitions
- `libs/` - Shared libraries
  - Each lib has its own Cargo.toml and docs
- `migrations/` - Database migrations
- `tests/` - Integration tests
- `documentation/` - Detailed docs
- `prds/` - Product requirements
## Database Connectivity
- The primary database connection is managed through `get_pg_pool()`, which returns a lazy static `PgPool`
- Always use this pool for database connections to ensure proper connection management
- Example usage: see the Common Database Pattern section below
## Build Commands
- `make dev` - Start development
- `make stop` - Stop development
- `cargo test -- --test-threads=1 --nocapture` - Run tests
- `cargo clippy` - Run linter
- `cargo build` - Build project
## Core Guidelines
- Use `anyhow::Result` for error handling
- Group imports (std lib, external, internal)
- Put shared types in `types/`, route-specific types in route files
- Use snake_case for variables/functions, CamelCase for types
- Never log secrets or sensitive data
- All dependencies inherit from the workspace using `{ workspace = true }`
- Use the database connection pool from `get_pg_pool().get().await?`
- Write tests with `tokio::test` for async tests
## Common Database Pattern
```rust
let pool = get_pg_pool();
let mut conn = pool.get().await?;

diesel::update(table)
    .filter(conditions)
    .set(values)
    .execute(&mut conn)
    .await?
```
## Code Style and Best Practices

### References and Memory Management
- Prefer references over owned values when possible
- Avoid unnecessary `.clone()` calls
- Use `&str` instead of `String` for function parameters when the string doesn't need to be owned

### Importing packages/crates
- Keep paths in the actual logic as short as possible by importing the crate/package at the top of the file.
### Database Operations
- Use Diesel for database migrations and query building
- Migrations are stored in the `migrations/` directory

### Concurrency Guidelines
- Prioritize concurrent operations, especially for:
  - API requests
  - File operations
- Optimize database connection usage:
  - Batch operations where possible
  - Build queries/parameters before executing database operations
  - Use bulk inserts/updates instead of individual operations
## Common Concurrency Pattern
```rust
// Preferred: bulk operation
let items: Vec<_> = prepare_items();
diesel::insert_into(table)
    .values(&items)
    .execute(conn)?;

// Avoid: individual operations in a loop
for item in items {
    diesel::insert_into(table)
        .values(&item)
        .execute(conn)?;
}
```
### Error Handling
- Never use `.unwrap()` or `.expect()` in production code
- Always handle errors appropriately using:
  - The `?` operator for error propagation
  - `match` statements when specific error cases need different handling
- Use `anyhow` for error handling:
  - Prefer `anyhow::Result<T>` as the return type for functions that can fail
  - Use `anyhow::Error` for error types
  - Use the `anyhow!` macro for creating custom errors

```rust
use anyhow::{Result, anyhow};

// Example of proper error handling
pub async fn process_data(input: &str) -> Result<Data> {
    // Use ? for error propagation
    let parsed = parse_input(input)?;

    // Use match when specific error cases need different handling
    match validate_data(&parsed) {
        Ok(valid_data) => Ok(valid_data),
        Err(e) => Err(anyhow!("Data validation failed: {}", e)),
    }
}

// Avoid this:
// let data = parse_input(input).unwrap(); // ❌ Never use unwrap
```
### API Design
- REST endpoints should be in `routes/rest/routes/`
- WebSocket handlers should be in `routes/ws/`
- Use proper HTTP status codes
- Implement proper validation for incoming requests

### Testing
- Write unit tests for critical functionality
- Use integration tests for API endpoints
- Mock external dependencies when appropriate
## Common Patterns

### Database Queries
```rust
use diesel::prelude::*;

// Example of a typical database query
pub async fn get_item(id: i32) -> Result<Item> {
    let pool = get_pg_pool();
    let mut conn = pool.get().await?;

    items::table
        .filter(items::id.eq(id))
        .first(&mut conn)
        .await
        .map_err(Into::into)
}
```
### Concurrent Operations
```rust
use futures::future::try_join_all;

// Example of concurrent processing
let futures: Vec<_> = items
    .into_iter()
    .map(|item| process_item(item))
    .collect();
let results = try_join_all(futures).await?;
```
Remember to always consider:
1. Connection pool limits when designing concurrent operations
2. Transaction boundaries for data consistency
3. Error propagation and cleanup
4. Memory usage and ownership
5. Comments that document your code and make it more readable
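Point 1 can be illustrated even without the async machinery: bound how much work hits a single connection by batching. A std-only sketch (the batch size of 4 is an arbitrary assumption):

```rust
fn main() {
    let items: Vec<u32> = (1..=10).collect();

    // Instead of one database round trip per item, group items into
    // bounded batches; each chunk would become a single bulk insert.
    let mut batches = 0;
    for chunk in items.chunks(4) {
        // Real code would run something like:
        // diesel::insert_into(table).values(chunk).execute(&mut conn).await?;
        batches += 1;
        println!("batch of {} item(s)", chunk.len());
    }

    // 10 items with batch size 4 -> 3 round trips instead of 10.
    assert_eq!(batches, 3);
}
```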
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
description: This is helpul docs for buildng hanlders in the project.l
|
||||
description: This is helpful documentation for building handlers in the project.
|
||||
globs: libs/handlers/**/*.rs
|
||||
alwaysApply: false
|
||||
---
|
||||
|
@ -22,7 +22,6 @@ Handlers are the core business logic components that implement functionality use
|
|||
- Handler functions should follow the same pattern: `[action]_[resource]_handler`
  - Example: `get_chat_handler()`, `delete_message_handler()`
- Type definitions should be clear and descriptive
  - Request types: `[Action][Resource]Request`
  - Response types: `[Action][Resource]Response`

## Handler Implementation Guidelines
### Function Signatures
```rust
pub async fn action_resource_handler(
    // Parameters should be decoupled from request types:
    resource_id: Uuid,     // Individual parameters instead of request objects
    options: Vec<String>,  // Specific data needed for the operation
    user: User,            // For authenticated user context
    // Other contextual parameters as needed
) -> Result<ActionResourceResponse> {
    // Implementation
}
```
### Decoupling from Request Types
- Handlers should NOT take request types as inputs
- Instead, use individual parameters that represent the exact data needed
- This keeps handlers flexible and reusable across different contexts
- The return type can be a specific response type, as this is what the handler produces
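To make the decoupling concrete, a small self-contained sketch (all names here are hypothetical, not from the codebase):

```rust
// Hypothetical request type as a route layer might define it.
struct GetChatRequest {
    chat_id: u32,
    include_archived: bool,
}

// The handler takes the exact data it needs, not the request type,
// so REST, WebSocket, and CLI callers can all reuse it.
fn get_chat_handler(chat_id: u32, include_archived: bool) -> String {
    format!("chat {chat_id} (archived: {include_archived})")
}

fn main() {
    // The route unpacks its request type before calling the handler.
    let req = GetChatRequest { chat_id: 7, include_archived: false };
    let summary = get_chat_handler(req.chat_id, req.include_archived);
    assert_eq!(summary, "chat 7 (archived: false)");
}
```

A CLI command would call `get_chat_handler` with values parsed from arguments, with no `GetChatRequest` involved at all.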
### Error Handling
- Use `anyhow::Result<T>` for return types
- Provide descriptive error messages with context

### Database Operations
- Use the connection pool: `get_pg_pool().get().await?`
- Run concurrent operations when possible
- For related operations, use sequential operations with error handling
- Handle database-specific errors appropriately
- Example:
```rust
let pool = get_pg_pool();
let mut conn = pool.get().await?;

diesel::update(table)
    .filter(conditions)
    .set(values)
    .execute(&mut conn)
    .await?
```
Example with related operations:
```rust
let pool = get_pg_pool();
let mut conn = pool.get().await?;

// First operation
diesel::insert_into(table1)
    .values(&values1)
    .execute(&mut conn)
    .await?;

// Second related operation
diesel::update(table2)
    .filter(conditions)
    .set(values2)
    .execute(&mut conn)
    .await?;
```

### Concurrency
- Use `tokio::spawn` for concurrent operations
- Use `futures::try_join_all` for parallel processing
- Example:
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct ResourceResponse {
    pub id: Uuid,
    pub name: String,
    #[serde(default)]
    pub options: Vec<String>,
}
```
```rust
// In REST route
pub async fn rest_endpoint(
    Json(payload): Json<RestRequest>,
    user: User,
) -> Result<Json<HandlerResponse>, AppError> {
    // Extract specific parameters from the request
    let result = handler::action_resource_handler(
        payload.id,
        payload.options,
        user,
    ).await?;
    Ok(Json(result))
}

// In WebSocket handler
async fn ws_message_handler(message: WsMessage, user: User) -> Result<WsResponse> {
    let payload: WsRequest = serde_json::from_str(&message.payload)?;
    // Extract specific parameters from the request
    let result = handler::action_resource_handler(
        payload.id,
        payload.options,
        user,
    ).await?;
    Ok(WsResponse::new(result))
}
```
## CLI Integration
- Handler types should be reusable in CLI commands
- CLI commands should extract specific parameters from arguments
- CLI commands should use the same handlers as the API when possible
- Example:
```rust
// In CLI command
pub fn cli_command(args: &ArgMatches) -> Result<()> {
    // Extract parameters from args
    let id = Uuid::parse_str(args.value_of("id").unwrap())?;
    let options = args.values_of("options")
        .map(|vals| vals.map(String::from).collect())
        .unwrap_or_default();

    let result = tokio::runtime::Runtime::new()?.block_on(async {
        handler::action_resource_handler(id, options, mock_user()).await
    })?;

    println!("{}", serde_json::to_string_pretty(&result)?);
    Ok(())
}
```
```rust
#[tokio::test]
async fn test_action_resource_handler() {
    // Setup test data
    let id = Uuid::new_v4();
    let options = vec!["option1".to_string(), "option2".to_string()];
    let user = mock_user();

    // Call handler
    let result = action_resource_handler(id, options, user).await;

    // Assert expectations
    assert!(result.is_ok());
}
```
---
description: This is helpful for building libs for our web server to interact with.
globs: libs/**/*.{rs,toml}
alwaysApply: false
---
```
libs/
│   ├── Cargo.toml    # Library-specific manifest
│   ├── src/
│   │   ├── lib.rs    # Library root
│   │   ├── types.rs  # Data structures and types
│   │   ├── utils/    # Utility functions
│   │   └── errors.rs # Custom error types
│   └── tests/        # Integration tests
```
---
description: This is helpful for building and designing PRDs for our application and how to write them.
globs: prds/**/*.md
alwaysApply: false
---
All PRDs should be stored in the `/prds` directory with the following structure:

```
/prds
├── template.md                    # The master template for all PRDs
├── active/                        # Active/In-progress PRDs
│   ├── project_feature_name.md    # Project-level PRD
│   ├── api_feature_component1.md  # Sub-PRD for component 1
│   └── api_feature_component2.md  # Sub-PRD for component 2
├── completed/                     # Completed PRDs that have been shipped
│   ├── project_completed_feature.md
│   └── api_completed_component.md
└── archived/                      # Archived/Deprecated PRDs
```
### Naming Convention
- Use snake_case for file names
- Include a prefix for the type of change:
  - `project_` for project-level PRDs that contain multiple sub-PRDs
  - `feature_` for new features
  - `enhancement_` for improvements
  - `fix_` for bug fixes
  - `refactor_` for code refactoring
  - `api_` for API changes
## Project PRDs and Sub-PRDs

### Project PRD Structure
Project PRDs serve as the main document for large features that require multiple components or endpoints. They should:

1. Provide a high-level overview of the entire feature
2. Break down the implementation into logical components
3. Reference individual sub-PRDs for each component
4. Track the status of each sub-PRD
5. Define dependencies between sub-PRDs

Example project PRD sections:
```markdown
## Implementation Plan

The implementation will be broken down into six separate PRDs, each focusing on a specific endpoint:

1. [Add Dashboard to Collections REST Endpoint](mdc:api_add_dashboards_to_collection.md)
2. [Remove Dashboard from Collections REST Endpoint](mdc:api_remove_dashboards_from_collection.md)
3. [Add Metric to Collections REST Endpoint](mdc:api_add_metrics_to_collection.md)
4. [Remove Metric from Collections REST Endpoint](mdc:api_remove_metrics_from_collection.md)
5. [Add Assets to Collection REST Endpoint](mdc:api_add_assets_to_collection.md)
6. [Remove Assets from Collection REST Endpoint](mdc:api_remove_assets_from_collection.md)
```
### Sub-PRD Structure
Sub-PRDs focus on specific components of the larger project. They should:

1. Reference the parent project PRD
2. Focus on detailed implementation of a specific component
3. Include all technical details required for implementation
4. Be independently implementable (when possible)
5. Follow the standard PRD template
### Enabling Concurrent Development

The project PRD and sub-PRD structure is designed to enable efficient concurrent development by:

1. **Clear Component Boundaries**: Each sub-PRD should have well-defined boundaries that minimize overlap with other components.

2. **Explicit Dependencies**: The project PRD should clearly state which sub-PRDs depend on others, allowing teams to plan their work accordingly.

3. **Interface Definitions**: Each sub-PRD should define clear interfaces for how other components interact with it, reducing the risk of integration issues.

4. **Conflict Identification**: The project PRD should identify potential areas of conflict between concurrently developed components and provide strategies to mitigate them.

5. **Integration Strategy**: The project PRD should define how and when components will be integrated, including any feature flag strategies to allow incomplete features to be merged without affecting production.
### Example Workflow

1. **Project Planning**:
   - Create the project PRD with a clear breakdown of components
   - Define dependencies and development order
   - Identify which components can be developed concurrently

2. **Development Kickoff**:
   - Begin work on foundation components that others depend on
   - Once the foundation is complete, start concurrent development of independent components
   - Regularly update the project PRD with status changes

3. **Integration**:
   - Follow the integration strategy defined in the project PRD
   - Address any conflicts that arise during integration
   - Update the project PRD with lessons learned

4. **Completion**:
   - Move completed PRDs to the `/prds/completed` directory
   - Update the project PRD to reflect completion
   - Document any deviations from the original plan
## Using the Template

### Getting Started
1. For a new project with multiple components:
   - Create a project-level PRD using [project_template.md](mdc:prds/project_template.md)
   - Place it in the `/prds/active` directory with prefix `project_`
   - Create sub-PRDs for each component using [sub_prd_template.md](mdc:prds/sub_prd_template.md) with appropriate prefixes

2. For a standalone feature:
   - Copy [template.md](mdc:prds/template.md) to create a new PRD
   - Place it in the `/prds/active` directory
   - Fill out each section following the template's guidelines
|
||||
|
||||
### Key Sections to Focus On

The template [template.md](mdc:prds/template.md) provides comprehensive sections. Pay special attention to:
@ -59,6 +137,7 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
   - Include clear success criteria
   - List dependencies between phases
   - Provide testing strategy for each phase
   - For project PRDs, reference all sub-PRDs with their status

4. **Testing Strategy**
   - Unit test requirements
@ -66,6 +145,26 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections

## Best Practices

### Project PRD Best Practices

1. Keep the project PRD focused on high-level architecture and component relationships
2. Clearly define the scope of each sub-PRD
3. Maintain a status indicator for each sub-PRD (✅ Complete, ⏳ In Progress, 🔜 Upcoming)
4. Update the project PRD when sub-PRDs are completed
5. Include a visual representation of component relationships when possible
6. Define clear interfaces between components
7. **Explicitly define the order in which sub-PRDs should be implemented**
8. **Identify which sub-PRDs can be developed concurrently without conflicts**
9. **Document dependencies between sub-PRDs to prevent blocking issues**
10. **Provide strategies for avoiding conflicts during concurrent development**
11. **Establish clear integration points for components developed in parallel**
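One lightweight way to provide the visual representation mentioned in item 5 is a Mermaid diagram in the project PRD. The component names below are illustrative, loosely based on the example PRDs referenced in this document; arrows point from a foundation component to the components that build on it:

```mermaid
graph TD
    collections[Collections REST Endpoints] --> add_metrics[Add Metrics to Collection]
    collections --> remove_metrics[Remove Metrics from Collection]
```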
### Sub-PRD Best Practices

1. Always reference the parent project PRD
2. Focus on detailed implementation of a specific component
3. Include all technical details required for implementation
4. Ensure consistency with other sub-PRDs in the same project
5. Follow the standard PRD template structure
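A sub-PRD that follows practice 1 might open with an explicit back-reference to its parent. The header below reuses the example PRD filenames from this guide; the status line is illustrative:

```markdown
# Add Metrics to Collection

**Parent PRD:** [project_collections_rest_endpoints.md](mdc:prds/active/project_collections_rest_endpoints.md)
**Status:** ⏳ In Progress
```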
### Documentation

1. Use clear, concise language
2. Include code examples where relevant
@ -80,10 +179,10 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
   - Deprecated PRDs → `/prds/archived`

2. Update status section regularly:
   - Completed items
   - In Progress items
   - Upcoming items
   - Known Issues
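A regularly updated status section might look like this; the items listed are hypothetical:

```markdown
## Status

- Completed: database schema and query layer
- In Progress: REST endpoint handlers
- Upcoming: WebSocket notifications
- Known Issues: pagination not yet implemented
```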
### Review Process

1. Technical review
@ -106,17 +205,26 @@ The template [template.md](mdc:prds/template.md) provides comprehensive sections
5. No rollback plan
6. Missing security considerations
7. Undefined monitoring metrics
8. Inconsistencies between project PRD and sub-PRDs
9. Overlapping responsibilities between sub-PRDs
10. Missing dependencies between sub-PRDs
## Example PRDs

Reference these example PRDs for guidance:

- Project PRD: [Collections REST Endpoints](mdc:prds/active/project_collections_rest_endpoints.md)
- Sub-PRD: [Add Metrics to Collection](mdc:prds/active/api_add_metrics_to_collection.md)
- Project Template: [project_template.md](mdc:prds/project_template.md)
- Sub-PRD Template: [sub_prd_template.md](mdc:prds/sub_prd_template.md)
- Standard Template: [template.md](mdc:prds/template.md)
## Checklist Before Submission

- [ ] All template sections completed
- [ ] Technical design is detailed and complete
- [ ] File changes are documented
- [ ] Implementation phases are clear (can be as many as you need)
- [ ] Testing strategy is defined
- [ ] Security considerations addressed
- [ ] Dependencies and files listed
- [ ] File references included
- [ ] For project PRDs: all sub-PRDs are referenced with status
- [ ] For sub-PRDs: parent project PRD is referenced
@ -1,5 +1,5 @@
---
description: This rule is helpful for understanding how to build our REST functions. Structure, common patterns, where to look for types, etc.
globs: src/routes/rest/**/*.rs
alwaysApply: false
---
File diff suppressed because it is too large
@ -12,6 +12,8 @@ The project's detailed documentation is in the `/documentation` directory:
- `tools.mdc` - Tools documentation
- `websockets.mdc` - WebSocket patterns

While these files contain best practices for writing tests, REST patterns, etc., **each subdirectory should have its own README.md or CLAUDE.md** that should be referenced first when working in that specific area. These subdirectory-specific guides often contain implementation details and patterns specific to that component.

## Repository Structure

- `src/` - Main server code
  - `routes/` - API endpoints (REST, WebSocket)
@ -1,3 +1,8 @@
---
description: Helpful when making migrations with diesel.rs
globs:
alwaysApply: false
---

# Database Migrations Guide

This document provides a comprehensive guide on how to create and manage database migrations in our project.
@ -6,7 +11,7 @@ This document provides a comprehensive guide on how to create and manage databas
Database migrations are a way to evolve your database schema over time. Each migration represents a specific change to the database schema, such as creating a table, adding a column, or modifying an enum type. Migrations are version-controlled and can be applied or reverted as needed.

In our project, we use [Diesel](mdc:https:/diesel.rs) for handling database migrations. Diesel is an ORM and query builder for Rust that helps us manage our database schema changes in a safe and consistent way.
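As a minimal sketch of what a single migration captures, an `up.sql`/`down.sql` pair for adding a column might look like this; the table and column names are hypothetical:

```sql
-- up.sql: apply the schema change
ALTER TABLE users ADD COLUMN nickname VARCHAR;

-- down.sql: revert it, restoring the previous schema
ALTER TABLE users DROP COLUMN nickname;
```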
## Migration Workflow