remove old prds

dal 2025-03-20 15:40:42 -06:00
parent 97393aed50
commit 7e20e9bb6a
GPG Key ID: 16F4B0E1E9F61122
2 changed files with 0 additions and 396 deletions


@ -1,135 +0,0 @@
# API: Add Assets to GET Collection Response
## Problem Statement
Currently, the GET Collection endpoint (`get_collection_handler.rs`) doesn't populate the `assets` field in the response, even though the `CollectionState` type already contains this field. This is a critical functionality gap, because clients need to see which assets (metrics and dashboards) a collection contains.
The frontend expects a `BusterCollection` type that includes an `assets` array of `BusterCollectionItemAsset` objects, built from both the metric_files and dashboard_files tables.
## Proposed Solution
Modify the GET Collection handler to populate the `assets` field in the response. We'll fetch assets from the collections_to_assets junction table and join with metric_files and dashboard_files tables to get the complete asset information.
## Technical Design
### Code Changes
1. Update the `get_collection_handler.rs` to:
- Query assets from collections_to_assets junction table
- Join with metric_files and dashboard_files tables based on asset_type
- Join with users table to get creator information
- Format the data into CollectionAsset structure
- Include formatted assets in the CollectionState response
### Database Queries
We'll need to execute two queries to fetch both metric and dashboard assets:
```rust
// For metrics
let metric_assets = collections_to_assets::table
    .inner_join(metric_files::table.on(metric_files::id.eq(collections_to_assets::asset_id)))
    .inner_join(users::table.on(users::id.eq(metric_files::created_by)))
    .filter(collections_to_assets::collection_id.eq(req.id))
    .filter(collections_to_assets::asset_type.eq(AssetType::Metric))
    .filter(collections_to_assets::deleted_at.is_null())
    .filter(metric_files::deleted_at.is_null())
    .select((
        metric_files::id,
        metric_files::name,
        (users::name, users::email),
        metric_files::created_at,
        metric_files::updated_at,
        collections_to_assets::asset_type,
    ))
    .load::<AssetQueryResult>(&mut conn)
    .await?;

// For dashboards
let dashboard_assets = collections_to_assets::table
    .inner_join(dashboard_files::table.on(dashboard_files::id.eq(collections_to_assets::asset_id)))
    .inner_join(users::table.on(users::id.eq(dashboard_files::created_by)))
    .filter(collections_to_assets::collection_id.eq(req.id))
    .filter(collections_to_assets::asset_type.eq(AssetType::Dashboard))
    .filter(collections_to_assets::deleted_at.is_null())
    .filter(dashboard_files::deleted_at.is_null())
    .select((
        dashboard_files::id,
        dashboard_files::name,
        (users::name, users::email),
        dashboard_files::created_at,
        dashboard_files::updated_at,
        collections_to_assets::asset_type,
    ))
    .load::<AssetQueryResult>(&mut conn)
    .await?;

// Combine results
let combined_assets = [metric_assets, dashboard_assets].concat();
```
### Utility Functions
We'll need a helper function to format the database results into the `CollectionAsset` type:
```rust
fn format_assets(assets: Vec<AssetQueryResult>) -> Vec<CollectionAsset> {
    assets
        .into_iter()
        .map(|(id, name, (user_name, email), created_at, updated_at, asset_type)| {
            CollectionAsset {
                id,
                name,
                created_by: AssetUser {
                    name: user_name,
                    email,
                },
                created_at,
                updated_at,
                asset_type,
            }
        })
        .collect()
}
```
## Implementation Plan
### Phase 1: Update `get_collection_handler.rs`
- Define `AssetQueryResult` type for fetching asset data
- Implement the query logic to fetch assets from both tables
- Add the combined assets to the `CollectionState` response
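For the first item, the `AssetQueryResult` tuple has to line up field-for-field with the `.select()` projections shown earlier. A sketch, assuming the existing `AssetType` enum and the `uuid`/`chrono` types already used in the codebase:

```rust
// Matches the .select() tuple in both queries:
// (id, name, (creator name, creator email), created_at, updated_at, asset_type)
type AssetQueryResult = (
    Uuid,
    String,
    (String, String),
    DateTime<Utc>,
    DateTime<Utc>,
    AssetType,
);
```

Using a shared tuple alias lets both queries load into the same type, so `format_assets` can handle metrics and dashboards uniformly.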
### Phase 2: Update Tests
- Update existing tests to include assertions for the assets field
- Add test cases with different combinations of metrics and dashboards
## Testing Strategy
### Unit Tests
- Test the handler with collections that have:
- No assets
- Only metrics
- Only dashboards
- Both metrics and dashboards
- Deleted assets (should not be included)
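The formatting step itself is pure, so the "no assets" and mixed cases can be exercised without a database. A dependency-free sketch using `String` stand-ins for `Uuid` and `DateTime<Utc>` (types simplified for illustration; the real helper uses the full `AssetQueryResult` tuple):

```rust
// String stand-ins so the mapping logic runs without database crates;
// this mirrors the shape of the format_assets helper in the design.
#[derive(Debug, PartialEq)]
struct AssetUser {
    name: String,
    email: String,
}

#[derive(Debug, PartialEq)]
struct CollectionAsset {
    id: String,
    name: String,
    created_by: AssetUser,
    asset_type: String,
}

type Row = (String, String, (String, String), String);

fn format_assets(rows: Vec<Row>) -> Vec<CollectionAsset> {
    rows.into_iter()
        .map(|(id, name, (user_name, email), asset_type)| CollectionAsset {
            id,
            name,
            created_by: AssetUser { name: user_name, email },
            asset_type,
        })
        .collect()
}

fn main() {
    // "No assets" case: an empty result set formats to an empty list.
    assert!(format_assets(Vec::new()).is_empty());

    // Mixed case: one metric row and one dashboard row (sample data).
    let rows = vec![
        ("m1".into(), "Revenue".into(), ("Ada".into(), "ada@example.com".into()), "metric".into()),
        ("d1".into(), "KPIs".into(), ("Ada".into(), "ada@example.com".into()), "dashboard".into()),
    ];
    let assets = format_assets(rows);
    assert_eq!(assets.len(), 2);
    assert_eq!(assets[0].created_by.email, "ada@example.com");
}
```

The integration tests below then only need to cover the query layer, since the pure mapping is already verified in isolation.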
### Integration Tests
- Update integration tests to verify the full API response structure
- Test that assets are correctly formatted and included in the response
## Success Criteria
- GET Collection API returns the assets field in its response
- Assets field contains correctly formatted data from both metric_files and dashboard_files tables
- Frontend can successfully display the assets associated with a collection
## Affected Files
- `libs/handlers/src/collections/get_collection_handler.rs`
- `tests/integration/collections/get_collection_test.rs` (for updated tests)
## Dependencies
- Schema structure for collections_to_assets, metric_files, and dashboard_files tables
- The types for CollectionAsset and AssetUser already exist in types.rs
## Implementation Status
- [x] Update get_collection_handler to fetch assets
- [x] Add appropriate tests
- [ ] Manual verification with frontend


@ -1,261 +0,0 @@
# Metric Dashboard Association PRD
## Problem Statement
Currently, metrics can be added to dashboards through the collections_to_assets junction table, which uses generic asset types. However, this approach doesn't capture the specific relationship between dashboard_files and metric_files. When dashboards are created or updated, we need a direct way to track which metrics they use and to keep that association current.
Key issues:
- No dedicated association table between metric_files and dashboard_files
- Metrics within dashboards are referenced by UUID, but there is no validation that these metrics exist
- When modifying dashboards, there's no automatic updating of metric associations
- When creating dashboards through streaming processors, metric associations aren't captured
### Current Limitations
- Dashboard files reference metrics by ID but don't maintain a formal relationship in the database
- No way to efficiently query which metrics are used in a specific dashboard
- No way to efficiently query which dashboards use a specific metric
- When metrics are deleted, dashboards that use them aren't updated
### Impact
- User Impact: Users may encounter broken dashboards if metrics are deleted or modified
- System Impact: Inefficient queries needed to find metric-dashboard relationships
- Business Impact: Poor data integrity and potential orphaned references
## Requirements
### Functional Requirements
#### Core Functionality
- Requirement 1: Create a dedicated junction table to associate metric_files with dashboard_files
- Details: The table will track which metrics are used in which dashboards
- Acceptance Criteria: Junction table allows querying of dashboard→metric and metric→dashboard relationships
- Dependencies: Existing metric_files and dashboard_files tables
- Requirement 2: Update dashboard creation and modification process to maintain associations
- Details: When dashboards are created or updated, update the junction table with metric associations
- Acceptance Criteria: Junction table is kept in sync with dashboard content
- Dependencies: Dashboard creation and update handlers
- Requirement 3: Support streaming dashboard creation with metric associations
- Details: When dashboards are created through streaming processors, update metric associations
- Acceptance Criteria: Streaming-created dashboards have proper metric associations
- Dependencies: Streaming processors for dashboard creation
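Requirement 1's acceptance criterion (querying in both directions) falls out of the junction table proposed in the Technical Design. A hedged Diesel sketch, assuming the `metric_files_to_dashboard_files` schema module exists once the migration lands:

```rust
// dashboard → metrics: which metric files does this dashboard use?
let metric_ids: Vec<Uuid> = metric_files_to_dashboard_files::table
    .filter(metric_files_to_dashboard_files::dashboard_file_id.eq(dashboard_id))
    .filter(metric_files_to_dashboard_files::deleted_at.is_null())
    .select(metric_files_to_dashboard_files::metric_file_id)
    .load(&mut conn)
    .await?;

// metric → dashboards: which dashboards use this metric file?
let dashboard_ids: Vec<Uuid> = metric_files_to_dashboard_files::table
    .filter(metric_files_to_dashboard_files::metric_file_id.eq(metric_id))
    .filter(metric_files_to_dashboard_files::deleted_at.is_null())
    .select(metric_files_to_dashboard_files::dashboard_file_id)
    .load(&mut conn)
    .await?;
```

Each direction is a single indexed lookup, which is why the design adds indexes on both foreign key columns.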
### Non-Functional Requirements
- Performance Requirements
- Association operations should not significantly impact dashboard save/update performance
- Junction table should have appropriate indexes for fast querying in both directions
- Security Requirements
- Association table should respect existing permissions model
- Maintainability Requirements
- Code changes should be properly tested and documented
## Technical Design
### System Architecture
```mermaid
graph TD
    Dashboard[Dashboard File] --> Junction[Dashboard to Metric Junction Table]
    Metric[Metric File] --> Junction
    UpdateHandler[Update Dashboard Handler] --> Junction
    CreateHandler[Create Dashboard Handler] --> Junction
    StreamProcessor[Dashboard Stream Processor] --> Junction
```
### Core Components
#### Component 1: Junction Table
```sql
CREATE TABLE metric_files_to_dashboard_files (
    metric_file_id UUID NOT NULL REFERENCES metric_files(id),
    dashboard_file_id UUID NOT NULL REFERENCES dashboard_files(id),
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    deleted_at TIMESTAMP WITH TIME ZONE,
    created_by UUID NOT NULL,
    PRIMARY KEY (metric_file_id, dashboard_file_id)
);
```
#### Component 2: Database Model
```rust
#[derive(Queryable, Insertable, Associations, Debug)]
#[diesel(belongs_to(MetricFile, foreign_key = metric_file_id))]
#[diesel(belongs_to(DashboardFile, foreign_key = dashboard_file_id))]
#[diesel(table_name = metric_files_to_dashboard_files)]
pub struct MetricFileToDashboardFile {
    pub metric_file_id: Uuid,
    pub dashboard_file_id: Uuid,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
    pub deleted_at: Option<DateTime<Utc>>,
    pub created_by: Uuid,
}
```
#### Component 3: Dashboard Update Handler
```rust
// Update dashboard handler function to extract metric IDs from dashboard content
// and maintain association table entries
pub async fn update_dashboard_handler(
    dashboard_id: Uuid,
    request: DashboardUpdateRequest,
    user_id: &Uuid,
) -> Result<BusterDashboardResponse> {
    // Existing handler code...

    // Extract metric IDs from dashboard content
    let metric_ids = extract_metric_ids_from_dashboard(&dashboard_content);

    // Update associations
    update_dashboard_metric_associations(
        dashboard_id,
        metric_ids,
        user_id,
        &mut conn,
    ).await?;

    // Rest of handler...
}

// Helper function to update associations
async fn update_dashboard_metric_associations(
    dashboard_id: Uuid,
    metric_ids: Vec<Uuid>,
    user_id: &Uuid,
    conn: &mut AsyncPgConnection,
) -> Result<()> {
    // Implementation details
}
```
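The helper stub above leaves the policy open; one plausible approach is to soft-delete stale rows and upsert the current set. A sketch under those assumptions (diesel-async plus the proposed schema module; the conflict-handling details would need verification against the generated schema):

```rust
async fn update_dashboard_metric_associations(
    dashboard_id: Uuid,
    metric_ids: Vec<Uuid>,
    user_id: &Uuid,
    conn: &mut AsyncPgConnection,
) -> Result<()> {
    use metric_files_to_dashboard_files::dsl as assoc;

    // Soft-delete associations for metrics no longer on the dashboard.
    diesel::update(
        assoc::metric_files_to_dashboard_files
            .filter(assoc::dashboard_file_id.eq(dashboard_id))
            .filter(assoc::metric_file_id.ne_all(&metric_ids))
            .filter(assoc::deleted_at.is_null()),
    )
    .set(assoc::deleted_at.eq(Some(Utc::now())))
    .execute(conn)
    .await?;

    // Insert an association row for each current metric, reviving any
    // previously soft-deleted row on conflict with the composite key.
    let rows: Vec<MetricFileToDashboardFile> = metric_ids
        .into_iter()
        .map(|metric_file_id| MetricFileToDashboardFile {
            metric_file_id,
            dashboard_file_id: dashboard_id,
            created_at: Utc::now(),
            updated_at: Utc::now(),
            deleted_at: None,
            created_by: *user_id,
        })
        .collect();

    diesel::insert_into(assoc::metric_files_to_dashboard_files)
        .values(&rows)
        .on_conflict((assoc::metric_file_id, assoc::dashboard_file_id))
        .do_update()
        .set((
            assoc::deleted_at.eq(None::<DateTime<Utc>>),
            assoc::updated_at.eq(Utc::now()),
        ))
        .execute(conn)
        .await?;

    Ok(())
}
```

Soft-deleting rather than hard-deleting keeps the table consistent with the `deleted_at` convention used by the other tables in this PRD.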
#### Component 4: Create Dashboard Processor
```rust
// Add to CreateDashboardsProcessor to handle metric associations
impl Processor for CreateDashboardsProcessor {
    // Existing implementation...

    // Add logic to extract metric IDs and create associations
    // when processing dashboard creation
}
```
### Database Changes
```sql
-- Create the junction table
CREATE TABLE metric_files_to_dashboard_files (
    metric_file_id UUID NOT NULL REFERENCES metric_files(id),
    dashboard_file_id UUID NOT NULL REFERENCES dashboard_files(id),
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    deleted_at TIMESTAMP WITH TIME ZONE,
    created_by UUID NOT NULL,
    PRIMARY KEY (metric_file_id, dashboard_file_id)
);

-- Add indexes for efficient querying
CREATE INDEX metric_files_to_dashboard_files_metric_id_idx ON metric_files_to_dashboard_files(metric_file_id);
CREATE INDEX metric_files_to_dashboard_files_dashboard_id_idx ON metric_files_to_dashboard_files(dashboard_file_id);
CREATE INDEX metric_files_to_dashboard_files_deleted_at_idx ON metric_files_to_dashboard_files(deleted_at);
```
### File Changes
#### New Files
- `/Users/dallin/dashboard_updates_and_metric_parsing/api/migrations/2025-03-20-XXXXXX_metric_files_to_dashboard_files/up.sql`
- Purpose: Create the junction table between metric_files and dashboard_files
- Key components: Table definition, indexes
- Dependencies: metric_files and dashboard_files tables
- `/Users/dallin/dashboard_updates_and_metric_parsing/api/migrations/2025-03-20-XXXXXX_metric_files_to_dashboard_files/down.sql`
- Purpose: Revert the migration by dropping the table
- Key components: DROP TABLE statement
- Dependencies: None
#### Modified Files
- `/Users/dallin/dashboard_updates_and_metric_parsing/api/libs/database/src/models.rs`
- Changes: Add MetricFileToDashboardFile struct
- Impact: Allows ORM access to the junction table
- Dependencies: Updated schema.rs
- `/Users/dallin/dashboard_updates_and_metric_parsing/api/libs/handlers/src/dashboards/update_dashboard_handler.rs`
- Changes: Add logic to extract metric IDs from dashboard content and update associations
- Impact: Keeps association table in sync with dashboard content
- Dependencies: New model, dashboard parsing logic
- `/Users/dallin/dashboard_updates_and_metric_parsing/api/libs/streaming/src/processors/create_dashboards_processor.rs`
- Changes: Add logic to extract metric IDs from dashboard content and create associations
- Impact: Ensures streaming-created dashboards have proper metric associations
- Dependencies: New model, dashboard parsing logic
## Implementation Plan
### Phase 1: Database Schema and Models
- [x] Create migration for metric_files_to_dashboard_files junction table
- [ ] Run migration to create the table
- [ ] Add MetricFileToDashboardFile model to models.rs
- [ ] Create helper functions to extract metric IDs from dashboard content
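The extraction helper's core loop can be illustrated without database or serde dependencies. The sketch below scans serialized dashboard content for UUID-shaped tokens and is purely a stand-in: the real helper would deserialize the dashboard struct and read its typed metric ID fields, and the function name here just follows the one used in this PRD.

```rust
/// Dependency-free illustration of the extraction step: collect
/// UUID-shaped substrings from serialized dashboard content, deduped,
/// in order of first appearance. The production helper would instead
/// deserialize the dashboard config and read typed metric ID fields.
fn extract_metric_ids_from_dashboard(content: &str) -> Vec<String> {
    // A canonical UUID is 36 chars: hex digits with hyphens at 8/13/18/23.
    fn is_uuid(s: &str) -> bool {
        s.len() == 36
            && s.bytes().enumerate().all(|(i, b)| match i {
                8 | 13 | 18 | 23 => b == b'-',
                _ => b.is_ascii_hexdigit(),
            })
    }

    let mut ids: Vec<String> = Vec::new();
    let mut i = 0;
    while i + 36 <= content.len() {
        // `get` returns None on non-UTF-8 boundaries instead of panicking.
        if let Some(candidate) = content.get(i..i + 36) {
            if is_uuid(candidate) {
                if !ids.iter().any(|id| id == candidate) {
                    ids.push(candidate.to_string());
                }
                i += 36;
                continue;
            }
        }
        i += 1;
    }
    ids
}

fn main() {
    let content = r#"{"rows":[{"id":"123e4567-e89b-12d3-a456-426614174000"},
                              {"id":"123e4567-e89b-12d3-a456-426614174001"}]}"#;
    let ids = extract_metric_ids_from_dashboard(content);
    assert_eq!(ids.len(), 2);
    println!("{ids:?}");
}
```

Deduplication matters here because the same metric can appear in multiple dashboard rows but should produce only one association row.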
### Phase 2: Update Dashboard Handler
- [ ] Add logic to update_dashboard_handler.rs to maintain associations
- [ ] Test update_dashboard_handler with various dashboard configurations
- [ ] Verify associations are properly maintained
### Phase 3: Streaming Dashboard Creation
- [ ] Add logic to create_dashboards_processor.rs to maintain associations
- [ ] Test streaming dashboard creation with metric associations
- [ ] Verify associations are properly maintained
## Testing Strategy
### Unit Tests
```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Test extracting metric IDs from dashboard content
    #[test]
    fn test_extract_metric_ids() {
        let dashboard_content = todo!("build sample dashboard content");
        let metric_ids = extract_metric_ids_from_dashboard(&dashboard_content);
        assert_eq!(metric_ids, vec![/* expected IDs */]);
    }

    // Test updating associations
    #[tokio::test]
    async fn test_update_associations() {
        // Test setup: create dashboard_id, metric_ids, user_id, and a conn
        let result = update_dashboard_metric_associations(
            dashboard_id,
            metric_ids,
            user_id,
            &mut conn,
        ).await;
        assert!(result.is_ok());
        // Verify associations in database
    }
}
```
### Integration Tests
#### Scenario 1: Update Dashboard with New Metrics
- Setup: Create test dashboard and metrics
- Steps:
1. Update dashboard to include metrics
2. Verify association table contains correct entries
- Expected Results: Association table should contain entries for all metrics in the dashboard
- Validation Criteria: Count and content of association table entries matches dashboard content
#### Scenario 2: Stream Create Dashboard with Metrics
- Setup: Prepare dashboard content with metrics
- Steps:
1. Process dashboard creation through streaming processor
2. Verify association table contains correct entries
- Expected Results: Association table should contain entries for all metrics in the dashboard
- Validation Criteria: Count and content of association table entries matches dashboard content
## References
- [Diesel ORM documentation](https://diesel.rs/)
- [Association tables best practices](https://docs.diesel.rs/diesel/associations/index.html)