buster/packages/ai/tests/tools/unit/modify-dashboards-file-tool...

import { describe, expect, test } from 'vitest';
import * as yaml from 'yaml';
import { z } from 'zod';
// Schemas under test (duplicated from the tool file rather than imported, so they can be exercised in isolation)
const dashboardItemSchema = z.object({
id: z.string().uuid('Must be a valid UUID for an existing metric'),
});
const dashboardRowSchema = z
.object({
id: z.number().int().positive('Row ID must be a positive integer'),
items: z
.array(dashboardItemSchema)
.min(1, 'Each row must have at least 1 item')
.max(4, 'Each row can have at most 4 items'),
columnSizes: z
.array(
z
.number()
.int()
.min(3, 'Each column size must be at least 3')
.max(12, 'Each column size cannot exceed 12')
)
.min(1, 'columnSizes array cannot be empty')
.refine((sizes) => sizes.reduce((sum, size) => sum + size, 0) === 12, {
message: 'Column sizes must sum to exactly 12',
}),
})
.refine((row) => row.items.length === row.columnSizes.length, {
message: 'Number of items must match number of column sizes',
});
const dashboardYmlSchema = z.object({
name: z.string().min(1, 'Dashboard name is required'),
description: z.string().min(1, 'Dashboard description is required'),
rows: z
.array(dashboardRowSchema)
.min(1, 'Dashboard must have at least one row')
.refine(
(rows) => {
const ids = rows.map((row) => row.id);
const uniqueIds = new Set(ids);
return ids.length === uniqueIds.size;
},
{
message: 'All row IDs must be unique',
}
),
});
// Parse and validate dashboard YAML content
function parseAndValidateYaml(ymlContent: string): {
success: boolean;
error?: string;
data?: z.infer<typeof dashboardYmlSchema>;
} {
try {
const parsedYml = yaml.parse(ymlContent);
const validationResult = dashboardYmlSchema.safeParse(parsedYml);
if (!validationResult.success) {
return {
success: false,
error: `Invalid YAML structure: ${validationResult.error.errors.map((e) => `${e.path.join('.')}: ${e.message}`).join(', ')}`,
};
}
return { success: true, data: validationResult.data };
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : 'YAML parsing failed',
};
}
}
// Mock metric ID validation function for testing
function validateMetricIds(metricIds: string[]): {
success: boolean;
missingIds?: string[];
error?: string;
} {
// Mock implementation for unit testing
const validUUIDs = [
'f47ac10b-58cc-4372-a567-0e02b2c3d479',
'a47ac10b-58cc-4372-a567-0e02b2c3d480',
'550e8400-e29b-41d4-a716-446655440000',
'6ba7b810-9dad-11d1-80b4-00c04fd430c8',
];
const missingIds = metricIds.filter((id) => !validUUIDs.includes(id));
if (missingIds.length > 0) {
return { success: false, missingIds };
}
return { success: true };
}
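// Approximated input schema for the modify-dashboards tool. This is an assumption for
// illustration only -- the real schema lives in the tool file and is not imported here.
// It is reconstructed from the expectations referenced in the
// "Input Schema Validation for Updates" tests below: a non-empty `files` array whose
// entries carry a UUID `id` and a `yml_content` string.
const modifyFilesInputSchema = z.object({
  files: z
    .array(
      z.object({
        id: z.string().uuid('Must be a valid UUID for an existing dashboard'),
        yml_content: z.string().min(1, 'yml_content cannot be empty'),
      })
    )
    .min(1, 'At least one file must be provided'),
});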
describe('Modify Dashboards File Tool Unit Tests', () => {
describe('Dashboard YAML Schema Validation', () => {
test('should validate correct dashboard YAML for modification', () => {
const validDashboardYaml = `
name: Updated Sales Dashboard
description: An updated comprehensive view of sales metrics and performance
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(validDashboardYaml);
expect(result.success).toBe(true);
expect(result.data?.name).toBe('Updated Sales Dashboard');
expect(result.data?.rows).toHaveLength(1);
expect(result.data?.rows?.[0]?.columnSizes).toEqual([12]);
});
test('should validate multi-row dashboard YAML modifications', () => {
const validMultiRowYaml = `
name: Updated Executive Dashboard
description: Enhanced high-level metrics for executive team
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
    columnSizes:
      - 6
      - 6
  - id: 2
    items:
      - id: 550e8400-e29b-41d4-a716-446655440000
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(validMultiRowYaml);
expect(result.success).toBe(true);
expect(result.data?.rows).toHaveLength(2);
expect(result.data?.rows?.[0]?.items).toHaveLength(2);
expect(result.data?.rows?.[1]?.items).toHaveLength(1);
});
test('should validate dashboard with modified layout to maximum 4 items per row', () => {
const maxItemsYaml = `
name: Updated Detailed Dashboard
description: Modified dashboard with maximum items per row
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
      - id: 550e8400-e29b-41d4-a716-446655440000
      - id: 6ba7b810-9dad-11d1-80b4-00c04fd430c8
    columnSizes:
      - 3
      - 3
      - 3
      - 3
`;
const result = parseAndValidateYaml(maxItemsYaml);
expect(result.success).toBe(true);
expect(result.data?.rows?.[0]?.items).toHaveLength(4);
expect(result.data?.rows?.[0]?.columnSizes).toEqual([3, 3, 3, 3]);
});
test('should reject modified dashboard with missing required fields', () => {
const missingFieldsYaml = `
name: Incomplete Modified Dashboard
# Missing description and rows
`;
const result = parseAndValidateYaml(missingFieldsYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Invalid YAML structure');
});
test('should reject modified dashboard with invalid column sizes', () => {
const invalidColumnSizesYaml = `
name: Invalid Modified Dashboard
description: Modified dashboard with invalid column sizes
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
    columnSizes:
      - 4
      - 4
`;
const result = parseAndValidateYaml(invalidColumnSizesYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Column sizes must sum to exactly 12');
});
test('should reject modified dashboard with column size less than 3', () => {
const tooSmallColumnYaml = `
name: Updated Small Column Dashboard
description: Modified dashboard with column size less than 3
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
    columnSizes:
      - 2
      - 10
`;
const result = parseAndValidateYaml(tooSmallColumnYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Each column size must be at least 3');
});
test('should reject modified dashboard with more than 4 items per row', () => {
const tooManyItemsYaml = `
name: Updated Too Many Items Dashboard
description: Modified dashboard with more than 4 items per row
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
      - id: 550e8400-e29b-41d4-a716-446655440000
      - id: 6ba7b810-9dad-11d1-80b4-00c04fd430c8
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d555
    columnSizes:
      - 3
      - 3
      - 2
      - 2
      - 2
`;
const result = parseAndValidateYaml(tooManyItemsYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Each row can have at most 4 items');
});
test('should reject modified dashboard with mismatched items and column sizes', () => {
const mismatchedCountYaml = `
name: Updated Mismatched Count Dashboard
description: Modified dashboard with mismatched items and column sizes
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
    columnSizes:
      - 4
      - 4
      - 4
`;
const result = parseAndValidateYaml(mismatchedCountYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Number of items must match number of column sizes');
});
test('should reject modified dashboard with duplicate row IDs', () => {
const duplicateRowIdsYaml = `
name: Updated Duplicate Row IDs Dashboard
description: Modified dashboard with duplicate row IDs
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
    columnSizes:
      - 12
  - id: 1
    items:
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(duplicateRowIdsYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('All row IDs must be unique');
});
test('should reject modified dashboard with invalid metric UUID format', () => {
const invalidUuidYaml = `
name: Updated Invalid UUID Dashboard
description: Modified dashboard with invalid UUID format
rows:
  - id: 1
    items:
      - id: not-a-valid-uuid
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(invalidUuidYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Must be a valid UUID');
});
test('should reject modified dashboard with non-positive row ID', () => {
const invalidRowIdYaml = `
name: Updated Invalid Row ID Dashboard
description: Modified dashboard with invalid row ID
rows:
  - id: 0
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(invalidRowIdYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Row ID must be a positive integer');
});
test('should reject modified dashboard with no rows', () => {
const noRowsYaml = `
name: Updated No Rows Dashboard
description: Modified dashboard with no rows
rows: []
`;
const result = parseAndValidateYaml(noRowsYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Dashboard must have at least one row');
});
});
describe('Metric ID Validation for Modifications', () => {
test('should accept valid metric IDs in modified dashboard', () => {
const validIds = [
'f47ac10b-58cc-4372-a567-0e02b2c3d479',
'a47ac10b-58cc-4372-a567-0e02b2c3d480',
];
const result = validateMetricIds(validIds);
expect(result.success).toBe(true);
});
test('should reject invalid metric IDs in modified dashboard', () => {
const invalidIds = ['f47ac10b-58cc-4372-a567-0e02b2c3d479', 'non-existent-id'];
const result = validateMetricIds(invalidIds);
expect(result.success).toBe(false);
expect(result.missingIds).toEqual(['non-existent-id']);
});
test('should handle empty metric IDs array in modified dashboard', () => {
const result = validateMetricIds([]);
expect(result.success).toBe(true);
});
test('should identify multiple missing IDs in modified dashboard', () => {
const invalidIds = ['missing-1', 'missing-2', 'f47ac10b-58cc-4372-a567-0e02b2c3d479'];
const result = validateMetricIds(invalidIds);
expect(result.success).toBe(false);
expect(result.missingIds).toEqual(['missing-1', 'missing-2']);
});
});
describe('Input Schema Validation for Updates', () => {
test('should validate correct update input format', () => {
const validInput = {
files: [
{
id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479',
yml_content:
'name: Updated Dashboard\\ndescription: Updated Test\\nrows:\\n - id: 1\\n items:\\n - id: f47ac10b-58cc-4372-a567-0e02b2c3d479\\n columnSizes:\\n - 12',
},
],
};
// Basic validation that files array exists and has proper structure
expect(validInput.files).toHaveLength(1);
expect(validInput.files[0]?.id).toBe('f47ac10b-58cc-4372-a567-0e02b2c3d479');
expect(typeof validInput.files[0]?.yml_content).toBe('string');
});
test('should reject empty files array for updates', () => {
const invalidInput = { files: [] };
// This would fail our minimum length validation
expect(invalidInput.files).toHaveLength(0);
});
test('should reject update input without ID', () => {
const invalidInput = {
files: [
{
// Missing id
yml_content: 'name: Updated Test',
},
],
};
expect(invalidInput.files?.[0]).not.toHaveProperty('id');
});
test('should reject update input without yml_content', () => {
const invalidInput = {
files: [
{
id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479',
// Missing yml_content
},
],
};
expect(invalidInput.files?.[0]).not.toHaveProperty('yml_content');
});
test('should validate bulk update input format', () => {
const bulkInput = {
files: [
{
id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479',
yml_content:
'name: First Updated Dashboard\\ndescription: First Update\\nrows:\\n - id: 1\\n items:\\n - id: f47ac10b-58cc-4372-a567-0e02b2c3d479\\n columnSizes:\\n - 12',
},
{
id: 'a47ac10b-58cc-4372-a567-0e02b2c3d480',
yml_content:
'name: Second Updated Dashboard\\ndescription: Second Update\\nrows:\\n - id: 1\\n items:\\n - id: a47ac10b-58cc-4372-a567-0e02b2c3d480\\n columnSizes:\\n - 12',
},
],
};
expect(bulkInput.files).toHaveLength(2);
expect(bulkInput.files.every((f) => f.id && f.yml_content)).toBe(true);
});
test('should reject invalid UUID format in ID field', () => {
const invalidUuidInput = {
files: [
{
id: 'not-a-valid-uuid',
yml_content: 'name: Test Dashboard',
},
],
};
// This would fail UUID validation
expect(invalidUuidInput.files[0]?.id).toBe('not-a-valid-uuid');
});
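    // Illustrative cross-check (added as an example, not part of the original tool tests):
    // the shallow shape assertions above can also be exercised against the approximated
    // `modifyFilesInputSchema` defined near the top of this file. That schema is an
    // assumption about the tool's input shape, not an import from the tool itself.
    test('approximated input schema rejects the invalid shapes above', () => {
      expect(modifyFilesInputSchema.safeParse({ files: [] }).success).toBe(false);
      expect(
        modifyFilesInputSchema.safeParse({
          files: [{ yml_content: 'name: Updated Test' }],
        }).success
      ).toBe(false);
      expect(
        modifyFilesInputSchema.safeParse({
          files: [{ id: 'not-a-valid-uuid', yml_content: 'name: Test Dashboard' }],
        }).success
      ).toBe(false);
    });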
});
describe('Dashboard Modification Schema Validation', () => {
test('should validate modified single item row', () => {
const singleItemRow = {
id: 1,
items: [{ id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479' }],
columnSizes: [12],
};
const result = dashboardRowSchema.safeParse(singleItemRow);
expect(result.success).toBe(true);
});
test('should validate modified two item row', () => {
const twoItemRow = {
id: 2,
items: [
{ id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479' },
{ id: 'a47ac10b-58cc-4372-a567-0e02b2c3d480' },
],
columnSizes: [6, 6],
};
const result = dashboardRowSchema.safeParse(twoItemRow);
expect(result.success).toBe(true);
});
test('should validate modified three item row', () => {
const threeItemRow = {
id: 3,
items: [
{ id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479' },
{ id: 'a47ac10b-58cc-4372-a567-0e02b2c3d480' },
{ id: '550e8400-e29b-41d4-a716-446655440000' },
],
columnSizes: [4, 4, 4],
};
const result = dashboardRowSchema.safeParse(threeItemRow);
expect(result.success).toBe(true);
});
test('should validate modified four item row', () => {
const fourItemRow = {
id: 4,
items: [
{ id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479' },
{ id: 'a47ac10b-58cc-4372-a567-0e02b2c3d480' },
{ id: '550e8400-e29b-41d4-a716-446655440000' },
{ id: '6ba7b810-9dad-11d1-80b4-00c04fd430c8' },
],
columnSizes: [3, 3, 3, 3],
};
const result = dashboardRowSchema.safeParse(fourItemRow);
expect(result.success).toBe(true);
});
test('should reject modified row with invalid column size sum', () => {
const invalidSumRow = {
id: 1,
items: [{ id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479' }],
columnSizes: [10], // Should be 12
};
const result = dashboardRowSchema.safeParse(invalidSumRow);
expect(result.success).toBe(false);
});
});
describe('Error Message Generation for Modifications', () => {
test('should generate appropriate error message for invalid YAML in modification', () => {
const invalidYaml = 'invalid: yaml: [structure';
const result = parseAndValidateYaml(invalidYaml);
expect(result.success).toBe(false);
expect(result.error).toBeDefined();
expect(typeof result.error).toBe('string');
});
test('should generate appropriate error message for metric validation in modification', () => {
const invalidIds = ['missing-metric-id'];
const result = validateMetricIds(invalidIds);
expect(result.success).toBe(false);
expect(result.missingIds).toEqual(['missing-metric-id']);
});
test('should handle complex validation errors in modification', () => {
const complexInvalidYaml = `
name: Complex Invalid Modified Dashboard
description: Modified dashboard with multiple validation errors
rows:
  - id: 1
    items:
      - id: invalid-uuid
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
    columnSizes:
      - 2
      - 8
`;
const result = parseAndValidateYaml(complexInvalidYaml);
expect(result.success).toBe(false);
expect(result.error).toContain('Invalid YAML structure');
});
});
describe('Dashboard Item Schema Validation for Modifications', () => {
test('should validate correct dashboard item in modification', () => {
const validItem = {
id: 'f47ac10b-58cc-4372-a567-0e02b2c3d479',
};
const result = dashboardItemSchema.safeParse(validItem);
expect(result.success).toBe(true);
});
test('should reject dashboard item with invalid UUID in modification', () => {
const invalidItem = {
id: 'not-a-uuid',
};
const result = dashboardItemSchema.safeParse(invalidItem);
expect(result.success).toBe(false);
});
test('should reject dashboard item without ID in modification', () => {
const itemWithoutId = {};
const result = dashboardItemSchema.safeParse(itemWithoutId);
expect(result.success).toBe(false);
});
});
describe('Column Size Edge Cases for Modifications', () => {
test('should accept valid column size combinations in modifications', () => {
const validCombinations = [[12], [6, 6], [4, 4, 4], [3, 3, 3, 3], [3, 9], [4, 8], [5, 7]];
for (const columnSizes of validCombinations) {
const sum = columnSizes.reduce((a, b) => a + b, 0);
expect(sum).toBe(12);
const allValid = columnSizes.every((size) => size >= 3 && size <= 12);
expect(allValid).toBe(true);
}
});
test('should reject invalid column size combinations in modifications', () => {
const invalidCombinations = [
[13], // Too large
[11], // Sum not 12
[2, 10], // Size too small
[6, 6, 1], // Size too small, sum not 12
[1, 1, 10], // Sizes too small
[15], // Size too large
];
for (const columnSizes of invalidCombinations) {
const sum = columnSizes.reduce((a, b) => a + b, 0);
const hasInvalidSize = columnSizes.some((size) => size < 3 || size > 12);
const invalidSum = sum !== 12;
expect(hasInvalidSize || invalidSum).toBe(true);
}
});
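    // Illustrative cross-check (added as an example): push the same combinations through
    // dashboardRowSchema so the inline sum/size arithmetic above is verified against the
    // actual schema behavior. The metric UUIDs are placeholders reused from the mock list
    // in validateMetricIds.
    test('dashboardRowSchema agrees with the inline column size checks', () => {
      const placeholderIds = [
        'f47ac10b-58cc-4372-a567-0e02b2c3d479',
        'a47ac10b-58cc-4372-a567-0e02b2c3d480',
        '550e8400-e29b-41d4-a716-446655440000',
        '6ba7b810-9dad-11d1-80b4-00c04fd430c8',
      ];
      // Build a row with one placeholder item per column size so the item/size counts match.
      const buildRow = (columnSizes: number[]) => ({
        id: 1,
        items: columnSizes.map((_, i) => ({ id: placeholderIds[i % placeholderIds.length] })),
        columnSizes,
      });
      const validCombinations = [[12], [6, 6], [4, 4, 4], [3, 3, 3, 3], [3, 9], [4, 8], [5, 7]];
      for (const columnSizes of validCombinations) {
        expect(dashboardRowSchema.safeParse(buildRow(columnSizes)).success).toBe(true);
      }
      const invalidCombinations = [[13], [11], [2, 10], [6, 6, 1], [1, 1, 10], [15]];
      for (const columnSizes of invalidCombinations) {
        expect(dashboardRowSchema.safeParse(buildRow(columnSizes)).success).toBe(false);
      }
    });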
});
describe('Version History and Modification Context', () => {
test('should handle modification scenarios where name changes', () => {
const originalName = 'Original Dashboard';
const updatedYaml = `
name: Completely Renamed Dashboard
description: This dashboard has been renamed
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(updatedYaml);
expect(result.success).toBe(true);
expect(result.data?.name).toBe('Completely Renamed Dashboard');
expect(result.data?.name).not.toBe(originalName);
});
test('should handle modification scenarios where description changes', () => {
const updatedYaml = `
name: Sales Dashboard
description: Updated and enhanced view of sales metrics with new features
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(updatedYaml);
expect(result.success).toBe(true);
expect(result.data?.description).toContain('Updated and enhanced');
});
test('should handle modification scenarios where row structure changes', () => {
const restructuredYaml = `
name: Restructured Dashboard
description: Dashboard with completely new row structure
rows:
  - id: 1
    items:
      - id: f47ac10b-58cc-4372-a567-0e02b2c3d479
      - id: a47ac10b-58cc-4372-a567-0e02b2c3d480
    columnSizes:
      - 4
      - 8
  - id: 2
    items:
      - id: 550e8400-e29b-41d4-a716-446655440000
    columnSizes:
      - 12
`;
const result = parseAndValidateYaml(restructuredYaml);
expect(result.success).toBe(true);
expect(result.data?.rows).toHaveLength(2);
expect(result.data?.rows?.[0]?.columnSizes).toEqual([4, 8]);
expect(result.data?.rows?.[1]?.columnSizes).toEqual([12]);
});
});
});