mirror of https://github.com/buster-so/buster.git
Merge pull request #1169 from buster-so/dallin-bus-1938-report-needs-to-cascade-up-if-metric-modified
sql dialect and tool deltas
This commit is contained in commit f0686fa54e
@@ -41,10 +41,15 @@ export const SQL_DIALECT_GUIDANCE = {
     GROUP BY ds.period_date
     ORDER BY ds.period_date;
     \`\`\`
   - **Common Gotchas**:
     - \`NOW()\` returns timestamp with timezone, \`CURRENT_TIMESTAMP\` is standard SQL
     - String concatenation: Use \`||\` not \`+\`
-    - \`NULL\` comparisons: Use \`IS NULL\`/\`IS NOT NULL\`, never \`= NULL\``,
+    - \`NULL\` comparisons: Use \`IS NULL\`/\`IS NOT NULL\`, never \`= NULL\`
+
+  - **AI/ML and Statistical Capabilities**:
+    - **Native stats**: strong analytical/window functions and aggregates like \`CORR()\`, \`COVAR_POP()\`, \`COVAR_SAMP()\`, \`REGR_SLOPE()\`, \`REGR_INTERCEPT()\`, \`PERCENTILE_CONT()\`, \`PERCENTILE_DISC()\`.
+    - **Forecasting/ML**: no native forecasting/ML/LLM functions.
+    - **Sentiment/LLM**: not built-in.`,
 
   snowflake: `- **Date/Time Functions (Snowflake)**:
     - **\`DATE_TRUNC\`**: Similar usage: \`DATE_TRUNC('DAY', column)\`, \`DATE_TRUNC('WEEK', column)\`, \`DATE_TRUNC('MONTH', column)\`. Week start depends on \`WEEK_START\` parameter (default Sunday).
@@ -81,7 +86,11 @@ export const SQL_DIALECT_GUIDANCE = {
   - **Common Gotchas**:
     - Column names are case-insensitive but stored uppercase unless quoted
     - Use \`||\` for string concatenation, not \`+\`
-    - \`GENERATOR()\` is powerful but can consume credits quickly with large row counts`,
+    - \`GENERATOR()\` is powerful but can consume credits quickly with large row counts
+
+  - **AI/ML and LLM (Cortex)**:
+    - **Cortex AI SQL**: \`SNOWFLAKE.CORTEX.COMPLETE()\`, \`SNOWFLAKE.CORTEX.SUMMARIZE()\`, \`SNOWFLAKE.CORTEX.SENTIMENT()\`, \`SNOWFLAKE.CORTEX.TRANSLATE()\`, \`SNOWFLAKE.CORTEX.EXTRACT_ANSWER()\`, \`SNOWFLAKE.CORTEX.SEARCH_PREVIEW()\` for prompts, summarization, sentiment, translation, answer extraction, and search validation in SQL.
+    - **Statistical functions**: native correlation and regression functions available.`,
 
   bigquery: `- **Date/Time Functions (BigQuery)**:
     - **\`DATE_TRUNC\`**: \`DATE_TRUNC(column, DAY)\`, \`DATE_TRUNC(column, WEEK)\`, \`DATE_TRUNC(column, MONTH)\`, etc. Week starts Sunday by default, use \`WEEK(MONDAY)\` for Monday start.
@@ -125,7 +134,11 @@ export const SQL_DIALECT_GUIDANCE = {
   - **Common Gotchas**:
     - Table names must be quoted with backticks: \`project.dataset.table\`
     - String concatenation with \`CONCAT()\` function, not \`||\`
-    - Slots are the currency - optimize for slot usage, not just time`,
+    - Slots are the currency - optimize for slot usage, not just time
+
+  - **AI/ML**:
+    - **Native statistical functions**: \`CORR\`, \`COVAR_POP\`, \`APPROX_*\` for scalable analytics.
+    - **Forecasting**: \`ML.FORECAST\` function available for time series predictions on existing models.`,
 
   redshift: `- **Date/Time Functions (Redshift)**:
     - **\`DATE_TRUNC\`**: Similar to PostgreSQL: \`DATE_TRUNC('day', column)\`, \`DATE_TRUNC('week', column)\`, \`DATE_TRUNC('month', column)\`. Week starts Monday.
@@ -167,9 +180,13 @@ export const SQL_DIALECT_GUIDANCE = {
     \`\`\`
   - **Common Gotchas**:
     - No support for arrays or complex data types
     - Limited regex support compared to PostgreSQL
     - Case-sensitive string comparisons by default
-    - \`LIMIT\` without \`ORDER BY\` returns unpredictable results`,
+    - \`LIMIT\` without \`ORDER BY\` returns unpredictable results
+
+  - **AI/ML**:
+    - **Statistical functions**: built-in \`CORR\`, \`COVAR_*\`, \`REGR_*\` for analysis.
+    - **Prediction**: \`PREDICT\` function available for inference on trained models.`,
 
   mysql: `- **Date/Time Functions (MySQL/MariaDB)**:
     - **\`DATE_FORMAT\`**: Use \`DATE_FORMAT(column, '%Y-%m-01')\` for month truncation. For week, use \`STR_TO_DATE(CONCAT(YEAR(column),'-',WEEK(column, 1),' Monday'), '%X-%V %W')\` (Mode 1 starts week on Monday).
@@ -206,7 +223,11 @@ export const SQL_DIALECT_GUIDANCE = {
     - \`LIMIT\` without \`ORDER BY\` returns unpredictable results
     - String comparison is case-insensitive by default (depends on collation)
     - Use \`CONCAT()\` for string concatenation, not \`+\`
-    - \`GROUP BY\` behavior differs from standard SQL (sql_mode affects this)`,
+    - \`GROUP BY\` behavior differs from standard SQL (sql_mode affects this)
+
+  - **AI/ML**:
+    - **HeatWave AutoML (Enterprise/HeatWave)**: PREDICT function available for inference on trained models (requires existing models).
+    - **Community MySQL**: no native ML/LLM functions.`,
 
   mariadb: `- **Date/Time Functions (MySQL/MariaDB)**:
     - **\`DATE_FORMAT\`**: Use \`DATE_FORMAT(column, '%Y-%m-01')\` for month truncation. For week, use \`STR_TO_DATE(CONCAT(YEAR(column),'-',WEEK(column, 1),' Monday'), '%X-%V %W')\` (Mode 1 starts week on Monday).
@@ -239,7 +260,11 @@ export const SQL_DIALECT_GUIDANCE = {
     GROUP BY ds.period_date
     ORDER BY ds.period_date;
     \`\`\`
-  - **Common Gotchas**: Generally more standards-compliant than MySQL, but same basic patterns apply`,
+  - **Common Gotchas**: Generally more standards-compliant than MySQL, but same basic patterns apply
+
+  - **AI/ML**:
+    - No built-in ML/LLM/forecasting in core MariaDB.
+    - **Statistical functions**: basic correlation and regression functions available.`,
 
   sqlserver: `- **Date/Time Functions (SQL Server)**:
     - **\`DATE_TRUNC\`**: Available in recent versions: \`DATE_TRUNC('day', column)\`, \`DATE_TRUNC('week', column)\`, \`DATE_TRUNC('month', column)\`. Week start depends on \`DATEFIRST\` setting.
@@ -276,7 +301,11 @@ export const SQL_DIALECT_GUIDANCE = {
     - Square bracket notation for reserved words: \`[order]\`, \`[user]\`
     - String concatenation with \`+\` can return NULL if any operand is NULL
     - \`ISNULL()\` function vs \`IS NULL\` condition
-    - \`TOP\` clause requires \`ORDER BY\` for deterministic results`,
+    - \`TOP\` clause requires \`ORDER BY\` for deterministic results
+
+  - **AI/ML**:
+    - **Native prediction**: \`PREDICT\` function available for inference on trained ONNX models.
+    - **Statistical functions**: rich window/analytic functions; correlation functions available.`,
 
   databricks: `- **Date/Time Functions (Databricks SQL)**:
     - **\`DATE_TRUNC\`**: \`DATE_TRUNC('DAY', column)\`, \`DATE_TRUNC('WEEK', column)\`, \`DATE_TRUNC('MONTH', column)\`. Week starts Monday.
@@ -312,7 +341,11 @@ export const SQL_DIALECT_GUIDANCE = {
     - Case-sensitive column names by default (unlike some SQL dialects)
     - Use \`concat()\` function for string concatenation, not \`||\` or \`+\`
     - \`sequence()\` function is powerful but can be memory-intensive for large ranges
-    - Delta Lake tables require explicit \`REFRESH TABLE\` after external writes`,
+    - Delta Lake tables require explicit \`REFRESH TABLE\` after external writes
+
+  - **AI/ML and LLM**:
+    - **AI functions (Databricks SQL)**: \`ai_generate_text()\`, \`ai_summarize()\`, \`ai_translate()\`, \`ai_analyze_sentiment()\` provide LLM/NLP in SQL.
+    - **Model prediction**: \`PREDICT\` function available for inference on registered models.`,
 
   supabase: `- **Date/Time Functions (PostgreSQL/Supabase)**:
     - **\`DATE_TRUNC\`**: Prefer \`DATE_TRUNC('day', column)\`, \`DATE_TRUNC('week', column)\`, \`DATE_TRUNC('month', column)\`, etc., for grouping time series data. Note that \`'week'\` starts on Monday.
@@ -341,7 +374,11 @@ export const SQL_DIALECT_GUIDANCE = {
     LEFT JOIN schema.transactions t ON DATE_TRUNC('month', t.date) = ds.period_date
     GROUP BY ds.period_date
     ORDER BY ds.period_date;
-    \`\`\``,
+    \`\`\`
+
+  - **AI/ML Notes (Supabase)**:
+    - Supabase is PostgreSQL: leverage the same native capabilities and statistical functions.
+    - Extension availability and compute limits vary by project plan—validate before relying on heavy analytics`,
 } as const;
 
 export type SqlDialect = keyof typeof SQL_DIALECT_GUIDANCE;
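The `} as const` map plus `keyof typeof` export shown above is a standard TypeScript pattern for deriving a union type from an object's keys. A minimal, self-contained sketch of the same idea (the dialect entries here are illustrative placeholders, not the file's real guidance strings):

```typescript
// Minimal sketch of the `as const` + `keyof typeof` pattern used by
// SQL_DIALECT_GUIDANCE; the entries are hypothetical placeholders.
const DIALECT_GUIDANCE = {
  postgres: 'Use DATE_TRUNC; concatenate strings with ||, not +.',
  bigquery: 'Use CONCAT(); quote table names with backticks.',
} as const;

// The union type is derived from the object keys, so adding a dialect
// automatically widens the type, and a typo in a key is a compile error.
type Dialect = keyof typeof DIALECT_GUIDANCE; // 'postgres' | 'bigquery'

function guidanceFor(dialect: Dialect): string {
  return DIALECT_GUIDANCE[dialect];
}
```

Because the object is `as const`, callers of `guidanceFor` can only pass keys that actually exist in the map, which is exactly what the exported `SqlDialect` type gives consumers of this module.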
@@ -1,4 +1,14 @@
-import { type UpdateMessageEntriesParams, updateMessageEntries } from '@buster/database/queries';
+import {
+  type UpdateMessageEntriesParams,
+  getAssetLatestVersion,
+  updateChat,
+  updateMessage,
+  updateMessageEntries,
+} from '@buster/database/queries';
+import {
+  type ResponseMessageFileType,
+  ResponseMessageFileTypeSchema,
+} from '@buster/database/schema-types';
 import type { ToolCallOptions } from 'ai';
 import {
   OptimisticJsonParser,
@@ -13,6 +23,7 @@ import {
 // Type-safe key extraction from the schema - will cause compile error if field name changes
 // Using keyof with the inferred type ensures we're using the actual schema keys
 const FINAL_RESPONSE_KEY = 'finalResponse' as const satisfies keyof DoneToolInput;
+const ASSETS_TO_RETURN_KEY = 'assetsToReturn' as const satisfies keyof DoneToolInput;
 
 export function createDoneToolDelta(context: DoneToolContext, doneToolState: DoneToolState) {
   return async function doneToolDelta(
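The `as const satisfies keyof` trick in the hunk above pins a string literal to an actual key of the input type at compile time. A self-contained sketch with a stand-in input type (the real `DoneToolInput` is inferred from a schema):

```typescript
// Stand-in for the tool's input shape; the real DoneToolInput is schema-inferred.
type DoneToolInput = {
  finalResponse: string;
  assetsToReturn: unknown[];
};

// If either property were renamed in DoneToolInput, these lines would fail to
// compile, because `satisfies keyof DoneToolInput` checks the literal against
// the type's keys while `as const` keeps the narrow literal type.
const FINAL_RESPONSE_KEY = 'finalResponse' as const satisfies keyof DoneToolInput;
const ASSETS_TO_RETURN_KEY = 'assetsToReturn' as const satisfies keyof DoneToolInput;

// At runtime they are ordinary strings, usable for dynamic lookups.
const input: DoneToolInput = { finalResponse: 'done', assetsToReturn: [] };
const value = input[FINAL_RESPONSE_KEY];
```

This gives the streaming parser safe key names without hardcoding strings that could silently drift from the schema.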
@@ -30,7 +41,145 @@ export function createDoneToolDelta(context: DoneToolContext, doneToolState: Don
       FINAL_RESPONSE_KEY
     );
 
+    // Extract assetsToReturn; can be full array or stringified
+    const rawAssets = getOptimisticValue<unknown>(
+      parseResult.extractedValues,
+      ASSETS_TO_RETURN_KEY,
+      []
+    );
+
+    type AssetToReturn = {
+      assetId: string;
+      assetName: string;
+      assetType: ResponseMessageFileType;
+    };
+
+    function isAssetToReturn(value: unknown): value is AssetToReturn {
+      if (!value || typeof value !== 'object') return false;
+      const obj = value as Record<string, unknown>;
+      const idOk = typeof obj.assetId === 'string';
+      const nameOk = typeof obj.assetName === 'string';
+      const typeVal = obj.assetType;
+      const typeOk =
+        typeof typeVal === 'string' &&
+        ResponseMessageFileTypeSchema.options.includes(typeVal as ResponseMessageFileType);
+      return idOk && nameOk && typeOk;
+    }
+
+    let assetsToInsert: AssetToReturn[] = [];
+    if (Array.isArray(rawAssets)) {
+      assetsToInsert = rawAssets.filter(isAssetToReturn);
+    } else if (typeof rawAssets === 'string') {
+      try {
+        const parsed: unknown = JSON.parse(rawAssets);
+        if (Array.isArray(parsed)) {
+          assetsToInsert = parsed.filter(isAssetToReturn);
+        }
+      } catch {
+        // ignore malformed JSON until more delta arrives
+      }
+    }
+
+    // Insert any newly completed asset items as response messages (dedupe via state)
+    // Note: Reports are not added as file response messages, only non-report assets
+    if (assetsToInsert.length > 0 && context.messageId) {
+      const alreadyAdded = new Set(doneToolState.addedAssetIds || []);
+      const newAssets = assetsToInsert.filter((a) => !alreadyAdded.has(a.assetId));
+
+      // Filter out report_file assets from file responses - they don't get response messages
+      const nonReportAssets = newAssets.filter((a) => a.assetType !== 'report_file');
+
+      if (nonReportAssets.length > 0) {
+        const fileResponses = nonReportAssets.map((a) => ({
+          id: a.assetId,
+          type: 'file' as const,
+          file_type: a.assetType,
+          file_name: a.assetName,
+          version_number: 1,
+          filter_version_id: null,
+          metadata: [
+            {
+              status: 'completed' as const,
+              message: `Added ${a.assetType.replace('_file', '')} to response`,
+              timestamp: Date.now(),
+            },
+          ],
+        }));
+
+        // Upsert file messages alone to ensure they appear before the final text
+        const entriesForAssets: UpdateMessageEntriesParams = {
+          messageId: context.messageId,
+          responseMessages: fileResponses,
+        };
+
+        try {
+          await updateMessageEntries(entriesForAssets);
+          // Update state to prevent duplicates on next deltas
+          doneToolState.addedAssetIds = [
+            ...(doneToolState.addedAssetIds || []),
+            ...nonReportAssets.map((a) => a.assetId),
+          ];
+        } catch (error) {
+          console.error('[done-tool] Failed to add asset response entries from delta:', error);
+        }
+      }
+
+      // Store ALL assets (including reports) for chat update later
+      if (newAssets.length > 0) {
+        doneToolState.addedAssets = [
+          ...(doneToolState.addedAssets || []),
+          ...newAssets.map((a) => ({ assetId: a.assetId, assetType: a.assetType })),
+        ];
+      }
+    }
+
+    if (finalResponse !== undefined && finalResponse !== '') {
+      // Mark final reasoning now (after assets have been handled above) and before text streams
+      if (context.messageId) {
+        try {
+          const currentTime = Date.now();
+          const elapsedTimeMs = currentTime - context.workflowStartTime;
+          const elapsedSeconds = Math.floor(elapsedTimeMs / 1000);
+
+          let timeString: string;
+          if (elapsedSeconds < 60) {
+            timeString = `${elapsedSeconds} seconds`;
+          } else {
+            const elapsedMinutes = Math.floor(elapsedSeconds / 60);
+            timeString = `${elapsedMinutes} minutes`;
+          }
+
+          await updateMessage(context.messageId, {
+            finalReasoningMessage: `Reasoned for ${timeString}`,
+          });
+
+          // Update chat's most_recent fields with the first asset that was returned
+          if (doneToolState.addedAssets && doneToolState.addedAssets.length > 0 && context.chatId) {
+            try {
+              const firstAsset = doneToolState.addedAssets[0];
+
+              if (firstAsset) {
+                // Get the actual version number from the database
+                const versionNumber = await getAssetLatestVersion({
+                  assetId: firstAsset.assetId,
+                  assetType: firstAsset.assetType,
+                });
+
+                await updateChat(context.chatId, {
+                  mostRecentFileId: firstAsset.assetId,
+                  mostRecentFileType: firstAsset.assetType,
+                  mostRecentVersionNumber: versionNumber,
+                });
+              }
+            } catch (error) {
+              console.error('[done-tool] Failed to update chat most_recent fields:', error);
+            }
+          }
+        } catch (error) {
+          console.error('[done-tool] Failed to set final reasoning message in delta:', error);
+        }
+      }
+
       // Update the state with the extracted final_response
       doneToolState.finalResponse = finalResponse;
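The delta handler above has to tolerate partially streamed JSON: `assetsToReturn` may arrive as a real array or as a stringified one, each item is validated with a type guard, and parse errors are ignored because the next delta may complete the payload. A simplified, self-contained sketch of that flow (the allowed file types are reduced to two literals standing in for `ResponseMessageFileTypeSchema.options`):

```typescript
type AssetToReturn = { assetId: string; assetName: string; assetType: string };

// Stand-in for the schema-derived list of allowed file types.
const FILE_TYPES = ['metric_file', 'report_file'] as const;

// Runtime type guard: every field must be a string and the type must be known.
function isAssetToReturn(value: unknown): value is AssetToReturn {
  if (!value || typeof value !== 'object') return false;
  const obj = value as Record<string, unknown>;
  return (
    typeof obj.assetId === 'string' &&
    typeof obj.assetName === 'string' &&
    typeof obj.assetType === 'string' &&
    (FILE_TYPES as readonly string[]).includes(obj.assetType)
  );
}

// Accepts a full array or a stringified array; malformed JSON is silently
// dropped because a later streaming delta may complete it.
function extractAssets(raw: unknown): AssetToReturn[] {
  if (Array.isArray(raw)) return raw.filter(isAssetToReturn);
  if (typeof raw === 'string') {
    try {
      const parsed: unknown = JSON.parse(raw);
      if (Array.isArray(parsed)) return parsed.filter(isAssetToReturn);
    } catch {
      // incomplete JSON mid-stream: wait for more input
    }
  }
  return [];
}
```

The same defensive shape (guard, then filter, then dedupe against state) is what lets the real handler run on every delta without double-inserting response messages.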
@@ -19,6 +19,7 @@ export function createDoneToolFinish(context: DoneToolContext, doneToolState: Do
       messageId: context.messageId,
     };
 
+    // Only add the final text response here; by now files have been inserted via delta
     if (doneToolResponseEntry) {
      entries.responseMessages = [doneToolResponseEntry];
     }
@@ -7,12 +7,14 @@ import { CREATE_METRICS_TOOL_NAME } from '../../visualization-tools/metrics/crea
 import { CREATE_REPORTS_TOOL_NAME } from '../../visualization-tools/reports/create-reports-tool/create-reports-tool';
 import { MODIFY_REPORTS_TOOL_NAME } from '../../visualization-tools/reports/modify-reports-tool/modify-reports-tool';
 import type { DoneToolContext, DoneToolState } from './done-tool';
+import { createDoneToolDelta } from './done-tool-delta';
 import { createDoneToolStart } from './done-tool-start';
 
 vi.mock('@buster/database/queries', () => ({
   updateChat: vi.fn(),
   updateMessage: vi.fn(),
   updateMessageEntries: vi.fn(),
+  getAssetLatestVersion: vi.fn().mockResolvedValue(1),
 }));
 
 describe('done-tool-start', () => {
@@ -121,11 +123,31 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
+    // Start phase - initializes state
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - streams in the assets and final response
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: reportId,
+          assetName: 'Quarterly Report',
+          assetType: 'report_file',
+        },
+      ],
+      finalResponse: 'Report created successfully',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: reportId,
       mostRecentFileType: 'report_file',
@@ -208,11 +230,30 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - stream in the first metric as the asset to return
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: metricId1,
+          assetName: 'Revenue Growth',
+          assetType: 'metric_file',
+        },
+      ],
+      finalResponse: 'Metrics created successfully',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     // Should select the first metric (first in extractedFiles)
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: metricId1,
@@ -300,11 +341,35 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - stream in the report and standalone metric as assets to return
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: reportId,
+          assetName: 'Monthly Report',
+          assetType: 'report_file',
+        },
+        {
+          assetId: standaloneMetricId,
+          assetName: 'Standalone Metric',
+          assetType: 'metric_file',
+        },
+      ],
+      finalResponse: 'Report created with embedded metrics',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     // Report should be selected as mostRecentFile
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: reportId,
@@ -376,11 +441,30 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
    await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - stream in the first metric
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: metricId1,
+          assetName: 'Revenue Metric',
+          assetType: 'metric_file',
+        },
+      ],
+      finalResponse: 'Multiple metrics created',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     // Should select the first metric
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: metricId1,
@@ -424,11 +508,30 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - stream in the first dashboard
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: dashboardId1,
+          assetName: 'Main Dashboard',
+          assetType: 'dashboard_file',
+        },
+      ],
+      finalResponse: 'Multiple dashboards created',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     // Should select the first dashboard
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: dashboardId1,
@@ -491,11 +594,30 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - stream in the dashboard
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: dashboardId,
+          assetName: 'Analytics Dashboard',
+          assetType: 'dashboard_file',
+        },
+      ],
+      finalResponse: 'Dashboard and metrics created',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     // Should select the dashboard (first in extractedFiles)
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: dashboardId,
@@ -557,11 +679,30 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
+    // Delta phase - stream in the report
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: reportId,
+          assetName: 'Analysis Report',
+          assetType: 'report_file',
+        },
+      ],
+      finalResponse: 'Report created',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
     // Report should still be selected as mostRecentFile
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
       mostRecentFileId: reportId,
@@ -629,15 +770,34 @@ describe('done-tool-start', () => {
     ];
 
     const doneToolStart = createDoneToolStart(mockContext, mockDoneToolState);
+    const doneToolDelta = createDoneToolDelta(mockContext, mockDoneToolState);
 
     await doneToolStart({
       toolCallId: 'done-call',
       messages: mockMessages,
     } as ToolCallOptions);
 
-    // Should select the standalone metric (first in extractedFiles after filtering)
+    // Delta phase - stream in the dashboard (metrics are embedded)
+    const deltaInput = JSON.stringify({
+      assetsToReturn: [
+        {
+          assetId: dashboardId,
+          assetName: 'Main Dashboard',
+          assetType: 'dashboard_file',
+        },
+      ],
+      finalResponse: 'Dashboard with metrics created',
+    });
+
+    await doneToolDelta({
+      inputTextDelta: deltaInput,
+      toolCallId: 'done-call',
+    } as ToolCallOptions);
+
+    // Should select the dashboard since that's what we're returning
     expect(updateChat).toHaveBeenCalledWith('chat-123', {
-      mostRecentFileId: standaloneMetricId,
-      mostRecentFileType: 'metric_file',
+      mostRecentFileId: dashboardId,
+      mostRecentFileType: 'dashboard_file',
       mostRecentVersionNumber: 1,
     });
   });
@@ -3,12 +3,13 @@ import type { ToolCallOptions } from 'ai';
 import type { UpdateMessageEntriesParams } from '../../../../../database/src/queries/messages/update-message-entries';
 import { createRawToolResultEntry } from '../../shared/create-raw-llm-tool-result-entry';
 import { DONE_TOOL_NAME, type DoneToolContext, type DoneToolState } from './done-tool';
-import {
-  type ExtractedFile,
-  createFileResponseMessages,
-  extractAllFilesForChatUpdate,
-  extractFilesFromToolCalls,
-} from './helpers/done-tool-file-selection';
+// Selection logic moved to delta for optimistic insertion; keeping types but disabling extraction
+// import {
+//   type ExtractedFile,
+//   createFileResponseMessages,
+//   extractAllFilesForChatUpdate,
+//   extractFilesFromToolCalls,
+// } from './helpers/done-tool-file-selection';
 import {
   createDoneToolRawLlmMessageEntry,
   createDoneToolResponseMessage,
@@ -21,98 +22,21 @@ export function createDoneToolStart(context: DoneToolContext, doneToolState: Don
     doneToolState.toolCallId = options.toolCallId;
     doneToolState.args = undefined;
     doneToolState.finalResponse = undefined;
+    doneToolState.addedAssetIds = [];
+    doneToolState.addedAssets = [];
 
-    // Extract files from the tool call responses in messages
+    // Selection logic moved to delta; skip extracting files here
     if (options.messages) {
-      console.info('[done-tool-start] Extracting files from messages', {
-        messageCount: options.messages?.length,
-        toolCallId: options.toolCallId,
-      });
-
-      // Extract files for response messages (filtered to avoid duplicates)
-      const extractedFiles = extractFilesFromToolCalls(options.messages);
-
-      // Extract ALL files for updating the chat's most recent file (includes reports)
-      const allFilesForChatUpdate = extractAllFilesForChatUpdate(options.messages);
-
-      console.info('[done-tool-start] Files extracted', {
-        filesForResponseMessages: extractedFiles.length,
-        responseFiles: extractedFiles.map((f) => ({
-          id: f.id,
-          type: f.fileType,
-          name: f.fileName,
-        })),
-        allFilesCreated: allFilesForChatUpdate.length,
-        allFiles: allFilesForChatUpdate.map((f) => ({
-          id: f.id,
-          type: f.fileType,
-          name: f.fileName,
-        })),
-      });
-
-      // Add extracted files as response messages (these are filtered to avoid duplicates)
-      if (extractedFiles.length > 0 && context.messageId) {
-        const fileResponses = createFileResponseMessages(extractedFiles);
-
-        console.info('[done-tool-start] Creating file response messages', {
-          responseCount: fileResponses.length,
-        });
-
-        // Add all files as response entries to the database in a single batch
-        try {
-          await updateMessageEntries({
-            messageId: context.messageId,
-            responseMessages: fileResponses,
-          });
-        } catch (error) {
-          console.error('[done-tool] Failed to add file response entries:', error);
-        }
-      }
-
-      // Update the chat with the most recent file
-      // Priority: Reports (already in responses) > First extracted file > Any file
-      if (context.chatId && allFilesForChatUpdate.length > 0) {
-        let mostRecentFile: ExtractedFile | undefined;
-
-        // Priority 1: Report files (they're already in response messages)
-        const reportFile = allFilesForChatUpdate.find((f) => f.fileType === 'report_file');
-        if (reportFile) {
-          mostRecentFile = reportFile;
-        }
-        // Priority 2: First file from extractedFiles (metrics/dashboards being added as responses)
-        else if (extractedFiles.length > 0) {
-          mostRecentFile = extractedFiles[0];
-        }
-        // Priority 3: Fallback to any file from allFilesForChatUpdate
-        else {
-          mostRecentFile = allFilesForChatUpdate[0];
-        }
-
-        if (mostRecentFile) {
-          console.info('[done-tool-start] Updating chat with most recent file', {
-            chatId: context.chatId,
-            fileId: mostRecentFile.id,
-            fileType: mostRecentFile.fileType,
-            fileName: mostRecentFile.fileName,
-            versionNumber: mostRecentFile.versionNumber,
-            wasFromExtracted: extractedFiles.some((f) => f.id === mostRecentFile.id),
-            wasReport: mostRecentFile.fileType === 'report_file',
-          });
-
-          try {
-            await updateChat(context.chatId, {
-              mostRecentFileId: mostRecentFile.id,
-              mostRecentFileType: mostRecentFile.fileType,
-              mostRecentVersionNumber: mostRecentFile.versionNumber || 1,
-            });
-          } catch (error) {
-            console.error('[done-tool] Failed to update chat with most recent file:', error);
-          }
-        }
-      }
+      console.info(
+        '[done-tool-start] Skipping file selection; handled in delta for optimistic insertion',
+        {
+          messageCount: options.messages?.length,
+          toolCallId: options.toolCallId,
+        }
+      );
     }
 
-    const doneToolResponseEntry = createDoneToolResponseMessage(doneToolState, options.toolCallId);
+    // Do not create the done text response here; wait until assets are inserted via delta
     const doneToolMessage = createDoneToolRawLlmMessageEntry(doneToolState, options.toolCallId);
 
     // Create the tool result immediately with success: true
@@ -125,9 +49,8 @@ export function createDoneToolStart(context: DoneToolContext, doneToolState: Don
       messageId: context.messageId,
     };
 
-    if (doneToolResponseEntry) {
-      entries.responseMessages = [doneToolResponseEntry];
-    }
+    // Intentionally skip adding responseMessages here to ensure file messages (from delta)
+    // are inserted before the final text message
 
     // Include both the tool call and tool result in raw LLM messages
     // Since it's an upsert, sending both together ensures completeness
@@ -145,25 +68,6 @@ export function createDoneToolStart(context: DoneToolContext, doneToolState: Don
       if (entries.responseMessages || entries.rawLlmMessages) {
         await updateMessageEntries(entries);
       }
-
-      // Add final reasoning message with workflow time (but don't mark as completed yet - that happens in execute)
-      if (context.messageId) {
-        const currentTime = Date.now();
-        const elapsedTimeMs = currentTime - context.workflowStartTime;
-        const elapsedSeconds = Math.floor(elapsedTimeMs / 1000);
-
-        let timeString: string;
-        if (elapsedSeconds < 60) {
-          timeString = `${elapsedSeconds} seconds`;
-        } else {
-          const elapsedMinutes = Math.floor(elapsedSeconds / 60);
-          timeString = `${elapsedMinutes} minutes`;
-        }
-
-        await updateMessage(context.messageId, {
-          finalReasoningMessage: `Reasoned for ${timeString}`,
-        });
-      }
     } catch (error) {
       console.error('[done-tool] Failed to update done tool raw LLM message:', error);
     }
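The "Reasoned for …" message removed from start here (and now produced in the delta handler) formats elapsed time as whole seconds below a minute and whole minutes otherwise, both floored. As a standalone helper mirroring that logic:

```typescript
// Mirrors the elapsed-time formatting used for the final reasoning message:
// whole seconds under a minute, whole minutes otherwise (both floored).
function formatElapsed(elapsedMs: number): string {
  const elapsedSeconds = Math.floor(elapsedMs / 1000);
  if (elapsedSeconds < 60) {
    return `${elapsedSeconds} seconds`;
  }
  const elapsedMinutes = Math.floor(elapsedSeconds / 60);
  return `${elapsedMinutes} minutes`;
}

// e.g. `Reasoned for ${formatElapsed(Date.now() - workflowStartTime)}`
```

Extracting this into a shared helper would avoid the duplication between the start and delta handlers that this diff otherwise leaves in place.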
@@ -12,6 +12,7 @@ vi.mock('@buster/database/queries', () => ({
   updateMessageEntries: vi.fn().mockResolvedValue({ success: true }),
   updateMessage: vi.fn().mockResolvedValue({ success: true }),
   updateChat: vi.fn().mockResolvedValue({ success: true }),
+  getAssetLatestVersion: vi.fn().mockResolvedValue(1),
 }));
 
 describe('Done Tool Streaming Tests', () => {
@ -143,9 +144,12 @@ describe('Done Tool Streaming Tests', () => {
|
|||
toolCallId: undefined,
|
||||
args: undefined,
|
||||
finalResponse: undefined,
|
||||
addedAssetIds: [],
|
||||
addedAssets: [],
|
||||
};
|
||||
|
||||
const startHandler = createDoneToolStart(mockContext, state);
|
||||
const deltaHandler = createDoneToolDelta(mockContext, state);
|
||||
|
||||
const reportId = 'report-1';
|
||||
const messages: ModelMessage[] = [
|
||||
|
@ -186,6 +190,22 @@ describe('Done Tool Streaming Tests', () => {
|
|||
|
||||
await startHandler({ toolCallId: 'call-1', messages });
|
||||
|
||||
// Now call delta with the asset data and final response
|
||||
const deltaInput = JSON.stringify({
|
||||
assetsToReturn: [
|
||||
{
|
||||
assetId: reportId,
|
||||
assetName: 'Quarterly Report',
|
||||
assetType: 'report_file',
|
||||
},
|
||||
],
|
||||
finalResponse: 'Report created successfully',
|
||||
});
|
||||
await deltaHandler({
|
||||
inputTextDelta: deltaInput,
|
||||
toolCallId: 'call-1',
|
||||
} as ToolCallOptions);
|
||||
|
||||
const queries = await import('@buster/database/queries');
|
||||
|
||||
// mostRecent should be set to the report
|
||||
|
@ -218,9 +238,12 @@ describe('Done Tool Streaming Tests', () => {
|
|||
toolCallId: undefined,
|
||||
args: undefined,
|
||||
finalResponse: undefined,
|
||||
addedAssetIds: [],
|
||||
addedAssets: [],
|
||||
};
|
||||
|
||||
const startHandler = createDoneToolStart(mockContext, state);
|
||||
const deltaHandler = createDoneToolDelta(mockContext, state);
|
||||
|
||||
const reportId = 'report-2';
|
||||
const metricId = 'metric-1';
|
||||
|
@ -285,9 +308,30 @@ describe('Done Tool Streaming Tests', () => {
|
|||
|
||||
await startHandler({ toolCallId: 'call-2', messages });
|
||||
|
||||
// Now call delta with the asset data and final response
|
||||
const deltaInput = JSON.stringify({
|
||||
assetsToReturn: [
|
||||
{
|
||||
assetId: reportId,
|
||||
assetName: 'Key Metrics Report',
|
||||
assetType: 'report_file',
|
||||
},
|
||||
{
|
||||
assetId: metricId,
|
||||
assetName: 'Revenue',
|
||||
assetType: 'metric_file',
|
||||
},
|
||||
],
|
||||
finalResponse: 'Report and metrics created successfully',
|
||||
});
|
||||
await deltaHandler({
|
||||
inputTextDelta: deltaInput,
|
||||
toolCallId: 'call-2',
|
||||
} as ToolCallOptions);
|
||||
|
||||
const queries = await import('@buster/database/queries');
|
||||
|
||||
// mostRecent should prefer the report
|
||||
// mostRecent should prefer the report (first asset returned)
|
||||
const updateArgs = ((queries.updateChat as unknown as { mock: { calls: unknown[][] } }).mock
|
||||
.calls?.[0]?.[1] || {}) as Record<string, unknown>;
|
||||
expect(updateArgs).toMatchObject({
|
||||
|
@ -295,7 +339,7 @@ describe('Done Tool Streaming Tests', () => {
|
|||
mostRecentFileType: 'report_file',
|
||||
});
|
||||
|
||||
// Response messages should include the metric file
|
||||
// Response messages should include both files
|
||||
const fileResponseCall = (
|
||||
queries.updateMessageEntries as unknown as { mock: { calls: [Record<string, any>][] } }
|
||||
).mock.calls.find(
|
||||
|
@ -322,9 +366,12 @@ describe('Done Tool Streaming Tests', () => {
|
|||
toolCallId: undefined,
|
||||
args: undefined,
|
||||
finalResponse: undefined,
|
||||
addedAssetIds: [],
|
||||
addedAssets: [],
|
||||
};
|
||||
|
||||
const startHandler = createDoneToolStart(mockContext, state);
|
||||
const deltaHandler = createDoneToolDelta(mockContext, state);
|
||||
|
||||
const dashboardId = 'dash-1';
|
||||
const metricId = 'metric-2';
|
||||
|
@ -378,6 +425,27 @@ describe('Done Tool Streaming Tests', () => {
|
|||
|
||||
await startHandler({ toolCallId: 'call-3', messages });
|
||||
|
||||
// Now call delta with the asset data and final response
|
||||
const deltaInput = JSON.stringify({
|
||||
assetsToReturn: [
|
||||
{
|
||||
assetId: dashboardId,
|
||||
assetName: 'Sales Dashboard',
|
||||
assetType: 'dashboard_file',
|
||||
},
|
||||
{
|
||||
assetId: metricId,
|
||||
assetName: 'Margin',
|
||||
assetType: 'metric_file',
|
||||
},
|
||||
],
|
||||
finalResponse: 'Dashboard and metrics created successfully',
|
||||
});
|
||||
await deltaHandler({
|
||||
inputTextDelta: deltaInput,
|
||||
toolCallId: 'call-3',
|
||||
} as ToolCallOptions);
|
||||
|
||||
const queries = await import('@buster/database/queries');
|
||||
const updateArgs = ((queries.updateChat as unknown as { mock: { calls: unknown[][] } }).mock
|
||||
.calls[0]?.[1] || {}) as Record<string, unknown>;
@@ -1,3 +1,4 @@
import { AssetTypeSchema } from '@buster/server-shared';
import { tool } from 'ai';
import { z } from 'zod';
import { createDoneToolDelta } from './done-tool-delta';

@@ -8,6 +9,17 @@ import { createDoneToolStart } from './done-tool-start';
export const DONE_TOOL_NAME = 'doneTool';

export const DoneToolInputSchema = z.object({
  assetsToReturn: z
    .array(
      z.object({
        assetId: z.string().uuid(),
        assetName: z.string(),
        assetType: AssetTypeSchema,
      })
    )
    .describe(
      'This should always be the first argument returned by the done tool. This should be the top-level asset that the user is trying to work with. Metrics, when involved in dashboards and reports, should always be bundled into their respective top-level assets. If a user asks to modify a metric on a dashboard or report then the dashboard or report should be returned here. A good rule of thumb is if any dashboard or report exists in the chat and a metric is part of it, the metric should not be returned.'
    ),
  finalResponse: z
    .string()
    .min(1, 'Final response is required')

@@ -40,6 +52,25 @@ const DoneToolStateSchema = z.object({
    .describe(
      'The final response message to the user. This is optional and will be set by the tool delta and finish'
    ),
  addedAssetIds: z
    .array(z.string())
    .optional()
    .describe('Asset IDs that have already been inserted as response messages to avoid duplicates'),
  addedAssets: z
    .array(
      z.object({
        assetId: z.string(),
        assetType: z.enum([
          'metric_file',
          'dashboard_file',
          'report_file',
          'analyst_chat',
          'collection',
        ]),
      })
    )
    .optional()
    .describe('Assets that have been added with their types for chat update'),
});

export type DoneToolInput = z.infer<typeof DoneToolInputSchema>;

@@ -52,6 +83,8 @@ export function createDoneTool(context: DoneToolContext) {
    toolCallId: undefined,
    args: undefined,
    finalResponse: undefined,
    addedAssetIds: [],
    addedAssets: [],
  };

  const execute = createDoneToolExecute(context, state);
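The bundling rule spelled out in the `assetsToReturn` description can be sketched as a plain filter. This `bundleTopLevelAssets` helper is hypothetical and simplifies the rule (it drops every metric whenever any dashboard or report is also returned, without checking actual membership):

```typescript
type ReturnedAsset = {
  assetId: string;
  assetName: string;
  assetType: 'metric_file' | 'dashboard_file' | 'report_file';
};

// Hypothetical helper: when a top-level asset (dashboard/report) is being
// returned, metrics are assumed to be bundled into it and are filtered out.
function bundleTopLevelAssets(assets: ReturnedAsset[]): ReturnedAsset[] {
  const hasTopLevel = assets.some(
    (a) => a.assetType === 'dashboard_file' || a.assetType === 'report_file'
  );
  return hasTopLevel ? assets.filter((a) => a.assetType !== 'metric_file') : assets;
}
```

Under this simplification, returning a report plus one of its metrics collapses to just the report, while a metric on its own passes through unchanged.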
@@ -0,0 +1,75 @@
import { and, eq, isNull } from 'drizzle-orm';
import { z } from 'zod';
import { db } from '../../connection';
import { dashboardFiles, metricFiles, reportFiles } from '../../schema';
import { type ResponseMessageFileType, ResponseMessageFileTypeSchema } from '../../schema-types';

export const GetAssetLatestVersionInputSchema = z.object({
  assetId: z.string().uuid().describe('Asset ID to get version for'),
  assetType: ResponseMessageFileTypeSchema.describe('Type of asset'),
});

export type GetAssetLatestVersionInput = z.infer<typeof GetAssetLatestVersionInputSchema>;

/**
 * Get the latest version number for an asset
 * Extracts the maximum version from the versionHistory JSON field
 */
export async function getAssetLatestVersion(input: GetAssetLatestVersionInput): Promise<number> {
  const validated = GetAssetLatestVersionInputSchema.parse(input);
  const { assetId, assetType } = validated;

  if (assetType === 'metric_file') {
    const [metric] = await db
      .select({ versionHistory: metricFiles.versionHistory })
      .from(metricFiles)
      .where(and(eq(metricFiles.id, assetId), isNull(metricFiles.deletedAt)))
      .limit(1);

    if (!metric) {
      throw new Error(`Metric file not found: ${assetId}`);
    }

    const versions = Object.keys(metric.versionHistory || {})
      .map(Number)
      .filter((n) => !Number.isNaN(n));
    return versions.length > 0 ? Math.max(...versions) : 1;
  }

  if (assetType === 'dashboard_file') {
    const [dashboard] = await db
      .select({ versionHistory: dashboardFiles.versionHistory })
      .from(dashboardFiles)
      .where(and(eq(dashboardFiles.id, assetId), isNull(dashboardFiles.deletedAt)))
      .limit(1);

    if (!dashboard) {
      throw new Error(`Dashboard file not found: ${assetId}`);
    }

    const versions = Object.keys(dashboard.versionHistory || {})
      .map(Number)
      .filter((n) => !Number.isNaN(n));
    return versions.length > 0 ? Math.max(...versions) : 1;
  }

  if (assetType === 'report_file') {
    const [report] = await db
      .select({ versionHistory: reportFiles.versionHistory })
      .from(reportFiles)
      .where(and(eq(reportFiles.id, assetId), isNull(reportFiles.deletedAt)))
      .limit(1);

    if (!report) {
      throw new Error(`Report file not found: ${assetId}`);
    }

    const versions = Object.keys(report.versionHistory || {})
      .map(Number)
      .filter((n) => !Number.isNaN(n));
    return versions.length > 0 ? Math.max(...versions) : 1;
  }

  // For other asset types that might not have version history yet
  return 1;
}
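The version-number extraction repeated in each branch above is the same computation. A standalone sketch, assuming (as the branches do) that `versionHistory` keys are version numbers serialized as strings; the `latestVersionFromHistory` name is hypothetical:

```typescript
type VersionHistory = Record<string, unknown> | null | undefined;

// Take the numeric keys of the version-history object and return the maximum,
// defaulting to version 1 when there is no usable history.
function latestVersionFromHistory(versionHistory: VersionHistory): number {
  const versions = Object.keys(versionHistory ?? {})
    .map(Number)
    .filter((n) => !Number.isNaN(n));
  return versions.length > 0 ? Math.max(...versions) : 1;
}

console.log(latestVersionFromHistory({ '1': {}, '3': {}, '2': {} })); // 3
console.log(latestVersionFromHistory(null)); // 1
```

Factoring the three branches through a helper like this would also keep the default-to-1 behavior in one place.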
@@ -23,3 +23,9 @@ export {
  getMetricDashboardAncestors,
  getMetricReportAncestors,
} from './asset-ancestors';

export {
  getAssetLatestVersion,
  GetAssetLatestVersionInputSchema,
  type GetAssetLatestVersionInput,
} from './get-asset-latest-version';
@@ -46,6 +46,8 @@ const ResponseMessage_FileSchema = z.object({
  metadata: z.array(ResponseMessage_FileMetadataSchema).optional(),
});

export { ResponseMessageFileTypeSchema };

export const ResponseMessageSchema = z.discriminatedUnion('type', [
  ResponseMessage_TextSchema,
  ResponseMessage_FileSchema,