- Added new API routes for fetching dataset samples by ID, including validation and error handling.
- Implemented `getDatasetSampleHandler` to manage dataset access and sample query execution.
- Introduced `executeSampleQuery` utility for executing read-only SQL queries with retry logic (see the sketch after this list).
- Created new schemas for dataset sample request and response types.
- Updated existing dataset access control logic to ensure proper permissions are enforced.
- Added tests for the new dataset sample functionality to ensure reliability.
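
A minimal sketch of how `executeSampleQuery` could combine the read-only check with retries; the `DataSourceAdapter` interface, default limit, and backoff parameters are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch; DataSourceAdapter, QueryResult, and the retry
// parameters are assumptions, not the actual implementation.
interface QueryResult {
  rows: Record<string, unknown>[];
  hasMoreRows: boolean;
}

interface DataSourceAdapter {
  query(sql: string, limit?: number): Promise<QueryResult>;
}

export async function executeSampleQuery(
  adapter: DataSourceAdapter,
  sql: string,
  limit = 100,
  maxRetries = 3,
): Promise<QueryResult> {
  // Reject anything that is not a plain SELECT before touching the warehouse.
  if (!/^\s*select\b/i.test(sql)) {
    throw new Error('Only read-only SELECT queries are allowed');
  }

  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await adapter.query(sql, limit);
    } catch (error) {
      lastError = error;
      // Exponential backoff between attempts: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
  throw lastError;
}
```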
- Added `normalizeRowValues` function to BigQuery, MySQL, PostgreSQL, Redshift, Snowflake, and SQLServer adapters to ensure consistent data types across different databases (see the sketch below).
- Updated row processing logic in each adapter to apply normalization when converting query results.
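
A sketch of what such a normalization pass might look like; the specific conversions (`Date` to ISO string, `bigint` to number or string, `Buffer` to base64) are assumptions about which driver-specific types need unifying:

```typescript
// Sketch only; the conversions shown are assumptions about which
// driver-specific types need unifying across adapters.
export function normalizeRowValues(
  row: Record<string, unknown>,
): Record<string, unknown> {
  const normalized: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(row)) {
    if (value instanceof Date) {
      normalized[key] = value.toISOString(); // dates as ISO-8601 strings
    } else if (typeof value === 'bigint') {
      // Keep safely representable values as numbers, stringify the rest.
      const inRange =
        value >= BigInt(Number.MIN_SAFE_INTEGER) &&
        value <= BigInt(Number.MAX_SAFE_INTEGER);
      normalized[key] = inRange ? Number(value) : value.toString();
    } else if (Buffer.isBuffer(value)) {
      normalized[key] = value.toString('base64'); // binary as base64
    } else {
      normalized[key] = value;
    }
  }
  return normalized;
}
```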
- Simplified S3 integration handler functions by consolidating imports and improving code readability.
- Updated error handling in the S3 integration process to enhance clarity and maintainability.
- Refactored storage provider functions to utilize a more modular approach, separating concerns for better organization.
- Introduced utility functions for common operations, improving code reuse and reducing duplication.
These changes enhance the overall structure and maintainability of the S3 integration management features.
- Added routes for creating, retrieving, and deleting S3 integrations in the API.
- Introduced handlers for S3 integration operations, including validation of user permissions and storage credentials.
- Updated database schema to support S3 integrations, including a new table and associated queries.
- Integrated storage provider logic to handle S3, R2, and GCS configurations (see the sketch after this list).
- Enhanced error handling and response structures for integration operations.
This commit lays the groundwork for managing storage integrations within the application, allowing users to connect and manage their S3 storage solutions.
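
Since Cloudflare R2 and GCS (in interoperability mode) both expose S3-compatible endpoints, a single `@aws-sdk/client-s3` client can back all three providers. A hedged sketch, with the config field names assumed:

```typescript
import { S3Client } from '@aws-sdk/client-s3';

// Hypothetical provider config; the field names are assumptions.
type StorageConfig =
  | { provider: 's3'; region: string; accessKeyId: string; secretAccessKey: string }
  | { provider: 'r2'; accountId: string; accessKeyId: string; secretAccessKey: string }
  | { provider: 'gcs'; accessKeyId: string; secretAccessKey: string };

export function createStorageClient(config: StorageConfig): S3Client {
  const credentials = {
    accessKeyId: config.accessKeyId,
    secretAccessKey: config.secretAccessKey,
  };
  switch (config.provider) {
    case 's3':
      return new S3Client({ region: config.region, credentials });
    case 'r2':
      // Cloudflare R2 exposes an S3-compatible endpoint per account.
      return new S3Client({
        region: 'auto',
        endpoint: `https://${config.accountId}.r2.cloudflarestorage.com`,
        credentials,
      });
    case 'gcs':
      // GCS interoperability mode accepts S3-style HMAC credentials.
      return new S3Client({
        region: 'auto',
        endpoint: 'https://storage.googleapis.com',
        credentials,
      });
  }
}
```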
- Modified the SSL configuration in both the PostgreSQL adapter and its tests to use `{ rejectUnauthorized: false }` instead of the boolean `true` (see the sketch below).
- Ensured consistency in handling SSL settings across the adapter and its tests.
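
In `pg` terms the change amounts to the following; the connection details here are placeholders:

```typescript
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // placeholder
  // Previously `ssl: true`; the options object keeps TLS on but skips
  // certificate-authority verification (needed for self-signed certs).
  ssl: { rejectUnauthorized: false },
});
```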
- Updated all database adapter tests and implementations to replace the `database` field with `default_database` for consistency.
- Ensured backward compatibility in the Redshift adapter by accepting both `database` and `default_database` fields (see the sketch below).
- Enhanced SQLServer and MySQL adapters to reflect the new credential structure, improving clarity and maintainability.
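
The backward-compatible lookup presumably reduces to a fallback like this; the credential interface is an assumption:

```typescript
// Hypothetical credential shape; only the field names come from the change.
interface RedshiftCredentials {
  default_database?: string;
  /** @deprecated superseded by default_database */
  database?: string;
}

// Prefer the new field, fall back to the legacy one for older credentials.
function resolveDatabase(creds: RedshiftCredentials): string {
  const db = creds.default_database ?? creds.database;
  if (!db) throw new Error('No database specified in credentials');
  return db;
}
```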
- Upgraded `@aws-sdk/client-s3` to version 3.873.0 across multiple packages.
- Introduced caching mechanisms for metric data retrieval in the getMetricDataHandler function.
- Updated API endpoints to support `report_file_id` for cache lookups and data retrieval (see the sketch after this list).
- Enhanced error handling and logging for cache operations.
- Refactored related components to accommodate new caching logic and parameters.
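
A sketch of the cache-aside path this implies for `getMetricDataHandler`; the `Cache` interface, key format, and TTL are assumptions:

```typescript
// Hedged sketch; the Cache interface, key format, and TTL are assumptions.
interface Cache {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

async function getMetricData(
  cache: Cache,
  metricId: string,
  reportFileId: string | undefined,
  runQuery: () => Promise<unknown>,
): Promise<unknown> {
  // Cache entries are scoped to a specific report file when one is given.
  const key = reportFileId
    ? `metric:${metricId}:report:${reportFileId}`
    : `metric:${metricId}`;

  const cached = await cache.get(key);
  if (cached !== null) return JSON.parse(cached);

  const result = await runQuery();
  await cache.set(key, JSON.stringify(result), 60 * 60); // 1-hour TTL (assumed)
  return result;
}
```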
- Introduced `executeMetricQuery` utility for standardized metric SQL query execution with retry logic.
- Updated `getMetricDataHandler` and metric tool execution functions to utilize the new query utility, improving error handling and result processing.
- Added metadata generation from query results to provide detailed insights into data structure (see the sketch below).
- Refactored SQL validation to ensure only read-only queries are executed, enhancing data integrity.
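
One way such metadata could be derived from result rows; the `ColumnMetadata` shape and the inference rules are assumptions:

```typescript
// Sketch; the ColumnMetadata shape and inference rules are assumptions.
interface ColumnMetadata {
  name: string;
  type: string;
  nullable: boolean;
}

function buildMetadata(rows: Record<string, unknown>[]): ColumnMetadata[] {
  if (rows.length === 0) return [];
  return Object.keys(rows[0]).map((name) => {
    // Infer a coarse type from the first non-null value in the column.
    const sample = rows.find((r) => r[name] != null)?.[name];
    return {
      name,
      type: sample === undefined ? 'unknown' : typeof sample,
      nullable: rows.some((r) => r[name] == null),
    };
  });
}
```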
The Snowflake adapter implementation transforms column names to lowercase for consistency, but the tests expected uppercase column names. This commit updates the tests to match the implementation:
- Update test expectations to use lowercase column names (`id`, `name`)
- Fix `hasMoreRows` assertions to match implementation logic (only true when `rowCount > limit`); see the sketch below
- Ensure all Snowflake-related tests pass with the current adapter behavior
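
In sketch form, the two behaviors the tests now assert:

```typescript
// Column keys are lowercased before rows are returned...
function toLowercaseRow(row: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(row).map(([key, value]) => [key.toLowerCase(), value]),
  );
}

// ...and hasMoreRows is only true when the source produced more rows than the limit.
const hasMoreRows = (rowCount: number, limit: number): boolean => rowCount > limit;
```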
- Increased allowed variance in cached query time checks to accommodate network fluctuations.
- Corrected a property name in test assertions to match the expected lowercase format.
- Enhanced `SnowflakeAdapter` to transform column names to lowercase and adjusted the logic for determining whether more rows are available from the stream.
- Update `query()` method to use `streamResult: true` and `stmt.streamRows()`
- Add network-level row limiting with a default 5,000-row cap
- Process stream events (`data`, `error`, `end`) to build the result set (see the sketch after this list)
- Maintain backward compatibility with existing adapter interface
- Update unit tests to mock streaming behavior
- Fix integration test imports and property names
- Preserve query caching by using original SQL unchanged
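
A hedged sketch of that streaming path with `snowflake-sdk`; only `streamResult: true`, `stmt.streamRows()`, the 5,000-row default, and passing the SQL through unchanged come from this change, and the wiring around them is assumed:

```typescript
import * as snowflake from 'snowflake-sdk';

const DEFAULT_ROW_LIMIT = 5000;

function queryWithLimit(
  connection: snowflake.Connection,
  sqlText: string,
  limit = DEFAULT_ROW_LIMIT,
): Promise<{ rows: Record<string, unknown>[]; hasMoreRows: boolean }> {
  return new Promise((resolve, reject) => {
    connection.execute({
      sqlText, // original SQL, unchanged, so query caching still applies
      streamResult: true,
      complete: (err, stmt) => {
        if (err) return reject(err);
        const rows: Record<string, unknown>[] = [];
        let hasMoreRows = false;
        const stream = stmt.streamRows();
        stream.on('data', (row: Record<string, unknown>) => {
          if (rows.length < limit) {
            rows.push(row);
          } else {
            // Cap reached: stop pulling rows over the network.
            hasMoreRows = true;
            stream.destroy();
            resolve({ rows, hasMoreRows });
          }
        });
        stream.on('error', reject);
        stream.on('end', () => resolve({ rows, hasMoreRows }));
      },
    });
  });
}
```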
Co-Authored-By: Dallin Bentley <dallinbentley98@gmail.com>
- Fixed AI package unit tests that were previously failing
- Updated database package.json with environment variables for tests
- Fixed snowflake adapter test issues in data-source package
Co-Authored-By: Dallin Bentley <dallinbentley98@gmail.com>