- Add mermaid dependency to package.json
- Implement MermaidRenderer component with performance optimizations (sketched after this list):
* Global initialization and caching system
* Lazy loading with Intersection Observer
* Memory management with automatic cache cleanup
* Graceful fallback to syntax-highlighted code on render failure
- Integrate MermaidRenderer into CodeBlockCode component
- Refactor code-block.tsx for better separation of concerns
- Support both Mermaid diagram rendering and code display modes
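A minimal sketch of how such a renderer could be structured, assuming mermaid v10+ (where `mermaid.render` returns a promise); the component name matches the commit, but the props, cache limit, and fallback handling are illustrative assumptions, not the actual implementation:

```tsx
import { useEffect, useRef, useState } from "react"
import type { ReactNode } from "react"
import mermaid from "mermaid"

// Module-level state: initialize mermaid once and cache rendered SVG
// across component instances, evicting the oldest entry when the cache
// grows past a fixed size.
let initialized = false
const svgCache = new Map<string, string>()
const MAX_CACHE_ENTRIES = 50 // assumed limit

export function MermaidRenderer({ code, fallback }: { code: string; fallback?: ReactNode }) {
  const ref = useRef<HTMLDivElement>(null)
  const [svg, setSvg] = useState<string | null>(svgCache.get(code) ?? null)
  const [failed, setFailed] = useState(false)

  useEffect(() => {
    const el = ref.current
    if (svg || !el) return
    // Lazy loading: defer rendering until the placeholder scrolls into view.
    const observer = new IntersectionObserver(async ([entry]) => {
      if (!entry.isIntersecting) return
      observer.disconnect()
      if (!initialized) {
        mermaid.initialize({ startOnLoad: false })
        initialized = true
      }
      try {
        const id = `mermaid-${Math.random().toString(36).slice(2)}`
        const { svg: rendered } = await mermaid.render(id, code)
        // FIFO eviction keeps memory bounded.
        if (svgCache.size >= MAX_CACHE_ENTRIES) {
          const oldest = svgCache.keys().next().value
          if (oldest !== undefined) svgCache.delete(oldest)
        }
        svgCache.set(code, rendered)
        setSvg(rendered)
      } catch {
        setFailed(true) // invalid syntax: show the highlighted-code fallback
      }
    })
    observer.observe(el)
    return () => observer.disconnect()
  }, [code, svg])

  if (failed) return <>{fallback ?? null}</>
  return svg ? <div dangerouslySetInnerHTML={{ __html: svg }} /> : <div ref={ref} />
}
```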
The implementation provides a seamless user experience where:
- Valid Mermaid syntax renders as interactive diagrams
- Invalid syntax gracefully falls back to highlighted code (integration sketched below)
- Performance is optimized through caching and lazy loading
- Memory usage is controlled through intelligent cache management
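A rough sketch of the mode selection in CodeBlockCode; the `HighlightedCode` name and import paths are placeholders for whatever highlighter the code block already uses:

```tsx
import { MermaidRenderer } from "./mermaid-renderer" // sketch above
import { HighlightedCode } from "./highlighted-code" // placeholder for the existing highlighter

// Route mermaid-tagged blocks through the diagram renderer; everything
// else, plus failed diagram renders, goes through syntax highlighting.
export function CodeBlockCode({ code, language }: { code: string; language: string }) {
  const highlighted = <HighlightedCode code={code} language={language} />
  return language === "mermaid" ? (
    <MermaidRenderer code={code} fallback={highlighted} />
  ) : (
    highlighted
  )
}
```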
- Add OpenAI-compatible API support with custom endpoints
- Implement LiteLLM Router for multi-provider routing
- Add new config options: OPENAI_COMPATIBLE_API_KEY/BASE (client wiring sketched below)
- Update environment examples and self-hosting documentation
Enables support for local LLM services such as Ollama, LM Studio, and vLLM, as well as third-party OpenAI-compatible providers.
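A hedged sketch of the custom-endpoint wiring using the official `openai` npm client, assuming the env pair expands to OPENAI_COMPATIBLE_API_KEY and OPENAI_COMPATIBLE_API_BASE (names and model are placeholders):

```ts
import OpenAI from "openai"

// Point the standard OpenAI client at any compatible endpoint,
// e.g. Ollama (http://localhost:11434/v1), LM Studio, or vLLM.
const client = new OpenAI({
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY ?? "not-needed", // many local servers ignore the key
  baseURL: process.env.OPENAI_COMPATIBLE_API_BASE,
})

const completion = await client.chat.completions.create({
  model: "llama3", // whatever model the local server exposes
  messages: [{ role: "user", content: "Hello" }],
})
console.log(completion.choices[0]?.message.content)
```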
- Restore emailRedirectTo in signup to preserve returnUrl functionality
- Fix callback route to use NEXT_PUBLIC_URL instead of the parsed request origin, avoiding 0.0.0.0 redirect issues in self-hosted environments
- Correct parameter name from 'next' back to 'returnUrl' to match actual usage throughout the codebase
The returnUrl parameter was being ignored because the callback read the 'next' parameter instead of the 'returnUrl' parameter that the signup and OAuth flows actually pass.
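A rough sketch of the corrected callback handler as a Next.js route handler; the route path and the session-exchange step are assumptions, not the actual code:

```ts
import { NextResponse } from "next/server"

// e.g. app/auth/callback/route.ts — read `returnUrl` (not `next`) and
// build the redirect from NEXT_PUBLIC_URL rather than the request
// origin, which can be 0.0.0.0 on self-hosted deployments.
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url)
  const returnUrl = searchParams.get("returnUrl") ?? "/"
  // ...exchange the auth code for a session here...
  const base = process.env.NEXT_PUBLIC_URL ?? "http://localhost:3000"
  return NextResponse.redirect(new URL(returnUrl, base))
}
```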
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>