mirror of https://github.com/kortix-ai/suna.git

docs: enhance contributing and self-hosting documentation

- Added quick setup instructions and detailed setup steps in CONTRIBUTING.md and SELF-HOSTING.md.
- Updated environment variable configurations and added new required services for setup.
- Improved clarity on the setup wizard's functionality and progress saving.
- Revised README files for both backend and frontend to include quick setup instructions and environment configurations.
- Updated model references to the latest version of the Anthropic model across various files.
- Removed deprecated workflow background script.

parent beeabc5940
commit 3a49f9591b

@@ -48,7 +48,7 @@ jobs:
context: ./backend
file: ./backend/Dockerfile
push: true
platforms: linux/arm64, linux/amd64
platforms: linux/amd64
tags: ghcr.io/${{ github.repository }}/suna-backend:${{ steps.get_tag_name.outputs.branch }}
cache-from: type=gha
cache-to: type=gha,mode=max

@@ -64,7 +64,6 @@ jobs:
cd /home/suna/backend
git pull
docker compose build
docker compose restart redis
docker compose up -d

- name: Deploy to prod

@@ -78,5 +77,4 @@ jobs:
cd /home/suna/backend
git pull
docker compose -f docker-compose.yml -f docker-compose.prod.yml build
docker compose -f docker-compose.yml -f docker-compose.prod.yml restart redis
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

@@ -12,16 +12,49 @@ Thank you for your interest in contributing to Suna! This document outlines the…

## Development Setup

### Quick Setup

The easiest way to get started is using our setup wizard:

```bash
python setup.py
```

This will guide you through configuring all required services and dependencies.

### Detailed Setup Instructions

For detailed setup instructions, please refer to:

- [Backend Development Setup](backend/README.md)
- [Frontend Development Setup](frontend/README.md)
- [Self-Hosting Guide](docs/SELF-HOSTING.md) - Complete setup instructions
- [Backend Development Setup](backend/README.md) - Backend-specific development
- [Frontend Development Setup](frontend/README.md) - Frontend-specific development

### Required Services

Before contributing, ensure you have access to:

**Required:**

- Supabase project (database and auth)
- LLM provider API key (OpenAI, Anthropic, or OpenRouter)
- Daytona account (for agent execution)
- Tavily API key (for search)
- Firecrawl API key (for web scraping)
- QStash account (for background jobs)

**Optional:**

- RapidAPI key (for additional tools)
- Smithery API key (for custom agents)

## Code Style Guidelines

- Follow existing code style and patterns
- Use descriptive commit messages
- Keep PRs focused on a single feature or fix
- Add tests for new functionality
- Update documentation as needed

## Reporting Issues

@@ -32,3 +65,11 @@ When reporting issues, please include:

- Actual behavior
- Environment details (OS, Node/Docker versions, etc.)
- Relevant logs or screenshots
- Configuration details (redacted API keys)

## Development Tips

- Use the setup wizard to ensure consistent configuration
- Check the troubleshooting section in the Self-Hosting Guide
- Test both Docker and manual setup when making changes
- Ensure your changes work with the latest setup.py configuration

README.md (13 changed lines)

@@ -80,15 +80,18 @@ Handles data persistence with authentication, user management, conversation hist…

## Self-Hosting

Suna can be self-hosted on your own infrastructure using our setup wizard. For a comprehensive guide to self-hosting Suna, please refer to our [Self-Hosting Guide](./docs/SELF-HOSTING.md).
Suna can be self-hosted on your own infrastructure using our comprehensive setup wizard. For a complete guide to self-hosting Suna, please refer to our [Self-Hosting Guide](./docs/SELF-HOSTING.md).

The setup process includes:

- Setting up a Supabase project for database and authentication
- Configuring Redis for caching and session management
- Setting up Daytona for secure agent execution
- Integrating with LLM providers (Anthropic, OpenAI, Groq, etc.)
- Configuring web search and scraping capabilities
- Integrating with LLM providers (Anthropic, OpenAI, OpenRouter, etc.)
- Configuring web search and scraping capabilities (Tavily, Firecrawl)
- Setting up QStash for background job processing and workflows
- Configuring webhook handling for automated tasks
- Optional integrations (RapidAPI, Smithery for custom agents)

### Quick Start

@@ -105,6 +108,8 @@ cd suna

python setup.py
```

The wizard will guide you through 14 steps with progress saving, so you can resume if interrupted.

3. **Start or stop the containers**:

```bash

@@ -138,7 +143,9 @@ We welcome contributions from the community! Please see our [Contributing Guide]…

- [Anthropic](https://www.anthropic.com/) - LLM provider
- [Tavily](https://tavily.com/) - Search capabilities
- [Firecrawl](https://firecrawl.dev/) - Web scraping capabilities
- [QStash](https://upstash.com/qstash) - Background job processing and workflows
- [RapidAPI](https://rapidapi.com/) - API services
- [Smithery](https://smithery.ai/) - Custom agent development

## License

@@ -1,5 +1,16 @@

# Suna Backend

## Quick Setup

The easiest way to get your backend configured is to use the setup wizard from the project root:

```bash
cd .. # Navigate to project root if you're in the backend directory
python setup.py
```

This will configure all necessary environment variables and services automatically.
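
If you want to confirm the wizard produced a backend configuration (a minimal sanity check, assuming the wizard writes `backend/.env` as described in the environment section below), you can run:

```bash
# From the project root: check that the backend .env file was created
test -f backend/.env && echo "backend/.env present" || echo "backend/.env missing - re-run python setup.py"
```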

## Running the backend

Within the backend directory, run the following command to stop and start the backend:

@@ -32,10 +43,13 @@ For local development, you might only need to run Redis and RabbitMQ, while work…

- You want to avoid rebuilding the API container on every change
- You're running the API service directly on your machine

To run just Redis and RabbitMQ for development:```bash
docker compose up redis rabbitmq
To run just Redis and RabbitMQ for development:

Then you can run your API service locally with the following commands
```bash
docker compose up redis rabbitmq
```

Then you can run your API service locally with the following commands:

```sh
# On one terminal

@@ -49,6 +63,58 @@ uv run dramatiq --processes 4 --threads 4 run_agent_background

### Environment Configuration

The setup wizard automatically creates a `.env` file with all necessary configuration. If you need to configure manually or understand the setup:

#### Required Environment Variables

```sh
# Environment Mode
ENV_MODE=local

# Database (Supabase)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

# Infrastructure
REDIS_HOST=redis # Use 'localhost' when running API locally
REDIS_PORT=6379
RABBITMQ_HOST=rabbitmq # Use 'localhost' when running API locally
RABBITMQ_PORT=5672

# LLM Providers (at least one required)
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
OPENROUTER_API_KEY=your-openrouter-key
MODEL_TO_USE=anthropic/claude-sonnet-4-20250514

# Search and Web Scraping
TAVILY_API_KEY=your-tavily-key
FIRECRAWL_API_KEY=your-firecrawl-key
FIRECRAWL_URL=https://api.firecrawl.dev

# Agent Execution
DAYTONA_API_KEY=your-daytona-key
DAYTONA_SERVER_URL=https://app.daytona.io/api
DAYTONA_TARGET=us

# Background Job Processing (Required)
QSTASH_URL=https://qstash.upstash.io
QSTASH_TOKEN=your-qstash-token
QSTASH_CURRENT_SIGNING_KEY=your-current-signing-key
QSTASH_NEXT_SIGNING_KEY=your-next-signing-key
WEBHOOK_BASE_URL=https://yourdomain.com

# MCP Configuration
MCP_CREDENTIAL_ENCRYPTION_KEY=your-generated-encryption-key

# Optional APIs
RAPID_API_KEY=your-rapidapi-key
SMITHERY_API_KEY=your-smithery-key

NEXT_PUBLIC_URL=http://localhost:3000
```
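
If you are configuring manually, `MCP_CREDENTIAL_ENCRYPTION_KEY` needs a value of your own. As a hedged example (the setup wizard generates this for you, and the exact format it expects is an assumption here, so prefer the wizard's output), a sufficiently random key can be produced with:

```bash
# Generate a random base64 string to use as the credential encryption key
openssl rand -base64 32
```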

When running services individually, make sure to:

1. Check your `.env` file and adjust any necessary environment variables

@@ -65,7 +131,7 @@ When running the API locally with Redis in Docker, you need to set the correct R…

### Important: RabbitMQ Host Configuration

When running the API locally with Redis in Docker, you need to set the correct RabbitMQ host in your `.env` file:
When running the API locally with RabbitMQ in Docker, you need to set the correct RabbitMQ host in your `.env` file:

- For Docker-to-Docker communication (when running both services in Docker): use `RABBITMQ_HOST=rabbitmq`
- For local-to-Docker communication (when running API locally): use `RABBITMQ_HOST=localhost`

@@ -73,11 +139,11 @@ When running the API locally with Redis in Docker, you need to set the correct R…

Example `.env` configuration for local development:

```sh
REDIS_HOST=localhost (instead of 'redis')
REDIS_HOST=localhost # (instead of 'redis')
REDIS_PORT=6379
REDIS_PASSWORD=

RABBITMQ_HOST=localhost (instead of 'rabbitmq')
RABBITMQ_HOST=localhost # (instead of 'rabbitmq')
RABBITMQ_PORT=5672
```
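
With Redis and RabbitMQ running in Docker and the host-side values above in place, a quick reachability check from your machine might look like this (a sketch that assumes `redis-cli` and `nc` are installed locally):

```bash
# Redis should answer PONG on the published port
redis-cli -h localhost -p 6379 ping

# RabbitMQ should accept TCP connections on the published AMQP port
nc -z localhost 5672 && echo "RabbitMQ reachable"
```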
|
@ -103,16 +169,19 @@ python setup.py <command> [arguments]
|
|||
#### Available Commands
|
||||
|
||||
**Enable a feature flag:**
|
||||
|
||||
```bash
|
||||
python setup.py enable test_flag "Test decsription"
|
||||
```
|
||||
|
||||
**Disable a feature flag:**
|
||||
|
||||
```bash
|
||||
python setup.py disable test_flag
|
||||
```
|
||||
|
||||
**List all feature flags:**
|
||||
|
||||
```bash
|
||||
python setup.py list
|
||||
```
|
||||
|
@ -122,16 +191,19 @@ python setup.py list
|
|||
Feature flags are accessible via REST API:
|
||||
|
||||
**Get all feature flags:**
|
||||
|
||||
```bash
|
||||
GET /feature-flags
|
||||
```
|
||||
|
||||
**Get specific feature flag:**
|
||||
|
||||
```bash
|
||||
GET /feature-flags/{flag_name}
|
||||
```
|
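
Assuming the backend API is running locally and exposed on port 8000 under the `/api` prefix (adjust the base URL to match your deployment), these endpoints can be queried with curl:

```bash
# List every feature flag
curl http://localhost:8000/api/feature-flags

# Fetch a single flag by name
curl http://localhost:8000/api/feature-flags/test_flag
```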

Example response:

```json
{
  "test_flag": {

@@ -41,7 +41,7 @@ async def run_agent(
    thread_manager: Optional[ThreadManager] = None,
    native_max_auto_continues: int = 25,
    max_iterations: int = 100,
    model_name: str = "anthropic/claude-3-7-sonnet-latest",
    model_name: str = "anthropic/claude-sonnet-4-20250514",
    enable_thinking: Optional[bool] = False,
    reasoning_effort: Optional[str] = 'low',
    enable_context_manager: bool = True,

@@ -1,303 +0,0 @@
import sentry_sdk
import asyncio
import json
import traceback
from datetime import datetime, timezone
from typing import Optional, Dict, Any
from services import redis
from workflows.executor import WorkflowExecutor
from workflows.deterministic_executor import DeterministicWorkflowExecutor
from workflows.models import WorkflowDefinition
from utils.logger import logger
import dramatiq
import uuid
from services.supabase import DBConnection
from dramatiq.brokers.rabbitmq import RabbitmqBroker
import os
from utils.retry import retry

rabbitmq_host = os.getenv('RABBITMQ_HOST', 'rabbitmq')
rabbitmq_port = int(os.getenv('RABBITMQ_PORT', 5672))
rabbitmq_broker = RabbitmqBroker(host=rabbitmq_host, port=rabbitmq_port, middleware=[dramatiq.middleware.AsyncIO()])
dramatiq.set_broker(rabbitmq_broker)


_initialized = False
db = DBConnection()
workflow_executor = WorkflowExecutor(db)
deterministic_executor = DeterministicWorkflowExecutor(db)
instance_id = "workflow_worker"

async def initialize():
    """Initialize the workflow worker with resources."""
    global db, workflow_executor, instance_id, _initialized

    if not instance_id:
        instance_id = str(uuid.uuid4())[:8]

    await retry(lambda: redis.initialize_async())
    await db.initialize()

    _initialized = True
    logger.info(f"Initialized workflow worker with instance ID: {instance_id}")

@dramatiq.actor
async def run_workflow_background(
    execution_id: str,
    workflow_id: str,
    workflow_name: str,
    workflow_definition: Dict[str, Any],
    variables: Optional[Dict[str, Any]] = None,
    triggered_by: str = "MANUAL",
    project_id: Optional[str] = None,
    thread_id: Optional[str] = None,
    agent_run_id: Optional[str] = None,
    deterministic: bool = True
):
    """Run a workflow in the background using Dramatiq."""
    try:
        await initialize()
    except Exception as e:
        logger.critical(f"Failed to initialize workflow worker: {e}")
        raise e

    run_lock_key = f"workflow_run_lock:{execution_id}"

    lock_acquired = await redis.set(run_lock_key, instance_id, nx=True, ex=redis.REDIS_KEY_TTL)

    if not lock_acquired:
        existing_instance = await redis.get(run_lock_key)
        if existing_instance:
            logger.info(f"Workflow execution {execution_id} is already being processed by instance {existing_instance.decode() if isinstance(existing_instance, bytes) else existing_instance}. Skipping duplicate execution.")
            return
        else:
            lock_acquired = await redis.set(run_lock_key, instance_id, nx=True, ex=redis.REDIS_KEY_TTL)
            if not lock_acquired:
                logger.info(f"Workflow execution {execution_id} is already being processed by another instance. Skipping duplicate execution.")
                return

    sentry_sdk.set_tag("workflow_id", workflow_id)
    sentry_sdk.set_tag("execution_id", execution_id)

    logger.info(f"Starting background workflow execution: {execution_id} for workflow: {workflow_name} (Instance: {instance_id})")
    logger.info(f"🔄 Triggered by: {triggered_by}")

    client = await db.client
    start_time = datetime.now(timezone.utc)
    total_responses = 0
    pubsub = None
    stop_checker = None
    stop_signal_received = False

    # Define Redis keys and channels - use agent_run pattern if agent_run_id provided for frontend compatibility
    if agent_run_id:
        response_list_key = f"agent_run:{agent_run_id}:responses"
        response_channel = f"agent_run:{agent_run_id}:new_response"
        instance_control_channel = f"agent_run:{agent_run_id}:control:{instance_id}"
        global_control_channel = f"agent_run:{agent_run_id}:control"
        instance_active_key = f"active_run:{instance_id}:{agent_run_id}"
    else:
        # Fallback to workflow execution pattern
        response_list_key = f"workflow_execution:{execution_id}:responses"
        response_channel = f"workflow_execution:{execution_id}:new_response"
        instance_control_channel = f"workflow_execution:{execution_id}:control:{instance_id}"
        global_control_channel = f"workflow_execution:{execution_id}:control"
        instance_active_key = f"active_workflow:{instance_id}:{execution_id}"

    async def check_for_stop_signal():
        nonlocal stop_signal_received
        if not pubsub: return
        try:
            while not stop_signal_received:
                message = await pubsub.get_message(ignore_subscribe_messages=True, timeout=0.5)
                if message and message.get("type") == "message":
                    data = message.get("data")
                    if isinstance(data, bytes): data = data.decode('utf-8')
                    if data == "STOP":
                        logger.info(f"Received STOP signal for workflow execution {execution_id} (Instance: {instance_id})")
                        stop_signal_received = True
                        break
                if total_responses % 50 == 0:
                    try: await redis.expire(instance_active_key, redis.REDIS_KEY_TTL)
                    except Exception as ttl_err: logger.warning(f"Failed to refresh TTL for {instance_active_key}: {ttl_err}")
                await asyncio.sleep(0.1)
        except asyncio.CancelledError:
            logger.info(f"Stop signal checker cancelled for {execution_id} (Instance: {instance_id})")
        except Exception as e:
            logger.error(f"Error in stop signal checker for {execution_id}: {e}", exc_info=True)
            stop_signal_received = True

    try:
        pubsub = await redis.create_pubsub()
        try:
            await retry(lambda: pubsub.subscribe(instance_control_channel, global_control_channel))
        except Exception as e:
            logger.error(f"Redis failed to subscribe to control channels: {e}", exc_info=True)
            raise e

        logger.debug(f"Subscribed to control channels: {instance_control_channel}, {global_control_channel}")
        stop_checker = asyncio.create_task(check_for_stop_signal())
        await redis.set(instance_active_key, "running", ex=redis.REDIS_KEY_TTL)

        await client.table('workflow_executions').update({
            "status": "running",
            "started_at": start_time.isoformat()
        }).eq('id', execution_id).execute()

        workflow = WorkflowDefinition(**workflow_definition)

        if not thread_id:
            thread_id = str(uuid.uuid4())

        final_status = "running"
        error_message = None
        pending_redis_operations = []

        if deterministic:
            executor = deterministic_executor
            logger.info(f"Using deterministic executor for workflow {execution_id}")
        else:
            executor = workflow_executor
            logger.info(f"Using legacy executor for workflow {execution_id}")

        async for response in executor.execute_workflow(
            workflow=workflow,
            variables=variables,
            thread_id=thread_id,
            project_id=project_id
        ):
            if stop_signal_received:
                logger.info(f"Workflow execution {execution_id} stopped by signal.")
                final_status = "stopped"
                break

            response_json = json.dumps(response)
            pending_redis_operations.append(asyncio.create_task(redis.rpush(response_list_key, response_json)))
            pending_redis_operations.append(asyncio.create_task(redis.publish(response_channel, "new")))
            total_responses += 1

            if response.get('type') == 'workflow_status':
                status_val = response.get('status')
                if status_val in ['completed', 'failed', 'stopped']:
                    logger.info(f"Workflow execution {execution_id} finished via status message: {status_val}")
                    final_status = status_val
                    if status_val == 'failed' or status_val == 'stopped':
                        error_message = response.get('error', f"Workflow ended with status: {status_val}")
                    break

        if final_status == "running":
            final_status = "completed"
            duration = (datetime.now(timezone.utc) - start_time).total_seconds()
            logger.info(f"Workflow execution {execution_id} completed normally (duration: {duration:.2f}s, responses: {total_responses})")
            completion_message = {"type": "workflow_status", "status": "completed", "message": "Workflow execution completed successfully"}
            await redis.rpush(response_list_key, json.dumps(completion_message))
            await redis.publish(response_channel, "new")

        await update_workflow_execution_status(client, execution_id, final_status, error=error_message, agent_run_id=agent_run_id)

        control_signal = "END_STREAM" if final_status == "completed" else "ERROR" if final_status == "failed" else "STOP"
        try:
            await redis.publish(global_control_channel, control_signal)
            logger.debug(f"Published final control signal '{control_signal}' to {global_control_channel}")
        except Exception as e:
            logger.warning(f"Failed to publish final control signal {control_signal}: {str(e)}")

    except Exception as e:
        error_message = str(e)
        traceback_str = traceback.format_exc()
        duration = (datetime.now(timezone.utc) - start_time).total_seconds()
        logger.error(f"Error in workflow execution {execution_id} after {duration:.2f}s: {error_message}\n{traceback_str} (Instance: {instance_id})")
        final_status = "failed"

        error_response = {"type": "workflow_status", "status": "error", "message": error_message}
        try:
            await redis.rpush(response_list_key, json.dumps(error_response))
            await redis.publish(response_channel, "new")
        except Exception as redis_err:
            logger.error(f"Failed to push error response to Redis for {execution_id}: {redis_err}")

        await update_workflow_execution_status(client, execution_id, "failed", error=f"{error_message}\n{traceback_str}", agent_run_id=agent_run_id)
        try:
            await redis.publish(global_control_channel, "ERROR")
            logger.debug(f"Published ERROR signal to {global_control_channel}")
        except Exception as e:
            logger.warning(f"Failed to publish ERROR signal: {str(e)}")

    finally:
        if stop_checker and not stop_checker.done():
            stop_checker.cancel()
            try: await stop_checker
            except asyncio.CancelledError: pass
            except Exception as e: logger.warning(f"Error during stop_checker cancellation: {e}")

        if pubsub:
            try:
                await pubsub.unsubscribe()
                await pubsub.close()
                logger.debug(f"Closed pubsub connection for {execution_id}")
            except Exception as e:
                logger.warning(f"Error closing pubsub for {execution_id}: {str(e)}")

        await _cleanup_redis_response_list(execution_id, agent_run_id)
        await _cleanup_redis_instance_key(execution_id, agent_run_id)
        await _cleanup_redis_run_lock(execution_id)

        try:
            await asyncio.wait_for(asyncio.gather(*pending_redis_operations), timeout=30.0)
        except asyncio.TimeoutError:
            logger.warning(f"Timeout waiting for pending Redis operations for {execution_id}")

        logger.info(f"Workflow execution background task fully completed for: {execution_id} (Instance: {instance_id}) with final status: {final_status}")

async def update_workflow_execution_status(client, execution_id: str, status: str, error: Optional[str] = None, agent_run_id: Optional[str] = None):
    """Update workflow execution status in database."""
    try:
        update_data = {
            "status": status,
            "completed_at": datetime.now(timezone.utc).isoformat() if status in ['completed', 'failed', 'stopped'] else None,
            "error": error
        }

        await client.table('workflow_executions').update(update_data).eq('id', execution_id).execute()
        logger.info(f"Updated workflow execution {execution_id} status to {status}")

        # Also update agent_runs table if agent_run_id provided (for frontend streaming compatibility)
        if agent_run_id:
            await client.table('agent_runs').update(update_data).eq('id', agent_run_id).execute()
            logger.info(f"Updated agent run {agent_run_id} status to {status}")

    except Exception as e:
        logger.error(f"Failed to update workflow execution status: {e}")

async def _cleanup_redis_response_list(execution_id: str, agent_run_id: Optional[str] = None):
    """Set TTL on workflow execution response list."""
    try:
        if agent_run_id:
            response_list_key = f"agent_run:{agent_run_id}:responses"
        else:
            response_list_key = f"workflow_execution:{execution_id}:responses"
        await redis.expire(response_list_key, redis.REDIS_KEY_TTL)
        logger.debug(f"Set TTL on {response_list_key}")
    except Exception as e:
        logger.warning(f"Failed to set TTL on response list for {execution_id}: {e}")

async def _cleanup_redis_instance_key(execution_id: str, agent_run_id: Optional[str] = None):
    """Remove instance-specific active run key."""
    try:
        if agent_run_id:
            instance_active_key = f"active_run:{instance_id}:{agent_run_id}"
        else:
            instance_active_key = f"active_workflow:{instance_id}:{execution_id}"
        await redis.delete(instance_active_key)
        logger.debug(f"Cleaned up instance key {instance_active_key}")
    except Exception as e:
        logger.warning(f"Failed to clean up instance key for {execution_id}: {e}")

async def _cleanup_redis_run_lock(execution_id: str):
    """Remove workflow execution lock."""
    try:
        run_lock_key = f"workflow_run_lock:{execution_id}"
        await redis.delete(run_lock_key)
        logger.debug(f"Cleaned up run lock {run_lock_key}")
    except Exception as e:
        logger.warning(f"Failed to clean up run lock for {execution_id}: {e}")

@@ -123,7 +123,7 @@ class Configuration:
    AWS_REGION_NAME: Optional[str] = None

    # Model configuration
    MODEL_TO_USE: Optional[str] = "anthropic/claude-3-7-sonnet-latest"
    MODEL_TO_USE: Optional[str] = "anthropic/claude-sonnet-4-20250514"

    # Supabase configuration
    SUPABASE_URL: str

@@ -60,8 +60,8 @@ class WorkflowConverter:

        logger.info(f"Final enabled_tools list: {enabled_tools}")

        # Extract model from input node configuration, default to Claude 3.5 Sonnet
        selected_model = "anthropic/claude-3-5-sonnet-latest"
        # Extract model from input node configuration, default to Claude Sonnet 4
        selected_model = "anthropic/claude-sonnet-4-20250514"
        if input_config:
            # Look for model in input node data
            for node in nodes:

@@ -125,7 +125,7 @@ class DeterministicWorkflowExecutor:
            thread_id=thread_id,
            project_id=project_id,
            stream=True,
            model_name="anthropic/claude-3-5-sonnet-latest",
            model_name="anthropic/claude-sonnet-4-20250514",
            enable_thinking=False,
            reasoning_effort="low",
            enable_context_manager=True,

@@ -1182,7 +1182,7 @@ class DeterministicWorkflowExecutor:
            thread_id=thread_id,
            project_id=project_id,
            stream=True,
            model_name="anthropic/claude-3-5-sonnet-latest",
            model_name="anthropic/claude-sonnet-4-20250514",
            enable_thinking=False,
            reasoning_effort="low",
            enable_context_manager=True,

@@ -51,7 +51,7 @@ class WorkflowExecutor:

        main_step = workflow.steps[0]
        system_prompt = main_step.config.get("system_prompt", "")
        selected_model = main_step.config.get("model", "anthropic/claude-3-5-sonnet-latest")
        selected_model = main_step.config.get("model", "anthropic/claude-sonnet-4-20250514")

        if variables:
            variables_text = "\n\n## Workflow Variables\n"

@@ -55,9 +55,13 @@ Obtain the following API keys:

- **Agent Execution**:
  - [Daytona](https://app.daytona.io/) - For secure agent execution

- **Background Job Processing**:
  - [QStash](https://console.upstash.com/qstash) - For workflows, automated tasks, and webhook handling

#### Optional

- **RapidAPI** - For accessing additional API services (optional)
- **RapidAPI** - For accessing additional API services (enables LinkedIn scraping and other tools)
- **Smithery** - For custom agents and workflows ([Get API key](https://smithery.ai/))

### 3. Required Software

@@ -99,6 +103,8 @@ The wizard will:

- Install dependencies
- Start Suna using your preferred method

The setup wizard has 14 steps and includes progress saving, so you can resume if interrupted.

### 3. Supabase Configuration

During setup, you'll need to:

@@ -122,6 +128,14 @@ As part of the setup, you'll need to:

- Image name: `kortix/suna:0.1.3`
- Entrypoint: `/usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord.conf`

### 5. QStash Configuration

QStash is required for background job processing, workflows, and webhook handling:

1. Create an account at [Upstash Console](https://console.upstash.com/qstash)
2. Get your QStash token and signing keys
3. Configure a publicly accessible webhook base URL for workflow callbacks (see the example below)
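
For local development the webhook base URL still has to be reachable from QStash's servers. One way to achieve that (an assumption, not the only option; any publicly reachable HTTPS endpoint that forwards to your backend works) is to tunnel the locally running backend with a tool such as ngrok and point `WEBHOOK_BASE_URL` at the generated URL:

```bash
# Expose the locally running backend (port 8000) over a public HTTPS URL
ngrok http 8000

# Then set the generated URL in the backend .env, for example:
# WEBHOOK_BASE_URL=https://<your-ngrok-subdomain>.ngrok.app
```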

## Manual Configuration

If you prefer to configure your installation manually, or if you need to modify the configuration after installation, here's what you need to know:

@@ -154,7 +168,8 @@ RABBITMQ_PORT=5672

# LLM Providers
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
MODEL_TO_USE=anthropic/claude-3-7-sonnet-latest
OPENROUTER_API_KEY=your-openrouter-key
MODEL_TO_USE=anthropic/claude-sonnet-4-20250514

# WEB SEARCH
TAVILY_API_KEY=your-tavily-key

@@ -168,6 +183,20 @@ DAYTONA_API_KEY=your-daytona-key

DAYTONA_SERVER_URL=https://app.daytona.io/api
DAYTONA_TARGET=us

# Background job processing (Required)
QSTASH_URL=https://qstash.upstash.io
QSTASH_TOKEN=your-qstash-token
QSTASH_CURRENT_SIGNING_KEY=your-current-signing-key
QSTASH_NEXT_SIGNING_KEY=your-next-signing-key
WEBHOOK_BASE_URL=https://yourdomain.com

# MCP Configuration
MCP_CREDENTIAL_ENCRYPTION_KEY=your-generated-encryption-key

# Optional APIs
RAPID_API_KEY=your-rapidapi-key
SMITHERY_API_KEY=your-smithery-key

NEXT_PUBLIC_URL=http://localhost:3000
```

@@ -183,8 +212,9 @@ Example configuration:

```sh
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
NEXT_PUBLIC_BACKEND_URL=http://backend:8000/api
NEXT_PUBLIC_BACKEND_URL=http://localhost:8000/api
NEXT_PUBLIC_URL=http://localhost:3000
NEXT_PUBLIC_ENV_MODE=LOCAL
```

## Post-Installation Steps

@@ -262,9 +292,21 @@ uv run dramatiq run_agent_background

   - Check for API usage limits or restrictions

4. **Daytona connection issues**

   - Verify Daytona API key
   - Check if the container image is correctly configured

5. **QStash/Webhook issues**

   - Verify QStash token and signing keys
   - Ensure webhook base URL is publicly accessible
   - Check QStash console for delivery status

6. **Setup wizard issues**

   - Delete `.setup_progress` file to reset the setup wizard (see the command below)
   - Check that all required tools are installed and accessible
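
To reset the wizard from the project root (a minimal example; the progress file is created by `setup.py` in the directory you ran it from):

```bash
# Remove the saved wizard progress and start the setup from scratch
rm .setup_progress
python setup.py
```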

### Logs

To view logs and diagnose issues:

@@ -286,6 +328,16 @@ cd backend

uv run dramatiq run_agent_background
```

### Resuming Setup

If the setup wizard is interrupted, you can resume from where you left off by running:

```bash
python setup.py
```

The wizard will detect your progress and continue from the last completed step.

---

For further assistance, join the [Suna Discord Community](https://discord.gg/Py6pCBUUPw) or check the [GitHub repository](https://github.com/kortix-ai/suna) for updates and issues.

@@ -1,7 +1,36 @@

# Suna frontend
# Suna Frontend

## Quick Setup

The easiest way to get your frontend configured is to use the setup wizard from the project root:

```bash
cd .. # Navigate to project root if you're in the frontend directory
python setup.py
```

This will configure all necessary environment variables automatically.

## Environment Configuration

The setup wizard automatically creates a `.env.local` file with the following configuration:

```sh
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
NEXT_PUBLIC_BACKEND_URL=http://localhost:8000/api
NEXT_PUBLIC_URL=http://localhost:3000
NEXT_PUBLIC_ENV_MODE=LOCAL
```

## Getting Started

Install dependencies:

```bash
npm install
```

Run the development server:

```bash

@@ -19,3 +48,10 @@ Run the production server:

```bash
npm run start
```

## Development Notes

- The frontend connects to the backend API at `http://localhost:8000/api`
- Supabase is used for authentication and database operations
- The app runs on `http://localhost:3000` by default
- Environment variables are automatically configured by the setup wizard
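
As a quick sanity check before starting the dev server (a sketch that assumes the wizard wrote `frontend/.env.local` as described above), you can confirm the expected variables are present:

```bash
# From the frontend directory: list the NEXT_PUBLIC_* variables the app will load
grep '^NEXT_PUBLIC_' .env.local
```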

setup.py (2 changed lines)

@@ -700,7 +700,7 @@ class SetupWizard:
        elif self.env_vars["llm"].get("ANTHROPIC_API_KEY"):
            self.env_vars["llm"][
                "MODEL_TO_USE"
            ] = "anthropic/claude-3-5-sonnet-latest"
            ] = "anthropic/claude-sonnet-4-20250514"
        elif self.env_vars["llm"].get("OPENROUTER_API_KEY"):
            self.env_vars["llm"][
                "MODEL_TO_USE"