mirror of https://github.com/kortix-ai/suna.git

README.md, commit 65a350f21f (parent 9935daabe3)
AgentPress is a collection of _simple, but powerful_ utilities that serve as building blocks for creating AI agents. *Plug, play, and customize.*
(flow diagram image)

See [How It Works](#how-it-works) for an explanation of this flow.
## Core Components

- **Threads**: Manage `Messages[]` as threads.
- **Tools**: Register code as callable tools, with definitions in both OpenAPI and XML formats.
- **Response Processing**: Supports both native-LLM OpenAPI and XML-based tool calling.
- **State Management**: Thread-safe JSON key-value state management.
- **LLM**: 100+ LLMs using the OpenAI I/O format, powered by LiteLLM.
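As a rough illustration of the dual-format idea, one function can be exposed under both an OpenAPI schema and an XML tag. The class and method names below are hypothetical, not AgentPress's actual API:

```python
import json
import xml.etree.ElementTree as ET

class CalculatorTool:
    """Hypothetical tool exposing one function under two schema formats."""

    def add(self, a: float, b: float) -> float:
        return a + b

    # OpenAPI-style function definition (the format native LLM tool calling uses)
    openapi_schema = {
        "name": "add",
        "description": "Add two numbers.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["a", "b"],
        },
    }

    def call_from_openapi(self, arguments_json: str) -> float:
        # Native tool calls arrive as a JSON argument string
        args = json.loads(arguments_json)
        return self.add(float(args["a"]), float(args["b"]))

    def call_from_xml(self, xml_text: str) -> float:
        # XML-based calls arrive as a tag like <add a="2" b="3"/>
        node = ET.fromstring(xml_text)
        return self.add(float(node.get("a")), float(node.get("b")))

tool = CalculatorTool()
print(tool.call_from_openapi('{"a": 2, "b": 3}'))  # 5.0
print(tool.call_from_xml('<add a="2" b="3"/>'))    # 5.0
```

Either entry point dispatches to the same implementation, so a tool is written once and callable from whichever format the model emits.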
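The thread-safe JSON key-value idea can be sketched as a lock around file-backed reads and writes. This is illustrative only; the class name and methods are assumptions, and AgentPress's real state manager may differ:

```python
import json
import os
import tempfile
import threading

class JsonStateStore:
    """Minimal sketch of a thread-safe JSON key-value store (hypothetical API)."""

    def __init__(self, path: str):
        self.path = path
        self.lock = threading.Lock()
        if not os.path.exists(path):
            with open(path, "w") as f:
                json.dump({}, f)

    def set(self, key: str, value) -> None:
        with self.lock:  # one writer at a time keeps the file consistent
            with open(self.path) as f:
                state = json.load(f)
            state[key] = value
            with open(self.path, "w") as f:
                json.dump(state, f)

    def get(self, key: str, default=None):
        with self.lock:
            with open(self.path) as f:
                state = json.load(f)
            return state.get(key, default)

store = JsonStateStore(os.path.join(tempfile.gettempdir(), "agent_state.json"))
store.set("step", 3)
print(store.get("step"))  # 3
```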
## Installation & Setup

…

Check out [File Overview](#file-overview) for explanations of the generated file…

- Creates a `workspace` directory for the agent to work in
## Quick Start

1. Set up your environment variables in a `.env` file:

…

```
streamlit run agentpress/thread_viewer_ui.py
```
## How It Works

Each AI agent iteration follows a clear, modular flow:

1. **Message & LLM Handling**
   - Messages are managed in threads via `ThreadManager`
   - LLM API calls are made through a unified interface (`llm.py`)
   - Supports streaming responses for real-time interaction

2. **Response Processing**
   - The LLM returns both content and tool calls
   - Content is streamed in real-time
   - Tool calls are parsed using either:
     - Standard OpenAPI function calling
     - XML-based tool definitions
     - Custom parsers (extend `ToolParserBase`)

3. **Tool Execution**
   - Tools are executed either:
     - In real-time during streaming (`execute_tools_on_stream`)
     - After the complete response
   - In parallel or sequential order
   - Supports both standard and XML tool formats
   - Extensible through `ToolExecutorBase`

4. **Results Management**
   - Results from both content and tool executions are handled
   - Supports different result formats (standard/XML)
   - Customizable through `ResultsAdderBase`
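The four steps above can be sketched as a single loop. The LLM call is mocked, and the message and tool-call shapes are simplified stand-ins, not AgentPress's actual classes:

```python
import json

def mock_llm(messages):
    # Stand-in for the unified LLM interface (step 1): returns content
    # plus an OpenAPI-style tool call.
    return {
        "content": "I'll look that up.",
        "tool_calls": [{"name": "get_time", "arguments": json.dumps({"tz": "UTC"})}],
    }

def get_time(tz: str) -> str:
    return f"12:00 {tz}"  # fixed value for the sketch

TOOLS = {"get_time": get_time}

def run_iteration(messages):
    # 1. Message & LLM handling
    response = mock_llm(messages)
    # 2. Response processing: collect content and parse tool calls
    messages.append({"role": "assistant", "content": response["content"]})
    # 3. Tool execution (sequential here; could run in parallel or on-stream)
    for call in response["tool_calls"]:
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        # 4. Results management: feed tool output back into the thread
        messages.append({"role": "tool", "name": call["name"], "content": result})
    return messages

thread = run_iteration([{"role": "user", "content": "What time is it?"}])
print(thread[-1]["content"])  # 12:00 UTC
```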
This modular architecture allows you to:

- Use standard OpenAPI function calling
- Switch to XML-based tool definitions
- Create custom processors by extending base classes
- Mix and match different approaches
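To make the "extend base classes" point concrete, here is a sketch of a custom XML parser. `ToolParserBase`'s real interface is not shown in this README, so the abstract `parse` method below is an assumption, not the actual AgentPress signature:

```python
import re
from abc import ABC, abstractmethod

class ToolParserBase(ABC):
    # Assumed interface for illustration; the real base class may differ.
    @abstractmethod
    def parse(self, text: str) -> list[dict]:
        """Extract tool calls from raw LLM output."""

class XmlToolParser(ToolParserBase):
    # Matches self-closing tags such as <search query="cats"/>
    TAG = re.compile(r"<(\w+)((?:\s+\w+=\"[^\"]*\")*)\s*/>")
    ATTR = re.compile(r"(\w+)=\"([^\"]*)\"")

    def parse(self, text: str) -> list[dict]:
        calls = []
        for name, attrs in self.TAG.findall(text):
            calls.append({"name": name, "arguments": dict(self.ATTR.findall(attrs))})
        return calls

parser = XmlToolParser()
print(parser.parse('Sure! <search query="cats"/>'))
# [{'name': 'search', 'arguments': {'query': 'cats'}}]
```

Swapping in a parser like this changes only how tool calls are extracted; execution and results handling stay untouched, which is the point of the base-class seams.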
## File Overview

### Core Components
(Binary image file added in this commit; not shown. Size: 50 KiB.)