README 0.1.8

marko-kraemer 2024-11-18 07:14:40 +01:00
parent b20c074ede
commit 9935daabe3
5 changed files with 155 additions and 513 deletions


@ -1,9 +1,14 @@
0.1.8
- Tool Parser Base Class
- Tool Executor Base Class
- `execute_tools_on_stream`: execute tools while the response is streaming
- Docstring docs
- Added base processor classes for extensible tool handling:
  - ToolParserBase: Abstract base class for parsing LLM responses
  - ToolExecutorBase: Abstract base class for tool execution strategies
  - ResultsAdderBase: Abstract base class for managing results
- Added dual support for OpenAPI and XML tool calling patterns:
  - XML schema decorator for XML-based tool definitions
  - XML-specific processors for parsing and execution
  - Standard processors for OpenAPI function calling
- Enhanced streaming capabilities:
  - `execute_tools_on_stream`: Execute tools in real-time during streaming
0.1.7
- Streaming Responses with Tool Calls
- v1 streaming responses

README.md

@ -2,10 +2,47 @@
AgentPress is a collection of _simple, but powerful_ utilities that serve as building blocks for creating AI agents. *Plug, play, and customize.*
- **Threads**: Simple message thread handling utilities
- **Tools**: Flexible tool definition and automatic execution
- **State Management**: Simple JSON key-value state management
## How It Works
Each AI agent iteration follows a clear, modular flow:
1. **Message & LLM Handling**
   - Messages are managed in threads via `ThreadManager`
   - LLM API calls are made through a unified interface (`llm.py`)
   - Supports streaming responses for real-time interaction
2. **Response Processing**
   - LLM returns both content and tool calls
   - Content is streamed in real-time
   - Tool calls are parsed using either:
     - Standard OpenAPI function calling
     - XML-based tool definitions
     - Custom parsers (extend `ToolParserBase`)
3. **Tool Execution**
   - Tools are executed either:
     - In real-time during streaming (`execute_tools_on_stream`)
     - After the complete response
     - In parallel or sequential order
   - Supports both standard and XML tool formats
   - Extensible through `ToolExecutorBase`
4. **Results Management**
   - Results from both content and tool executions are handled
   - Supports different result formats (standard/XML)
   - Customizable through `ResultsAdderBase`

This modular architecture allows you to:
- Use standard OpenAPI function calling
- Switch to XML-based tool definitions
- Create custom processors by extending base classes
- Mix and match different approaches (see the condensed sketch below)
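Condensed into code, one agent iteration looks roughly like this. It is a trimmed sketch of the Quick Start below (it reuses the `CalculatorTool` defined there), with comments mapping each call to the four steps above:
```python
import asyncio
from agentpress.thread_manager import ThreadManager
from calculator_tool import CalculatorTool  # the example tool from the Quick Start

async def run_iteration():
    manager = ThreadManager()
    manager.add_tool(CalculatorTool)                     # register a tool for this thread

    # 1. Message & LLM handling: messages live in a thread
    thread_id = await manager.create_thread()
    await manager.add_message(thread_id, {"role": "user", "content": "What's 2 + 2?"})

    response = await manager.run_thread(                 # unified LLM call via llm.py
        thread_id=thread_id,
        system_message={"role": "system", "content": "You are a helpful assistant."},
        model_name="anthropic/claude-3-5-sonnet-latest",
        stream=True,                                     # 2. content is streamed in real time
        native_tool_calling=True,                        # 2. tool calls parsed as OpenAPI function calls
        execute_tools=True,                              # 3. execute the parsed tool calls
        execute_tools_on_stream=True,                    # 3. ...while the response is still streaming
    )
    # 4. Results management: content and tool results are added back to the thread
    return response

asyncio.run(run_iteration())
```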
- **Threads**: Simple message thread handling utilities with streaming support
- **Tools**: Flexible tool definition with both OpenAPI and XML formats
- **State Management**: Thread-safe JSON key-value state management
- **LLM Integration**: Provider-agnostic LLM calls via LiteLLM
- **Response Processing**: Support for both standard and XML-based tool calling
## Installation & Setup
@ -19,7 +56,7 @@ pip install agentpress
agentpress init
```
Creates an `agentpress` directory with all the core utilities.
Check out [File Overview](#file-overview) for explanations of the generated util files.
Check out [File Overview](#file-overview) for explanations of the generated files.
3. If you selected the example agent during initialization:
- Creates an `agent.py` file with a web development agent example
@ -31,24 +68,31 @@ Check out [File Overview](#file-overview) for explanations of the generated util
## Quick Start
1. Set up your environment variables (API keys, etc.) in a `.env` file.
- OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY, etc. Set the key for whichever LLM you want to use; AgentPress uses LiteLLM (https://litellm.ai) to call 100+ LLMs with the OpenAI input/output format. Also check agentpress/llm.py and modify it as needed to support your preferred LLM.
1. Set up your environment variables in a `.env` file:
```bash
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
```
2. Create a calculator_tool.py
2. Create a calculator tool with OpenAPI schema:
```python
from agentpress.tool import Tool, ToolResult, tool_schema
from agentpress.tool import Tool, ToolResult, openapi_schema
class CalculatorTool(Tool):
@tool_schema({
"name": "add",
"description": "Add two numbers",
"parameters": {
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"}
},
"required": ["a", "b"]
@openapi_schema({
"type": "function",
"function": {
"name": "add",
"description": "Add two numbers",
"parameters": {
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"}
},
"required": ["a", "b"]
}
}
})
async def add(self, a: float, b: float) -> ToolResult:
@ -59,7 +103,29 @@ class CalculatorTool(Tool):
return self.fail_response(f"Failed to add numbers: {str(e)}")
```
3. Use the Thread Manager: create a new thread (or access an existing one), add the Calculator Tool, and run the thread. It will automatically use and execute the Python function associated with the tool:
3. Or create a tool with XML schema:
```python
from agentpress.tool import Tool, ToolResult, xml_schema
class FilesTool(Tool):
@xml_schema(
tag_name="create-file",
mappings=[
{"param_name": "file_path", "node_type": "attribute", "path": "."},
{"param_name": "file_contents", "node_type": "content", "path": "."}
],
example='''
<create-file file_path="path/to/file">
File contents go here
</create-file>
'''
)
async def create_file(self, file_path: str, file_contents: str) -> ToolResult:
        try:
            with open(file_path, "w") as f:
                f.write(file_contents)
            return self.success_response(f"Created file: {file_path}")
        except Exception as e:
            return self.fail_response(f"Failed to create file: {str(e)}")
```
4. Use the Thread Manager with streaming and tool execution (an XML-based variant is sketched after this list):
```python
import asyncio
from typing import AsyncGenerator
from agentpress.thread_manager import ThreadManager
from calculator_tool import CalculatorTool
@ -71,67 +137,93 @@ async def main():
manager.add_tool(CalculatorTool)
# Create a new thread
# Alternatively, you could use an existing thread_id like:
# thread_id = "existing-thread-uuid"
thread_id = await manager.create_thread()
# Add your custom logic here
# Add your message
await manager.add_message(thread_id, {
"role": "user",
"content": "What's 2 + 2?"
})
# Run with streaming and tool execution
response = await manager.run_thread(
thread_id=thread_id,
system_message={
"role": "system",
"content": "You are a helpful assistant with calculation abilities."
},
model_name="gpt-4",
use_tools=True,
execute_tool_calls=True
model_name="anthropic/claude-3-5-sonnet-latest",
stream=True,
native_tool_calling=True,
execute_tools=True,
execute_tools_on_stream=True
)
print("Response:", response)
# Handle streaming response
if isinstance(response, AsyncGenerator):
async for chunk in response:
if hasattr(chunk.choices[0], 'delta'):
delta = chunk.choices[0].delta
if hasattr(delta, 'content') and delta.content:
print(delta.content, end='', flush=True)
asyncio.run(main())
```
4. Autonomous Web Developer Agent (the standard example)
When you run `agentpress init` and select the example agent, you get a simple implementation of an AI web developer agent that uses an architecture similar to our own [Softgen](https://softgen.ai/) platform.
- **Files Tool**: Allows the agent to create, read, update, and delete files within the workspace.
- **Terminal Tool**: Enables the agent to execute terminal commands.
- **State Workspace Management**: The agent has access to a workspace whose state is stored and sent on every request. This state includes all file contents, ensuring the agent knows what it is editing.
- **User Interaction via CLI**: After each action, the agent pauses and allows the user to provide further instructions through the CLI.
You can find the complete implementation in our [example-agent](agentpress/examples/example-agent/agent.py) directory.
5. Thread Viewer
Run the thread viewer to view thread messages in a stylised web UI:
5. View conversation threads in a web UI:
```bash
streamlit run agentpress/thread_viewer_ui.py
```
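As noted in step 4, the XML pattern from step 3 can drive the same loop. The following is a hypothetical sketch meant as a drop-in continuation of the `main()` coroutine from step 4: the `xml_tool_calling` flag is an assumption inferred from the dual-format support described above, so check the `run_thread` signature in your generated `thread_manager.py` for the actual parameter name.
```python
# Hypothetical sketch, inside the same async main() as step 4.
# The xml_tool_calling flag is an assumption, not a confirmed parameter name.
manager.add_tool(FilesTool)  # the XML-schema tool from step 3

response = await manager.run_thread(
    thread_id=thread_id,
    system_message={
        "role": "system",
        "content": "You are a web developer. Create files using <create-file> tool calls."
    },
    model_name="anthropic/claude-3-5-sonnet-latest",
    stream=True,
    native_tool_calling=False,     # turn off OpenAPI function calling
    xml_tool_calling=True,         # assumed flag: parse <create-file ...> style calls instead
    execute_tools=True,
    execute_tools_on_stream=True,
)
```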
## File Overview
### agentpress/llm.py
Core LLM API interface using LiteLLM. Supports 100+ LLMs using the OpenAI Input/Output Format. Easy to extend for custom model configurations and API endpoints. `make_llm_api_call()` can be imported to make LLM calls.
### Core Components
### agentpress/thread_manager.py
Orchestrates conversations between users, LLMs, and tools. Manages message history and automatically handles tool execution when LLMs request them. Tools registered here become available for LLM function calls.
#### agentpress/llm.py
LLM API interface using LiteLLM. Supports 100+ LLMs with OpenAI-compatible format. Includes streaming, retry logic, and error handling.
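As a minimal sketch, `make_llm_api_call()` can also be used on its own. The keyword arguments shown (and the assumption that the call is awaited) should be verified against the signature in your generated `llm.py`:
```python
import asyncio
from agentpress.llm import make_llm_api_call

async def main():
    # Assumed keyword arguments — check llm.py for the exact signature in your version.
    response = await make_llm_api_call(
        messages=[{"role": "user", "content": "Say hello"}],
        model_name="anthropic/claude-3-5-sonnet-latest",
        stream=False,
    )
    # Responses follow the OpenAI output format.
    print(response.choices[0].message.content)

asyncio.run(main())
```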
### agentpress/tool.py
Base infrastructure for LLM-compatible tools. Inherit from `Tool` class and use `@tool_schema` decorator to create tools that are automatically registered for LLM function calling. Returns standardized `ToolResult` responses.
#### agentpress/thread_manager.py
Manages conversation threads with support for:
- Message history management
- Tool registration and execution
- Streaming responses
- Both OpenAPI and XML tool calling patterns
### agentpress/tool_registry.py
Central registry for tool management. Keeps track of available tools and their schemas, allowing selective function registration. Works with `thread_manager.py` to expose tools to LLMs.
#### agentpress/tool.py
Base infrastructure for tools with:
- OpenAPI schema decorator for standard function calling
- XML schema decorator for XML-based tool calls
- Standardized ToolResult responses
### agentpress/state_manager.py
Simple key-value based state persistence using JSON files. For maintaining environment state, settings, or other persistent data.
#### agentpress/tool_registry.py
Central registry for tool management:
- Registers both OpenAPI and XML tools
- Maintains tool schemas and implementations
- Provides tool lookup and validation
#### agentpress/state_manager.py
Thread-safe state persistence:
- JSON-based key-value storage
- Atomic operations with locking
- Automatic file handling
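A minimal usage sketch, assuming `StateManager` has a no-argument constructor backed by a JSON file and async `set`/`get` methods; the exact names may differ, so check your generated `state_manager.py`:
```python
import asyncio
from agentpress.state_manager import StateManager

async def main():
    state = StateManager()                              # assumed default JSON-backed store
    await state.set("settings", {"theme": "dark"})      # assumed method name
    settings = await state.get("settings")              # assumed method name
    print(settings)

asyncio.run(main())
```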
### Response Processing
#### agentpress/llm_response_processor.py
Handles LLM response processing with support for:
- Streaming and complete responses
- Tool call extraction and execution
- Result formatting and message management
#### Standard Processing
- `standard_tool_parser.py`: Parses OpenAPI function calls
- `standard_tool_executor.py`: Executes standard tool calls
- `standard_results_adder.py`: Manages standard results
#### XML Processing
- `xml_tool_parser.py`: Parses XML-formatted tool calls
- `xml_tool_executor.py`: Executes XML tool calls
- `xml_results_adder.py`: Manages XML results
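Custom processors extend the base classes introduced in 0.1.8 (`ToolParserBase`, `ToolExecutorBase`, `ResultsAdderBase`). The sketch below is hypothetical: the module path, abstract method name, and return shape are assumptions for illustration only, so check the generated base-processor files for the real interface.
```python
# Hypothetical sketch — module path, method name, and return shape are assumptions.
from agentpress.base_processors import ToolParserBase  # assumed module path

class MarkdownToolParser(ToolParserBase):
    """Custom parser that would extract tool calls from a bespoke text format."""

    async def parse_response(self, response):  # assumed abstract method name
        content = response.choices[0].message.content
        tool_calls = []  # ...parse `content` into tool call dicts here...
        # Assumed return shape: an assistant message plus any extracted tool calls.
        return {"role": "assistant", "content": content, "tool_calls": tool_calls}
```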
## Philosophy
- **Plug & Play**: Start with our defaults, then customize to your needs.
@ -160,7 +252,7 @@ pip install poetry
poetry install
```
3. For quick testing, you can install directly from the current directory:
3. For quick testing:
```bash
pip install -e .
```


@ -1,94 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Modern Landing Page</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<header>
<nav class="navbar">
<div class="logo">Brand</div>
<ul class="nav-links">
<li><a href="#home">Home</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
<div class="hamburger">
<span></span>
<span></span>
<span></span>
</div>
</nav>
</header>
<main>
<section id="home" class="hero">
<div class="hero-content">
<h1>Welcome to the Future</h1>
<p>Experience innovation at its finest</p>
<button class="cta-button">Get Started</button>
</div>
</section>
<section id="testimonials" class="testimonials">
<h2>What Our Clients Say</h2>
<div class="testimonial-grid">
<div class="testimonial-card">
<div class="testimonial-avatar">👤</div>
<p class="testimonial-text">"Amazing service! The team went above and beyond."</p>
<p class="testimonial-author">- John Doe, CEO</p>
</div>
<div class="testimonial-card">
<div class="testimonial-avatar">👤</div>
<p class="testimonial-text">"Incredible results. Would highly recommend!"</p>
<p class="testimonial-author">- Jane Smith, Designer</p>
</div>
<div class="testimonial-card">
<div class="testimonial-avatar">👤</div>
<p class="testimonial-text">"Professional and efficient service."</p>
<p class="testimonial-author">- Mike Johnson, Developer</p>
</div>
</div>
</section>
<section id="features" class="features">
<h2>Our Features</h2>
<div class="feature-grid">
<div class="feature-card">
<div class="feature-icon">🚀</div>
<h3>Fast Performance</h3>
<p>Lightning-quick loading times</p>
</div>
<div class="feature-card">
<div class="feature-icon">🎨</div>
<h3>Beautiful Design</h3>
<p>Stunning visuals and animations</p>
</div>
<div class="feature-card">
<div class="feature-icon">📱</div>
<h3>Responsive</h3>
<p>Works on all devices</p>
</div>
</div>
</section>
<section id="contact" class="contact">
<h2>Contact Us</h2>
<form id="contact-form">
<input type="text" placeholder="Name" required>
<input type="email" placeholder="Email" required>
<textarea placeholder="Message" required></textarea>
<button type="submit">Send Message</button>
</form>
</section>
</main>
<footer>
<p>&copy; 2024 Brand. All rights reserved.</p>
</footer>
<script src="script.js"></script>
</body>
</html>


@ -1,69 +0,0 @@
document.addEventListener('DOMContentLoaded', () => {
const handleIntersection = (entries, observer) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
entry.target.classList.add('animate');
observer.unobserve(entry.target);
}
});
};
const observerOptions = {
threshold: 0.2,
rootMargin: '0px'
};
const animationObserver = new IntersectionObserver(handleIntersection, observerOptions);
document.querySelectorAll('.feature-card, .testimonial-card').forEach(element => {
animationObserver.observe(element);
});
const hamburger = document.querySelector('.hamburger');
const navLinks = document.querySelector('.nav-links');
const links = document.querySelectorAll('.nav-links a');
hamburger.addEventListener('click', () => {
navLinks.classList.toggle('active');
});
links.forEach(link => {
link.addEventListener('click', (e) => {
e.preventDefault();
const targetId = link.getAttribute('href');
const targetSection = document.querySelector(targetId);
targetSection.scrollIntoView({
behavior: 'smooth'
});
if (window.innerWidth <= 768) {
navLinks.style.display = 'none';
}
});
});
const contactForm = document.getElementById('contact-form');
contactForm.addEventListener('submit', (e) => {
e.preventDefault();
const formData = new FormData(contactForm);
const formObject = Object.fromEntries(formData);
alert('Message sent successfully!');
contactForm.reset();
});
const observer = new IntersectionObserver((entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
entry.target.style.opacity = '1';
entry.target.style.transform = 'translateY(0)';
}
});
}, { threshold: 0.1 });
document.querySelectorAll('.feature-card').forEach(card => {
card.style.opacity = '0';
card.style.transform = 'translateY(20px)';
observer.observe(card);
});
});


@ -1,292 +0,0 @@
:root {
--primary-color: #2563eb;
--secondary-color: #1e40af;
--text-color: #1f2937;
--background-color: #ffffff;
--accent-color: #dbeafe;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
line-height: 1.6;
color: var(--text-color);
}
.navbar {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1rem 5%;
position: fixed;
width: 100%;
background: rgba(255, 255, 255, 0.95);
backdrop-filter: blur(10px);
z-index: 1000;
}
.logo {
font-size: 1.5rem;
font-weight: bold;
color: var(--primary-color);
}
.nav-links {
display: flex;
gap: 2rem;
list-style: none;
}
@media (max-width: 768px) {
.nav-links {
position: fixed;
top: 70px;
left: 0;
right: 0;
flex-direction: column;
background: rgba(255, 255, 255, 0.98);
padding: 2rem;
gap: 1.5rem;
text-align: center;
transform: translateY(-100%);
transition: transform 0.3s ease;
}
.nav-links.active {
transform: translateY(0);
}
}
.nav-links a {
text-decoration: none;
color: var(--text-color);
transition: color 0.3s ease;
}
.nav-links a:hover {
color: var(--primary-color);
}
.hamburger {
display: none;
flex-direction: column;
gap: 4px;
cursor: pointer;
}
.hamburger span {
width: 25px;
height: 3px;
background: var(--text-color);
transition: 0.3s ease;
}
.hero {
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
background: linear-gradient(135deg, var(--accent-color), var(--background-color));
padding: 2rem;
}
.hero-content {
text-align: center;
max-width: 800px;
}
.hero h1 {
font-size: 3.5rem;
margin-bottom: 1rem;
animation: fadeInUp 1s ease;
}
.hero p {
font-size: 1.25rem;
margin-bottom: 2rem;
animation: fadeInUp 1s ease 0.2s;
opacity: 0;
animation-fill-mode: forwards;
}
.cta-button {
padding: 1rem 2rem;
font-size: 1.1rem;
background: var(--primary-color);
color: white;
border: none;
border-radius: 5px;
cursor: pointer;
transition: background 0.3s ease;
animation: fadeInUp 1s ease 0.4s;
opacity: 0;
animation-fill-mode: forwards;
}
.cta-button:hover {
background: var(--secondary-color);
}
.features {
padding: 5rem 2rem;
background: var(--background-color);
}
.features h2 {
text-align: center;
margin-bottom: 3rem;
}
.feature-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 2rem;
max-width: 1200px;
margin: 0 auto;
}
.feature-card {
padding: 2rem;
text-align: center;
background: white;
border-radius: 10px;
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
transition: transform 0.3s ease;
}
.feature-card:hover {
transform: translateY(-5px);
}
.testimonials {
padding: 5rem 2rem;
background: linear-gradient(135deg, var(--accent-color) 0%, var(--background-color) 100%);
}
.testimonial-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: 2rem;
max-width: 1200px;
margin: 0 auto;
}
.testimonial-card {
background: white;
padding: 2rem;
border-radius: 10px;
box-shadow: 0 4px 15px rgba(0, 0, 0, 0.1);
text-align: center;
transition: transform 0.3s ease;
}
.testimonial-card:hover {
transform: translateY(-5px);
}
.testimonial-avatar {
font-size: 3rem;
margin-bottom: 1rem;
}
.testimonial-text {
font-style: italic;
margin-bottom: 1rem;
color: var(--text-color);
}
.testimonial-author {
font-weight: bold;
color: var(--primary-color);
}
.feature-icon {
font-size: 2.5rem;
margin-bottom: 1rem;
}
.contact {
padding: 5rem 2rem;
background: var(--accent-color);
}
.contact h2 {
text-align: center;
margin-bottom: 3rem;
}
#contact-form {
display: flex;
flex-direction: column;
gap: 1rem;
max-width: 600px;
margin: 0 auto;
}
#contact-form input,
#contact-form textarea {
padding: 1rem;
border: 1px solid #ddd;
border-radius: 5px;
font-size: 1rem;
}
#contact-form textarea {
height: 150px;
resize: vertical;
}
#contact-form button {
padding: 1rem;
background: var(--primary-color);
color: white;
border: none;
border-radius: 5px;
cursor: pointer;
transition: background 0.3s ease;
}
#contact-form button:hover {
background: var(--secondary-color);
}
footer {
text-align: center;
padding: 2rem;
background: var(--text-color);
color: white;
}
@keyframes fadeInUp {
from {
opacity: 0;
transform: translateY(20px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
@media (max-width: 768px) {
.nav-links {
display: none;
}
.hamburger {
display: flex;
}
.hero h1 {
font-size: 2.5rem;
}
.feature-grid {
grid-template-columns: 1fr;
}
}