marko-kraemer 2024-10-23 03:42:38 +02:00
parent 3f69ea9cc4
commit 8a407efc27
10 changed files with 287 additions and 247 deletions

LICENSE (new file)
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Kortix AI Corp
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

README.md
@@ -1,79 +1,178 @@
-# agentpress
+# AgentPress
-AgentPress simplifies the process of creating AI agents by providing a robust thread management system and a flexible tool integration mechanism. With AgentPress, you can easily create, configure, and run AI agents that can engage in conversations, perform tasks, and interact with various tools.
+AgentPress is a powerful framework for creating AI agents, with the ThreadManager at its core. This system simplifies the process of building, configuring, and running AI agents that can engage in conversations, perform tasks, and interact with various tools.
-### Key Features
+## Key Concept: ThreadManager
-- **Thread Management System**: Manage conversations and task executions through a sophisticated thread system.
-- **Flexible Tool Integration**: Easily integrate and use custom tools within your AI agents.
-- **Configurable Agent Behavior**: Fine-tune your agent's behavior with customizable settings and callbacks.
-- **Autonomous Iterations**: Allow your agent to run multiple iterations autonomously.
-- **State Management**: Control your agent's behavior at different stages of execution.
+The ThreadManager is the central component of AgentPress. It manages conversation threads, handles tool integrations, and coordinates the execution of AI models. Here's why it's crucial:
-### Getting Started
+1. **Conversation Management**: It creates and manages threads, allowing for coherent multi-turn conversations.
+2. **Tool Integration**: It integrates various tools that the AI can use to perform tasks.
+3. **Model Execution**: It handles the execution of AI models, managing the context and responses.
+4. **State Management**: It maintains the state of conversations and tool executions across multiple turns.
-To get started with AgentPress, all you need to do is write your agent similar to the example in `agent.py`. Here's a basic outline:
+## How It Works
-1. Import necessary modules:
-```python
-from agentpress.db import Database
-from agentpress.thread_manager import ThreadManager
-from tools.files_tool import FilesTool
-```
+1. **Create a ThreadManager**: This is your first step in using AgentPress.
+2. **Add Tools**: Register any tools your agent might need.
+3. **Create a Thread**: Each conversation or task execution is managed in a thread.
+4. **Run the Thread**: Execute the AI model within the context of the thread, optionally using tools.
-2. Create a ThreadManager instance:
-```python
-db = Database()
-manager = ThreadManager(db)
-```
+## Standalone Example
-3. Set up your agent's configuration:
-```python
-settings = {
-    "thread_id": thread_id,
-    "system_message": system_message,
-    "model_name": "gpt-4",
-    "temperature": 0.7,
-    "max_tokens": 150,
-    "autonomous_iterations_amount": 3,
-    "continue_instructions": "Continue the conversation...",
-    "tools": list(tool_schemas.keys()),
-    "tool_choice": "auto"
-}
-```
+Here's how to use the ThreadManager standalone:
-4. Define callback functions (optional):
-```python
-def initializer():
-    # Code to run at the start of the thread
-def pre_iteration():
-    # Code to run before each iteration
-def after_iteration():
-    # Code to run after each iteration
-def finalizer():
-    # Code to run at the end of the thread
-```
-5. Run your agent:
-```python
-response = await manager.run_thread(settings)
-```
-### Documentation
-The core of AgentPress is the `ThreadManager` class in `thread_manager.py`. It provides a comprehensive thread management system where you can:
-- Create and manage threads
-- Add messages to threads
-- Run threads with specific settings
-- Configure autonomous iterations
-- Integrate and use tools
-Tools in AgentPress are based on the `Tool` class defined in `tool.py`. You can create custom tools by inheriting from this class and implementing the required methods. An example is the `FilesTool` in `files_tool.py`.
-For more detailed documentation, please refer to the comments in the source code files.
+```python
+import asyncio
+from agentpress.thread_manager import ThreadManager
+from tools.files_tool import FilesTool
+
+async def main():
+    # Create a ThreadManager instance
+    thread_manager = ThreadManager()
+
+    # Add a tool
+    thread_manager.add_tool(FilesTool)
+
+    # Create a new thread
+    thread_id = await thread_manager.create_thread()
+
+    # Add an initial message to the thread
+    await thread_manager.add_message(thread_id, {"role": "user", "content": "Create a file named 'hello.txt' with the content 'Hello, World!'"})
+
+    # Run the thread
+    response = await thread_manager.run_thread(
+        thread_id=thread_id,
+        system_message={"role": "system", "content": "You are a helpful assistant that can create and manage files."},
+        model_name="gpt-4",
+        temperature=0.7,
+        max_tokens=150,
+        tool_choice="auto"
+    )
+
+    # Print the response
+    print(response)
+
+    # You can continue the conversation by adding more messages and running the thread again
+    await thread_manager.add_message(thread_id, {"role": "user", "content": "Now read the contents of 'hello.txt'"})
+    response = await thread_manager.run_thread(
+        thread_id=thread_id,
+        system_message={"role": "system", "content": "You are a helpful assistant that can create and manage files."},
+        model_name="gpt-4",
+        temperature=0.7,
+        max_tokens=150,
+        tool_choice="auto"
+    )
+    print(response)
+
+if __name__ == "__main__":
+    asyncio.run(main())
+```
+
+This example demonstrates how to:
+1. Create a ThreadManager
+2. Add a tool (FilesTool)
+3. Create a new thread
+4. Add messages to the thread
+5. Run the thread, which executes the AI model and potentially uses tools
+6. Continue the conversation with additional messages and thread runs
+
+## Building More Complex Agents
+While the ThreadManager can be used standalone, it's also the foundation for building more complex agents. You can create custom agent behaviors by defining initialization, pre-iteration, post-iteration, and finalization steps, setting up loops for autonomous iterations, and implementing custom logic for when and how to run threads.
+
+Here's an example of a more complex agent implementation using the `run_agent` function:
+```python
+async def run_agent(
+    thread_manager: ThreadManager,
+    thread_id: int,
+    max_iterations: int = 10
+):
+    async def init():
+        # Initialization code here
+        pass
+
+    async def pre_iteration():
+        # Pre-iteration code here
+        pass
+
+    async def after_iteration():
+        # Post-iteration code here
+        await thread_manager.add_message(thread_id, {"role": "user", "content": "CREATE MORE RANDOM FILES WITH RANDOM CONTENTS. JUST CREATE IT NO QUESTIONS PLEASE."})
+
+    async def finalizer():
+        # Finalization code here
+        pass
+
+    await init()
+
+    iteration = 0
+    while iteration < max_iterations:
+        iteration += 1
+        await pre_iteration()
+
+        system_message = {"role": "system", "content": "You are a helpful assistant that can create, read, update, and delete files."}
+        model_name = "gpt-4"
+        response = await thread_manager.run_thread(
+            thread_id=thread_id,
+            system_message=system_message,
+            model_name=model_name,
+            temperature=0.7,
+            max_tokens=150,
+            tool_choice="auto",
+            additional_message=None,
+            execute_tools_async=False,
+            execute_model_tool_calls=True
+        )
+
+        await after_iteration()
+
+    await finalizer()
+
+# Usage
+if __name__ == "__main__":
+    async def main():
+        thread_manager = ThreadManager()
+        thread_id = await thread_manager.create_thread()
+        await thread_manager.add_message(thread_id, {"role": "user", "content": "Please create a file with a random name with the content 'Hello, world!'"})
+        thread_manager.add_tool(FilesTool)
+        await run_agent(
+            thread_manager=thread_manager,
+            thread_id=thread_id,
+            max_iterations=5
+        )
+    asyncio.run(main())
+```
+
+This more complex example shows how to:
+1. Define custom behavior for different stages of the agent's execution
+2. Set up a loop for multiple iterations
+3. Use the ThreadManager within a larger agent structure
+
+## Documentation
+For more detailed information about the AgentPress components:
+- `ThreadManager`: The core class that manages threads, tools, and model execution.
+- `Tool`: Base class for creating custom tools that can be used by the AI.
+- `ToolRegistry`: Manages the registration and retrieval of tools.
+
+Refer to the comments in the source code files for comprehensive documentation on each component.
+
+## Contributing
+We welcome contributions to AgentPress! Please feel free to submit issues, fork the repository and send pull requests!
+
+## License
+[MIT License](LICENSE)
+
+Built with ❤️ by [Kortix AI Corp](https://www.kortix.ai)
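The README describes tools as subclasses of a `Tool` base class with `success_response`/`fail_response` helpers. The snippet below is a self-contained sketch of that pattern; the `Tool` and `ToolResult` definitions here are hypothetical stand-ins mirroring the described interface, not the actual `agentpress.tool` source:

```python
import asyncio
from dataclasses import dataclass
from typing import Any

# Hypothetical stand-ins for agentpress.tool.Tool and ToolResult,
# mirroring the interface the README describes.
@dataclass
class ToolResult:
    success: bool
    output: Any

class Tool:
    def success_response(self, data: Any) -> ToolResult:
        return ToolResult(success=True, output=data)

    def fail_response(self, msg: str) -> ToolResult:
        return ToolResult(success=False, output=msg)

# A custom tool is a subclass whose async methods return a ToolResult.
class EchoTool(Tool):
    async def echo(self, text: str) -> ToolResult:
        try:
            return self.success_response({"echo": text.upper()})
        except Exception as e:
            return self.fail_response(f"Error: {e}")

result = asyncio.run(EchoTool().echo("hello"))
print(result)  # ToolResult(success=True, output={'echo': 'HELLO'})
```

Wrapping every outcome in a `ToolResult` keeps tool failures as data the model can react to, rather than exceptions that abort the thread run.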

agent.py
@@ -1,60 +1,67 @@
 import asyncio
-from agentpress.db import Database
+from typing import Dict, Any
 from agentpress.thread_manager import ThreadManager
 from tools.files_tool import FilesTool
-async def run_agent():
-    db = Database()
-    manager = ThreadManager(db)
+async def run_agent(
+    thread_manager: ThreadManager,
+    thread_id: int,
+    max_iterations: int = 10
+):
-    thread_id = await manager.create_thread()
-    await manager.add_message(thread_id, {"role": "user", "content": "Let's have a conversation about artificial intelligence and create a file summarizing our discussion."})
-    system_message = {"role": "system", "content": "You are an AI expert engaging in a conversation about artificial intelligence. You can also create and manage files."}
-    files_tool = FilesTool()
-    tool_schemas = files_tool.get_schemas()
+    async def init():
+        pass
-    def initializer():
-        print("Initializing thread run...")
-        manager.run_config['temperature'] = 0.8
+    async def pre_iteration():
+        pass
-    def pre_iteration():
-        print(f"Preparing iteration {manager.current_iteration}...")
-        manager.run_config['max_tokens'] = 200 if manager.current_iteration > 3 else 150
+    async def after_iteration():
+        await thread_manager.add_message(thread_id, {"role": "user", "content": "CREATE MORE RANDOM FILES WITH RANDOM CONTENTS. JUST CREATE IT NO QUESTIONS PLEASE."})
-    def after_iteration():
-        print(f"Completed iteration {manager.current_iteration}. Status: {manager.run_config['status']}")
-        manager.run_config['continue_instructions'] = "Let's focus more on AI ethics in the next iteration and update our summary file."
+    async def finalizer():
+        pass
-    def finalizer():
-        print(f"Thread run finished with status: {manager.run_config['status']}")
-        print(f"Final configuration: {manager.run_config}")
+    await init()
-    settings = {
-        "thread_id": thread_id,
-        "system_message": system_message,
-        "model_name": "gpt-4",
-        "temperature": 0.7,
-        "max_tokens": 150,
-        "autonomous_iterations_amount": 3,
-        "continue_instructions": "Continue the conversation about AI, introducing new aspects or asking thought-provoking questions. Don't forget to update our summary file.",
-        "initializer": initializer,
-        "pre_iteration": pre_iteration,
-        "after_iteration": after_iteration,
-        "finalizer": finalizer,
-        "tools": list(tool_schemas.keys()),
-        "tool_choice": "auto"
-    }
+    iteration = 0
+    while iteration < max_iterations:
+        iteration += 1
+        await pre_iteration()
-    response = await manager.run_thread(settings)
-    print(f"Thread run response: {response}")
+        system_message = {"role": "system", "content": "You are a helpful assistant that can create, read, update, and delete files."}
+        model_name = "gpt-4o"
+        response = await thread_manager.run_thread(
+            thread_id=thread_id,
+            system_message=system_message,
+            model_name=model_name,
+            temperature=0.7,
+            max_tokens=150,
+            tool_choice="auto",
+            additional_message=None,
+            execute_tools_async=False,
+            execute_model_tool_calls=True
+        )
+        await after_iteration()
+    await finalizer()
-    messages = await manager.list_messages(thread_id)
-    print("\nFinal conversation:")
-    for msg in messages:
-        print(f"{msg['role'].capitalize()}: {msg['content']}")
 if __name__ == "__main__":
-    asyncio.run(run_agent())
+    async def main():
+        thread_manager = ThreadManager()
+        thread_id = await thread_manager.create_thread()
+        await thread_manager.add_message(thread_id, {"role": "user", "content": "Please create a file with a random name with the content 'Hello, world!'"})
+        thread_manager.add_tool(FilesTool)
+        await run_agent(
+            thread_manager=thread_manager,
+            thread_id=thread_id,
+            max_iterations=5
+        )
+    asyncio.run(main())
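The new `run_agent` above is a bounded loop with lifecycle hooks (`init`, `pre_iteration`, `after_iteration`, `finalizer`) around each `run_thread` call. That control flow can be exercised without the framework; `fake_run_thread` below is a stand-in for `ThreadManager.run_thread`, not the real API:

```python
import asyncio

events = []

async def fake_run_thread(**kwargs):
    # Stand-in for ThreadManager.run_thread: record the call, return a canned reply.
    events.append("run_thread")
    return {"role": "assistant", "content": "ok"}

async def run_agent(max_iterations: int = 10):
    async def init():
        events.append("init")

    async def pre_iteration():
        events.append("pre")

    async def after_iteration():
        events.append("after")

    async def finalizer():
        events.append("final")

    await init()
    iteration = 0
    while iteration < max_iterations:
        iteration += 1
        await pre_iteration()
        await fake_run_thread(model_name="gpt-4o", temperature=0.7)
        await after_iteration()
    await finalizer()

asyncio.run(run_agent(max_iterations=2))
print(events)  # ['init', 'pre', 'run_thread', 'after', 'pre', 'run_thread', 'after', 'final']
```

The hooks fire in a fixed order around every iteration, which is where an agent would inject follow-up messages (as `after_iteration` does in agent.py) or adjust settings per iteration.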

llm.py
@@ -26,7 +26,7 @@ os.environ['GROQ_API_KEY'] = GROQ_API_KEY
 logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)
-async def make_llm_api_call(messages, model_name, json_mode=False, temperature=0, max_tokens=None, tools=None, tool_choice="auto", api_key=None, api_base=None, agentops_session=None, stream=False, top_p=None, response_format=None) -> Union[Dict[str, Any], str]:
+async def make_llm_api_call(messages, model_name, response_format=None, temperature=0, max_tokens=None, tools=None, tool_choice="auto", api_key=None, api_base=None, agentops_session=None, stream=False, top_p=None):
     litellm.set_verbose = True
     async def attempt_api_call(api_call_func, max_attempts=3):
@@ -49,7 +49,7 @@ async def make_llm_api_call(messages, model_name, json_mode=False, temperature=0
         "model": model_name,
         "messages": messages,
         "temperature": temperature,
-        "response_format": response_format or ({"type": "json_object"} if json_mode else None),
+        "response_format": response_format,
         "top_p": top_p,
         "stream": stream,
     }
@@ -129,4 +129,4 @@ if __name__ == "__main__":
     # asyncio.run(test_llm_api_call(stream=True)) # For streaming
     # asyncio.run(test_llm_api_call(stream=False)) # For non-streaming
-    asyncio.run(test_llm_api_call())
+    asyncio.run(test_llm_api_call())
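`make_llm_api_call` wraps the provider request in an `attempt_api_call(api_call_func, max_attempts=3)` retry helper. A minimal, self-contained sketch of that retry pattern follows; the body is an illustration of the idea, not llm.py's actual implementation (the real helper may back off, log, or filter exception types differently):

```python
import asyncio

async def attempt_api_call(api_call_func, max_attempts=3):
    # Retry the async call, re-raising only after the final failed attempt.
    for attempt in range(1, max_attempts + 1):
        try:
            return await api_call_func()
        except Exception:
            if attempt == max_attempts:
                raise
            await asyncio.sleep(0)  # a real implementation would back off here

calls = {"n": 0}

async def flaky():
    # Fails twice, then succeeds - simulates a transient API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = asyncio.run(attempt_api_call(flaky))
print(result, calls["n"])  # ok 3
```

Keeping the retry loop in one helper means every call site (streaming or not) gets the same failure handling.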

main.db (binary file)
Binary file not shown.

files_tool.py
@@ -1,7 +1,7 @@
 import os
 import asyncio
 from typing import Dict, Any
-from agentpress.tool import Tool, ToolResult
+from agentpress.tool import Tool, ToolResult, tool_schema
 from agentpress.config import settings
 class FilesTool(Tool):
@@ -10,6 +10,18 @@ class FilesTool(Tool):
         self.workspace = settings.workspace_dir
         os.makedirs(self.workspace, exist_ok=True)
+    @tool_schema({
+        "name": "create_file",
+        "description": "Create a new file in the workspace",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "file_path": {"type": "string", "description": "The relative path of the file to create"},
+                "content": {"type": "string", "description": "The content to write to the file"}
+            },
+            "required": ["file_path", "content"]
+        }
+    })
     async def create_file(self, file_path: str, content: str) -> ToolResult:
         try:
             full_path = os.path.join(self.workspace, file_path)
@@ -22,6 +34,17 @@ class FilesTool(Tool):
         except Exception as e:
             return self.fail_response(f"Error creating file: {str(e)}")
+    @tool_schema({
+        "name": "read_file",
+        "description": "Read the contents of a file in the workspace",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "file_path": {"type": "string", "description": "The relative path of the file to read"}
+            },
+            "required": ["file_path"]
+        }
+    })
     async def read_file(self, file_path: str) -> ToolResult:
         try:
             full_path = os.path.join(self.workspace, file_path)
@@ -31,6 +54,18 @@ class FilesTool(Tool):
         except Exception as e:
             return self.fail_response(f"Error reading file: {str(e)}")
+    @tool_schema({
+        "name": "update_file",
+        "description": "Update the contents of a file in the workspace",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "file_path": {"type": "string", "description": "The relative path of the file to update"},
+                "content": {"type": "string", "description": "The new content to write to the file"}
+            },
+            "required": ["file_path", "content"]
+        }
+    })
     async def update_file(self, file_path: str, content: str) -> ToolResult:
         try:
             full_path = os.path.join(self.workspace, file_path)
@@ -40,6 +75,18 @@ class FilesTool(Tool):
         except Exception as e:
             return self.fail_response(f"Error updating file: {str(e)}")
+    @tool_schema({
+        "name": "delete_file",
+        "description": "Delete a file from the workspace",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "file_path": {"type": "string", "description": "The relative path of the file to delete"}
+            },
+            "required": ["file_path"]
+        }
+    })
     async def delete_file(self, file_path: str) -> ToolResult:
         try:
             full_path = os.path.join(self.workspace, file_path)
@@ -48,74 +95,6 @@ class FilesTool(Tool):
         except Exception as e:
             return self.fail_response(f"Error deleting file: {str(e)}")
-    def get_schemas(self) -> Dict[str, Dict[str, Any]]:
-        schemas = {
-            "create_file": {
-                "name": "create_file",
-                "description": "Create a new file in the workspace",
-                "parameters": {
-                    "type": "object",
-                    "properties": {
-                        "file_path": {
-                            "type": "string",
-                            "description": "The relative path of the file to create"
-                        },
-                        "content": {
-                            "type": "string",
-                            "description": "The content to write to the file"
-                        }
-                    },
-                    "required": ["file_path", "content"]
-                }
-            },
-            "read_file": {
-                "name": "read_file",
-                "description": "Read the contents of a file in the workspace",
-                "parameters": {
-                    "type": "object",
-                    "properties": {
-                        "file_path": {
-                            "type": "string",
-                            "description": "The relative path of the file to read"
-                        }
-                    },
-                    "required": ["file_path"]
-                }
-            },
-            "update_file": {
-                "name": "update_file",
-                "description": "Update the contents of a file in the workspace",
-                "parameters": {
-                    "type": "object",
-                    "properties": {
-                        "file_path": {
-                            "type": "string",
-                            "description": "The relative path of the file to update"
-                        },
-                        "content": {
-                            "type": "string",
-                            "description": "The new content to write to the file"
-                        }
-                    },
-                    "required": ["file_path", "content"]
-                }
-            },
-            "delete_file": {
-                "name": "delete_file",
-                "description": "Delete a file from the workspace",
-                "parameters": {
-                    "type": "object",
-                    "properties": {
-                        "file_path": {
-                            "type": "string",
-                            "description": "The relative path of the file to delete"
-                        }
-                    },
-                    "required": ["file_path"]
-                }
-            }
-        }
-        return {name: self.format_schema(schema) for name, schema in schemas.items()}
 if __name__ == "__main__":
     async def test_files_tool():
@@ -150,4 +129,4 @@ if __name__ == "__main__":
         read_deleted_result = await files_tool.read_file(test_file_path)
         print("Read deleted file result:", read_deleted_result)
-    asyncio.run(test_files_tool())
+    asyncio.run(test_files_tool())
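This commit replaces the hand-written `get_schemas` dictionary with a `@tool_schema` decorator on each method. One plausible way such a decorator can work is to attach the schema to the function and let the base class collect decorated methods by introspection; the sketch below is an assumption about the mechanism, not the actual `agentpress/tool.py` source:

```python
from typing import Any, Callable, Dict

def tool_schema(schema: Dict[str, Any]) -> Callable:
    # Hypothetical decorator: attach an OpenAI-style function schema to the method.
    def decorator(func: Callable) -> Callable:
        func.tool_schema = schema
        return func
    return decorator

class Tool:
    def get_schemas(self) -> Dict[str, Dict[str, Any]]:
        # Collect the schema from every decorated method via introspection.
        return {
            name: attr.tool_schema
            for name in dir(self)
            if callable(attr := getattr(self, name)) and hasattr(attr, "tool_schema")
        }

class FilesTool(Tool):
    @tool_schema({
        "name": "create_file",
        "description": "Create a new file in the workspace",
        "parameters": {
            "type": "object",
            "properties": {"file_path": {"type": "string"}},
            "required": ["file_path"],
        },
    })
    async def create_file(self, file_path: str, content: str = ""):
        pass

print(sorted(FilesTool().get_schemas()))  # ['create_file']
```

Keeping each schema next to the method it describes removes the duplication that made the old 70-line `get_schemas` easy to let drift out of sync with the code.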

tools/example_tool.py (deleted file)
@@ -1,37 +0,0 @@
-from typing import Dict, Any
-from agentpress.tool import Tool, ToolResult
-class ExampleTool(Tool):
-    description = "An example tool for demonstration purposes."
-    def __init__(self):
-        super().__init__()
-    async def example_function(self, input_text: str) -> ToolResult:
-        try:
-            processed_text = input_text.upper()
-            return self.success_response({
-                "original_text": input_text,
-                "processed_text": processed_text
-            })
-        except Exception as e:
-            return self.fail_response(f"Error processing input: {str(e)}")
-    def get_schemas(self) -> Dict[str, Dict[str, Any]]:
-        schemas = {
-            "example_function": {
-                "name": "example_function",
-                "description": "An example function that demonstrates the usage of the Tool class",
-                "parameters": {
-                    "type": "object",
-                    "properties": {
-                        "input_text": {
-                            "type": "string",
-                            "description": "The text to be processed by the example function"
-                        }
-                    },
-                    "required": ["input_text"]
-                }
-            }
-        }
-        return {name: self.format_schema(schema) for name, schema in schemas.items()}

(deleted file)
@@ -1,5 +0,0 @@
-Artificial Intelligence (AI) is a branch of computer science that aims to create systems capable of performing tasks that would typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding.
-
-AI can be categorized into Narrow AI, designed for a specific task like facial recognition, and General AI, which should be able to perform any intellectual task a human can do.
-
-AI has a broad range of applications, from healthcare (where it can predict disease outbreaks) to finance (where it can be used to detect fraudulent transactions).
(deleted file)
@@ -1,14 +0,0 @@
-from flask import Flask, render_template
-
-app = Flask(__name__)
-
-@app.route('/')
-def home():
-    return render_template('index.html')
-
-@app.route('/about')
-def about():
-    return render_template('about.html')
-
-if __name__ == '__main__':
-    app.run(debug=True)

(deleted file)
@@ -1,10 +0,0 @@
-from flask import Flask, render_template
-
-app = Flask(__name__)
-
-@app.route('/')
-def home():
-    return render_template('index.html')
-
-if __name__ == '__main__':
-    app.run(debug=True)