commit 08695994c7
marko-kraemer, 2025-04-16 09:46:52 +01:00
10 changed files with 945 additions and 38 deletions

LICENSE Normal file (+201 lines)

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md Normal file (+253 lines)

@@ -0,0 +1,253 @@
<div align="center">
# Suna - Open Source Generalist AI Agent
(that acts on your behalf)
![Suna Screenshot](docs/images/suna_screenshot.png)
Suna is a fully open source AI assistant that helps you accomplish real-world tasks with ease. Through natural conversation, Suna becomes your digital companion for research, data analysis, and everyday challenges—combining powerful capabilities with an intuitive interface that understands what you need and delivers results.
Suna's powerful toolkit includes seamless browser automation to navigate the web and extract data, file management for document creation and editing, web crawling and extended search capabilities, command-line execution for system tasks, website deployment, and integration with various APIs and services. These capabilities work together harmoniously, allowing Suna to solve your complex problems and automate workflows through simple conversations!
[![License](https://img.shields.io/badge/License-Apache--2.0-blue)](./LICENSE)
[![Discord Follow](https://dcbadge.limes.pink/api/server/Py6pCBUUPw?style=flat)](https://discord.gg/Py6pCBUUPw)
[![Twitter Follow](https://img.shields.io/twitter/follow/kortixai)](https://x.com/kortixai)
[![GitHub Repo stars](https://img.shields.io/github/stars/kortix-ai/suna)](https://github.com/kortix-ai/suna)
[![Issues](https://img.shields.io/github/issues/kortix-ai/suna)](https://github.com/kortix-ai/suna/labels/bug)
</div>
## Table of Contents
- [Project Architecture](#project-architecture)
- [Backend API](#backend-api)
- [Frontend](#frontend)
- [Agent Docker](#agent-docker)
- [Supabase Database](#supabase-database)
- [Run Locally / Self-Hosting](#run-locally--self-hosting)
- [Requirements](#requirements)
- [Prerequisites](#prerequisites)
- [Installation Steps](#installation-steps)
- [License](#license)
## Project Architecture
![Architecture Diagram](docs/images/architecture_diagram.svg)
Suna consists of four main components:
### Backend API
Python/FastAPI service that handles REST endpoints, thread management, and LLM integration with OpenAI, Anthropic, and others via LiteLLM.
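For a sense of how LiteLLM abstracts providers, here is a minimal sketch (not Suna's actual code; it assumes the matching provider key, e.g. `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, is set in your environment):

```python
from litellm import completion

# One call shape for every provider; the model string selects the backend
response = completion(
    model="anthropic/claude-3-7-sonnet-latest",  # or "gpt-4o"
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```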
### Frontend
Next.js/React application providing a responsive UI with chat interface, dashboard, etc.
### Agent Docker
Isolated execution environment for every agent - with browser automation, code interpreter, file system access, tool integration, and security features.
### Supabase Database
Handles data persistence with authentication, user management, conversation history, file storage, agent state, analytics, and real-time subscriptions.
## Use Cases
1. **Competitor Analysis** ([Watch](https://suna.so/use-case/competitor-analysis)) - *"Analyze the market for my next company in the healthcare industry, located in the UK. Give me the major players, their market size, strengths, and weaknesses, and add their website URLs. Once done, generate a PDF report."*
2. **VC List** ([Watch](https://suna.so/use-case/vc-list)) - *"Give me the list of the most important VC Funds in the United States based on Assets Under Management. Give me website URLs and, if possible, an email address to reach them."*
3. **Insurance Policies** ([Watch](https://suna.so/use-case/insurance-policies)) - *"Find me the best-priced insurance policy for my house, located in Milan, Italy. Scrape Italian companies in the home insurance market."*
4. **Looking for Candidates** ([Watch](https://suna.so/use-case/looking-for-candidates)) - *"Go on LinkedIn, and find me 10 available profiles - people who are not currently working - for a junior software engineer position, located in Munich, Germany. They should have at least a bachelor's degree in Computer Science or a related field, and 1 year of experience in any field/role."*
5. **Writing Report** ([Watch](https://suna.so/use-case/writing-report)) - *"Write me a detailed report about what's happened in the US stock market in the last 2 weeks. Analyze the S&P 500 trend, and tell me what the market is expecting to see in the upcoming weeks. This is a report analysis for a Bank CFO."*
6. **Product Reviews** ([Watch](https://suna.so/use-case/product-reviews)) - *"Go on Amazon, and find me the most common product issues related to the Nespresso Machine - you can find them by reading the reviews. Once done, write me a short report about common issues that could be converted into competitive advantage for a new Nespresso Machine."*
7. **Game Generation** ([Watch](https://suna.so/use-case/game-generation)) - *"Generate a Mini Game where the Player needs to drive a spaceship and fight against interstellar aliens. The aliens should be green, while the main player should be white. Make it with a '90s style."*
8. **Planning Company Trip** ([Watch](https://suna.so/use-case/planning-company-trip)) - *"Generate me a route plan for my company. We should go to California. We'll be 8 people. Compose the trip from the departure (Paris, France) to the activities we can do, considering that the trip will be 7 days long - departure on the 21st of Apr 2025. Check the weather forecast and temperature for the upcoming days, and based on that, you can plan our activities (outdoor vs indoor)."*
9. **Working on Excel** ([Watch](https://suna.so/use-case/working-on-excel)) - *"My company asked me to set up an Excel spreadsheet with all the information about Italian lottery games (Lotto, 10eLotto, and Million Day). Based on that, generate and send me a spreadsheet with all the basic information (public ones)."*
10. **Scraping Databases** ([Watch](https://suna.so/use-case/scraping-databases)) - *"Search for open tender databases (e.g. EU TED, US SAM.gov), find relevant procurement calls in the clean tech industry, summarize requirements, and generate a report about it."*
11. **Automate Event Speaker Prospecting** ([Watch](https://suna.so/use-case/automate-event-speaker-prospecting)) - *"Find 20 AI ethics speakers from Europe who've spoken at conferences in the past year. Scrape conference sites, cross-reference LinkedIn and YouTube, and output contact info + talk summaries."*
12. **Summarize and Cross-Reference Scientific Papers** ([Watch](https://suna.so/use-case/summarize-cross-reference-scientific-papers)) - *"Research and compare scientific papers from the last 5 years about alcohol's effects on our bodies. Generate a report about the most important scientific papers on this topic."*
13. **Generating Leads** ([Watch](https://suna.so/use-case/generating-leads)) - *"I need to generate at least 20 B2B leads to reach out for my new AI tool. It's a customer support tool, then I'll need to have companies located in Spain, Barcelona, with 10-50 employees (find a way to get the number of employees). List me their names, websites, size (employees), and contact information if public."*
14. **Research + First Contact Draft** ([Watch](https://suna.so/use-case/research-first-contact-draft)) - *"Research my potential customers (B2B) on LinkedIn. They should be in the clean tech industry. Find their websites and their email addresses. After that, based on the company profile, generate a personalized first contact email where I present my company which is offering consulting services to cleantech companies to maximize their profits and reduce their costs."*
15. **SEO Analysis** ([Watch](https://suna.so/use-case/seo-analysis)) - *"Based on my website suna.so, generate an SEO report analysis, find top-ranking pages by keyword clusters, and identify topics I'm missing."*
16. **Clustering Public Reviews** ([Watch](https://suna.so/use-case/clustering-public-reviews)) - *"Cluster public reviews for McDonald's by scraping public pages like Google Reviews, then generate a detailed report about the most common feedback and reviews (from 5 to 1 star). Generate clusters to obtain insights about what can be improved and what is producing good results for the McDonald's brand."*
17. **Generate a Personal Trip** ([Watch](https://suna.so/use-case/generate-personal-trip)) - *"Generate a personal trip to London, with departure from Bangkok on the 1st of May. The trip will last 10 days. Find an accommodation in the center of London, with a rating on Google reviews of at least 4.5. Find me interesting outdoor activities to do during the journey. Generate a detailed itinerary plan."*
18. **Scrape and Monitor Stocks** ([Watch](https://suna.so/use-case/scrape-monitor-stocks)) - *"I want to monitor the 10 biggest public companies in Portugal. Scrape them on the internet, and find the public ones with the price/share available in the last 30 trading days. Generate a report based on the data you find."*
19. **Recently Funded Startups** ([Watch](https://suna.so/use-case/recently-funded-startups)) - *"Go on Crunchbase, Dealroom, and TechCrunch, filter by Series A funding rounds in the SaaS Finance Space, and build a report with company data, founders, and contact info for outbound sales."*
20. **Scrape Forum Discussions** ([Watch](https://suna.so/use-case/scrape-forum-discussions)) - *"I need to find the best beauty centers in Rome, but I want to find them by using open forums that speak about this topic. Go on Google, and scrape the forums by looking for beauty center discussions located in Rome. Then generate a list of 5 beauty centers with the best comments about them."*
## Run Locally / Self-Hosting
Suna can be self-hosted on your own infrastructure. Follow these steps to set up your own instance.
### Requirements
You'll need the following components:
- A Supabase project for database and authentication
- Redis database for caching and session management
- Daytona sandbox for secure agent execution
- Python 3.11 for the API backend
- API keys for LLM providers (OpenAI or Anthropic)
- (Optional but recommended) EXA API key for enhanced search capabilities
### Prerequisites
1. **Supabase**:
- Create a new [Supabase project](https://supabase.com/dashboard/projects)
- Save your project's API URL, anon key, and service role key for later use
- Install the [Supabase CLI](https://supabase.com/docs/guides/cli/getting-started)
2. **Redis**: Set up a Redis instance using one of these options:
- [Upstash Redis](https://upstash.com/) (recommended for cloud deployments)
- Local installation:
- [Mac](https://formulae.brew.sh/formula/redis): `brew install redis`
- [Linux](https://redis.io/docs/getting-started/installation/install-redis-on-linux/): Follow distribution-specific instructions
- [Windows](https://redis.io/docs/getting-started/installation/install-redis-on-windows/): Use WSL2 or Docker
- Save your Redis connection details for later use
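To confirm the instance is reachable before continuing, here is a quick check with `redis-py` (a sketch using the same variables the backend expects; run `pip install redis` first):
```python
import os
import redis

client = redis.Redis(
    host=os.getenv("REDIS_HOST", "localhost"),
    port=int(os.getenv("REDIS_PORT", "6379")),
    password=os.getenv("REDIS_PASSWORD") or None,
    ssl=os.getenv("REDIS_SSL", "True").lower() == "true",
)
print(client.ping())  # True means your connection details work
```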
3. **Daytona**:
- Create an account on [Daytona](https://www.daytona.io/)
- Generate an API key from your account settings
- Go to [Images](https://app.daytona.io/dashboard/images)
- Click "Add Image"
- Enter `adamcohenhillel/kortix-suna:0.0.13` as the image name
- Set `exec /usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord.conf` as the Entrypoint
4. **LLM API Keys**:
- Obtain an API key from [OpenAI](https://platform.openai.com/) or [Anthropic](https://www.anthropic.com/)
- While other providers should work via [LiteLLM](https://github.com/BerriAI/litellm), OpenAI and Anthropic are recommended
5. **Search API Key** (Optional):
- For enhanced search capabilities, obtain an [Exa API key](https://dashboard.exa.ai/playground)
6. **RapidAPI API Key** (Optional):
- To enable API services such as LinkedIn, you'll need a RapidAPI key
- Each service requires individual activation in your RapidAPI account:
1. Locate the service's `base_url` in its corresponding file (e.g., `"https://linkedin-data-scraper.p.rapidapi.com"` in [`backend/agent/tools/api_services/LinkedInService.py`](backend/agent/tools/api_services/LinkedInService.py))
2. Visit that specific API on the RapidAPI marketplace
3. Subscribe to the service (many offer free tiers with limited requests)
4. Once subscribed, the service will be available to your agent through the API Services tool
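Once subscribed, you can verify the key works with a direct request. This sketch targets the LinkedIn scraper's `person` route described above; the profile URL is a placeholder:
```python
import os
import requests

url = "https://linkedin-data-scraper.p.rapidapi.com/person"
headers = {
    "x-rapidapi-key": os.environ["RAPID_API_KEY"],
    "x-rapidapi-host": "linkedin-data-scraper.p.rapidapi.com",
    "Content-Type": "application/json",
}
# A 200 response confirms the subscription is active
resp = requests.post(url, json={"link": "https://www.linkedin.com/in/example/"}, headers=headers)
print(resp.status_code)
```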
### Installation Steps
1. **Clone the repository**:
```bash
git clone https://github.com/kortix-ai/suna.git
cd suna
```
2. **Configure backend environment**:
```bash
cd backend
cp .env.example .env # Create from example if available, or use the following template
```
Edit the `.env` file and fill in your credentials:
```bash
NEXT_PUBLIC_URL="http://localhost:3000"
# Supabase credentials from step 1
SUPABASE_URL=your_supabase_url
SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key
# Redis credentials from step 2
REDIS_HOST=your_redis_host
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
REDIS_SSL=True # Set to False for local Redis without SSL
# Daytona credentials from step 3
DAYTONA_API_KEY=your_daytona_api_key
DAYTONA_SERVER_URL="https://app.daytona.io/api"
DAYTONA_TARGET="us"
# Anthropic or OpenAI:
# Anthropic
ANTHROPIC_API_KEY=
MODEL_TO_USE="anthropic/claude-3-7-sonnet-latest"
# OR OpenAI API:
OPENAI_API_KEY=your_openai_api_key
MODEL_TO_USE="gpt-4o"
# Optional but recommended
EXA_API_KEY=your_exa_api_key # Optional
RAPID_API_KEY=
```
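Before moving on, it can help to confirm nothing required is missing. A quick sanity-check sketch (trim the `required` list to the providers you actually configured):
```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads backend/.env

required = [
    "SUPABASE_URL", "SUPABASE_ANON_KEY", "SUPABASE_SERVICE_ROLE_KEY",
    "REDIS_HOST", "DAYTONA_API_KEY", "MODEL_TO_USE",
]
missing = [key for key in required if not os.getenv(key)]
print("Missing:", missing or "none")
```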
3. **Set up Supabase database**:
```bash
# Login to Supabase CLI
supabase login
# Link to your project (find your project reference in the Supabase dashboard)
supabase link --project-ref your_project_reference_id
# Push database migrations
supabase db push
```
4. **Configure frontend environment**:
```bash
cd ../frontend
cp .env.example .env.local # Create from example if available, or use the following template
```
Edit the `.env.local` file:
```
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
NEXT_PUBLIC_BACKEND_URL="http://localhost:8000/api"
NEXT_PUBLIC_URL="http://localhost:3000"
```
5. **Install dependencies**:
```bash
# Install frontend dependencies
cd frontend
npm install
# Install backend dependencies
cd ../backend
pip install -r requirements.txt
```
6. **Start the application**:
In one terminal, start the frontend:
```bash
cd frontend
npm run dev
```
In another terminal, start the backend:
```bash
cd backend
python api.py
```
7. **Access Suna**:
- Open your browser and navigate to `http://localhost:3000`
- Sign up for an account using Supabase authentication
- Start using your self-hosted Suna instance!
## License
Kortix Suna is licensed under the Apache License, Version 2.0. See [LICENSE](./LICENSE) for the full license text.

backend/.env.example

@@ -20,4 +20,5 @@
 AWS_REGION_NAME=
 DAYTONA_API_KEY=
 DAYTONA_SERVER_URL=
-DAYTONA_TARGET=
+DAYTONA_TARGET=
+MODEL_TO_USE="gpt-4o"


@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2024 Kortix AI Corp
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

backend/agent/run.py

@@ -1,3 +1,4 @@
+import os
 import json
 from uuid import uuid4
 from typing import Optional, List, Any
@@ -13,10 +14,10 @@ from agentpress.response_processor import ProcessorConfig
 from agent.tools.sb_shell_tool import SandboxShellTool
 from agent.tools.sb_files_tool import SandboxFilesTool
 from agent.tools.sb_browser_tool import SandboxBrowserTool
+from agent.tools.api_services_tool import APIServicesTool
 from agent.prompt import get_system_prompt
-from sandbox.sandbox import daytona, create_sandbox, get_or_start_sandbox
+from sandbox.sandbox import create_sandbox, get_or_start_sandbox
 from utils.billing import check_billing_status, get_account_id_from_thread
 from utils.db import update_agent_run_status
 
 load_dotenv()
@@ -68,7 +69,12 @@ async def run_agent(thread_id: str, project_id: str, stream: bool = True, thread
     thread_manager.add_tool(SandboxBrowserTool, sandbox=sandbox, thread_id=thread_id, thread_manager=thread_manager)
     thread_manager.add_tool(SandboxDeployTool, sandbox=sandbox)
     thread_manager.add_tool(MessageTool)
-    thread_manager.add_tool(WebSearchTool)
+    if os.getenv("EXA_API_KEY"):
+        thread_manager.add_tool(WebSearchTool)
+    if os.getenv("RAPID_API_KEY"):
+        thread_manager.add_tool(APIServicesTool)
 
     xml_examples = ""
     for tag_name, example in thread_manager.tool_registry.get_xml_examples().items():
@@ -76,18 +82,6 @@
     system_message = { "role": "system", "content": get_system_prompt() + "\n\n" + f"<tool_examples>\n{xml_examples}\n</tool_examples>" }
 
-    model_name = "anthropic/claude-3-7-sonnet-latest"
-    # model_name = "groq/llama-3.3-70b-versatile"
-    # model_name = "openrouter/qwen/qwen2.5-vl-72b-instruct"
-    # model_name = "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
-    # model_name = "anthropic/claude-3-5-sonnet-latest"
-    # model_name = "anthropic/claude-3-7-sonnet-latest"
-    # model_name = "openai/gpt-4o"
-    # model_name = "groq/deepseek-r1-distill-llama-70b"
-    # model_name = "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
-    # model_name = "bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0"
-    # model_name = "bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
 
     iteration_count = 0
     continue_execution = True
@@ -151,7 +145,7 @@ async def run_agent(thread_id: str, project_id: str, stream: bool = True, thread
         thread_id=thread_id,
         system_prompt=system_message,
         stream=stream,
-        llm_model=model_name,
+        llm_model=os.getenv("MODEL_TO_USE", "anthropic/claude-3-7-sonnet-latest"),
         llm_temperature=0,
         llm_max_tokens=128000,
         tool_choice="auto",

backend/agent/tools/api_services/APIServicesBase.py Normal file

@@ -0,0 +1,61 @@
import os
import requests
from typing import Dict, Any, Optional, TypedDict, Literal
class EndpointSchema(TypedDict):
    route: str
    method: Literal['GET', 'POST']
    name: str
    description: str
    payload: Dict[str, Any]

class APIServicesBase:
    def __init__(self, base_url: str, endpoints: Dict[str, EndpointSchema]):
        self.base_url = base_url
        self.endpoints = endpoints

    def get_endpoints(self):
        return self.endpoints

    def call_endpoint(
        self,
        route: str,
        payload: Optional[Dict[str, Any]] = None
    ):
        """
        Call a registered API endpoint.

        Args:
            route (str): Key of the endpoint in self.endpoints
            payload (dict, optional): Query parameters for GET requests,
                or the JSON body for POST requests

        Returns:
            dict: The JSON response from the API
        """
        if route.startswith("/"):
            route = route[1:]
        endpoint = self.endpoints.get(route)
        if not endpoint:
            raise ValueError(f"Endpoint {route} not found")
        url = f"{self.base_url}{endpoint['route']}"
        headers = {
            "x-rapidapi-key": os.getenv("RAPID_API_KEY"),
            "x-rapidapi-host": url.split("//")[1].split("/")[0],
            "Content-Type": "application/json"
        }
        method = endpoint.get('method', 'GET').upper()
        if method == 'GET':
            response = requests.get(url, params=payload, headers=headers)
        elif method == 'POST':
            response = requests.post(url, json=payload, headers=headers)
        else:
            raise ValueError(f"Unsupported HTTP method: {method}")
        return response.json()
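A usage sketch for the class above. The endpoint definition mirrors LinkedInService's `person` entry; the profile URL is a placeholder and `RAPID_API_KEY` must be set in the environment:

```python
endpoints = {
    "person": {
        "route": "/person",
        "method": "POST",
        "name": "Person Data",
        "description": "Fetches profile data",
        "payload": {"link": "LinkedIn Profile URL"},
    }
}
service = APIServicesBase("https://linkedin-data-scraper.p.rapidapi.com", endpoints)
result = service.call_endpoint("person", {"link": "https://www.linkedin.com/in/example/"})
print(result)
```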

backend/agent/tools/api_services/LinkedInService.py Normal file

@@ -0,0 +1,250 @@
from typing import Dict
from agent.tools.api_services.APIServicesBase import APIServicesBase, EndpointSchema
class LinkedInService(APIServicesBase):
    def __init__(self):
        endpoints: Dict[str, EndpointSchema] = {
            "person": {
                "route": "/person",
                "method": "POST",
                "name": "Person Data",
                "description": "Fetches a LinkedIn profile's data, including skills, certificates, experiences, qualifications, and more.",
                "payload": {
                    "link": "LinkedIn Profile URL"
                }
            },
            "person_urn": {
                "route": "/person_urn",
                "method": "POST",
                "name": "Person Data (Using Urn)",
                "description": "Takes a profile URN instead of the profile's public identifier as input.",
                "payload": {
                    "link": "LinkedIn Profile URL or URN"
                }
            },
            "person_deep": {
                "route": "/person_deep",
                "method": "POST",
                "name": "Person Data (Deep)",
                "description": "Fetches all experiences, education, skills, languages, publications, and more related to a profile.",
                "payload": {
                    "link": "LinkedIn Profile URL"
                }
            },
            "profile_updates": {
                "route": "/profile_updates",
                "method": "GET",
                "name": "Person Posts (WITH PAGINATION)",
                "description": "Fetches posts of a LinkedIn profile along with reactions, comments, postLink, and reposts data.",
                "payload": {
                    "profile_url": "LinkedIn Profile URL",
                    "page": "Page number",
                    "reposts": "Include reposts (1 or 0)",
                    "comments": "Include comments (1 or 0)"
                }
            },
            "profile_recent_comments": {
                "route": "/profile_recent_comments",
                "method": "POST",
                "name": "Person Recent Activity (Comments on Posts)",
                "description": "Fetches the 20 most recent comments posted by a LinkedIn user (per page).",
                "payload": {
                    "profile_url": "LinkedIn Profile URL",
                    "page": "Page number",
                    "paginationToken": "Token for pagination"
                }
            },
            "comments_from_recent_activity": {
                "route": "/comments_from_recent_activity",
                "method": "GET",
                "name": "Comments from recent activity",
                "description": "Fetches recent comments posted by a person, as per their recent activity tab.",
                "payload": {
                    "profile_url": "LinkedIn Profile URL",
                    "page": "Page number"
                }
            },
            "person_skills": {
                "route": "/person_skills",
                "method": "POST",
                "name": "Person Skills",
                "description": "Scrapes all skills of a LinkedIn user.",
                "payload": {
                    "link": "LinkedIn Profile URL"
                }
            },
            "email_to_linkedin_profile": {
                "route": "/email_to_linkedin_profile",
                "method": "POST",
                "name": "Email to LinkedIn Profile",
                "description": "Finds LinkedIn profile associated with an email address",
                "payload": {
                    "email": "Email address to search"
                }
            },
            "company": {
                "route": "/company",
                "method": "POST",
                "name": "Company Data",
                "description": "Fetches LinkedIn company profile data",
                "payload": {
                    "link": "LinkedIn Company URL"
                }
            },
            "web_domain": {
                "route": "/web-domain",
                "method": "POST",
                "name": "Web Domain to Company",
                "description": "Fetches LinkedIn company profile data from a web domain",
                "payload": {
                    "link": "Website domain (e.g., huzzle.app)"
                }
            },
            "similar_profiles": {
                "route": "/similar_profiles",
                "method": "GET",
                "name": "Similar Profiles",
                "description": "Fetches profiles similar to a given LinkedIn profile",
                "payload": {
                    "profileUrl": "LinkedIn Profile URL"
                }
            },
            "company_jobs": {
                "route": "/company_jobs",
                "method": "POST",
                "name": "Company Jobs",
                "description": "Fetches job listings from a LinkedIn company page",
                "payload": {
                    "company_url": "LinkedIn Company URL",
                    "count": "Number of job listings to fetch"
                }
            },
            "company_updates": {
                "route": "/company_updates",
                "method": "GET",
                "name": "Company Posts",
                "description": "Fetches posts from a LinkedIn company page",
                "payload": {
                    "company_url": "LinkedIn Company URL",
                    "page": "Page number",
                    "reposts": "Include reposts (0, 1, or 2)",
                    "comments": "Include comments (0, 1, or 2)"
                }
            },
            "company_employee": {
                "route": "/company_employee",
                "method": "GET",
                "name": "Company Employees",
                "description": "Fetches employees of a LinkedIn company using company ID",
                "payload": {
                    "companyId": "LinkedIn Company ID",
                    "page": "Page number"
                }
            },
            "company_updates_post": {
                "route": "/company_updates",
                "method": "POST",
                "name": "Company Posts (POST)",
                "description": "Fetches posts from a LinkedIn company page with specific count parameters",
                "payload": {
                    "company_url": "LinkedIn Company URL",
                    "posts": "Number of posts to fetch",
                    "comments": "Number of comments to fetch per post",
                    "reposts": "Number of reposts to fetch"
                }
            },
            "search_posts_with_filters": {
                "route": "/search_posts_with_filters",
                "method": "GET",
                "name": "Search Posts With Filters",
                "description": "Searches LinkedIn posts with various filtering options",
                "payload": {
                    "query": "Keywords/Search terms (text you put in LinkedIn search bar)",
                    "page": "Page number (1-100, each page contains 20 results)",
                    "sort_by": "Sort method: 'relevance' (Top match) or 'date_posted' (Latest)",
                    "author_job_title": "Filter by job title of author (e.g., CEO)",
                    "content_type": "Type of content post contains (photos, videos, liveVideos, collaborativeArticles, documents)",
                    "from_member": "URN of person who posted (comma-separated for multiple)",
                    "from_organization": "ID of organization who posted (comma-separated for multiple)",
                    "author_company": "ID of company author works for (comma-separated for multiple)",
                    "author_industry": "URN of industry author is connected with (comma-separated for multiple)",
                    "mentions_member": "URN of person mentioned in post (comma-separated for multiple)",
                    "mentions_organization": "ID of organization mentioned in post (comma-separated for multiple)"
                }
            },
            "search_jobs": {
                "route": "/search_jobs",
                "method": "GET",
                "name": "Search Jobs",
                "description": "Searches LinkedIn jobs with various filtering options",
                "payload": {
                    "query": "Job search keywords (e.g., Software developer)",
                    "page": "Page number",
                    "searchLocationId": "Location ID for job search (get from Suggestion location endpoint)",
                    "easyApply": "Filter for easy apply jobs (true or false)",
                    "experience": "Experience level required (1=Internship, 2=Entry level, 3=Associate, 4=Mid senior, 5=Director, 6=Executive, comma-separated)",
                    "jobType": "Job type (F=Full time, P=Part time, C=Contract, T=Temporary, V=Volunteer, I=Internship, O=Other, comma-separated)",
                    "postedAgo": "Time jobs were posted in seconds (e.g., 3600 for past hour)",
                    "workplaceType": "Workplace type (1=On-Site, 2=Remote, 3=Hybrid, comma-separated)",
                    "sortBy": "Sort method (DD=most recent, R=most relevant)",
                    "companyIdsList": "List of company IDs, comma-separated",
                    "industryIdsList": "List of industry IDs, comma-separated",
                    "functionIdsList": "List of function IDs, comma-separated",
                    "titleIdsList": "List of job title IDs, comma-separated",
                    "locationIdsList": "List of location IDs within specified searchLocationId country, comma-separated"
                }
            },
            "search_people_with_filters": {
                "route": "/search_people_with_filters",
                "method": "POST",
                "name": "Search People With Filters",
                "description": "Searches LinkedIn profiles with detailed filtering options",
                "payload": {
                    "keyword": "General search keyword",
                    "page": "Page number",
                    "title_free_text": "Job title to filter by (e.g., CEO)",
                    "company_free_text": "Company name to filter by",
                    "first_name": "First name of person",
                    "last_name": "Last name of person",
                    "current_company_list": "List of current companies (comma-separated IDs)",
                    "past_company_list": "List of past companies (comma-separated IDs)",
                    "location_list": "List of locations (comma-separated IDs)",
                    "language_list": "List of languages (comma-separated)",
                    "service_catagory_list": "List of service categories (comma-separated)",
                    "school_free_text": "School name to filter by",
                    "industry_list": "List of industries (comma-separated IDs)",
                    "school_list": "List of schools (comma-separated IDs)"
                }
            },
            "search_company_with_filters": {
                "route": "/search_company_with_filters",
                "method": "POST",
                "name": "Search Company With Filters",
                "description": "Searches LinkedIn companies with detailed filtering options",
                "payload": {
                    "keyword": "General search keyword",
                    "page": "Page number",
                    "company_size_list": "List of company sizes (comma-separated, e.g., A,D)",
                    "hasJobs": "Filter companies with jobs (true or false)",
                    "location_list": "List of location IDs (comma-separated)",
                    "industry_list": "List of industry IDs (comma-separated)"
                }
            }
        }
        base_url = "https://linkedin-data-scraper.p.rapidapi.com"
        super().__init__(base_url, endpoints)

if __name__ == "__main__":
    import os
    os.environ["RAPID_API_KEY"] = ""
    tool = LinkedInService()
    result = tool.call_endpoint(
        route="comments_from_recent_activity",
        payload={"profile_url": "https://www.linkedin.com/in/adamcohenhillel/", "page": 1}
    )
    print(result)

backend/agent/tools/api_services_tool.py Normal file

@@ -0,0 +1,164 @@
import json
from agentpress.tool import Tool, ToolResult, openapi_schema, xml_schema
from agent.tools.api_services.LinkedInService import LinkedInService
class APIServicesTool(Tool):
    """Tool for making requests to various API services."""

    def __init__(self):
        super().__init__()
        self.register_apis = {
            "linkedin": LinkedInService()
        }

    @openapi_schema({
        "type": "function",
        "function": {
            "name": "get_api_service_endpoints",
            "description": "Get available endpoints for a specific API service",
            "parameters": {
                "type": "object",
                "properties": {
                    "service_name": {
                        "type": "string",
                        "description": "The name of the API service (e.g., 'linkedin')"
                    }
                },
                "required": ["service_name"]
            }
        }
    })
    @xml_schema(
        tag_name="get-api-service-endpoints",
        mappings=[
            {"param_name": "service_name", "node_type": "attribute", "path": "."}
        ],
        example='''
        <!--
        The get-api-service-endpoints tool returns available endpoints for a specific API service.
        Use this tool when you need to discover what endpoints are available.
        -->
        <!-- Example to get LinkedIn API endpoints -->
        <get-api-service-endpoints service_name="linkedin">
        </get-api-service-endpoints>
        '''
    )
    async def get_api_service_endpoints(
        self,
        service_name: str
    ) -> ToolResult:
        """
        Get available endpoints for a specific API service.

        Parameters:
        - service_name: The name of the API service (e.g., 'linkedin')
        """
        try:
            if not service_name:
                return self.fail_response("API name is required.")
            if service_name not in self.register_apis:
                return self.fail_response(f"API '{service_name}' not found. Available APIs: {list(self.register_apis.keys())}")
            endpoints = self.register_apis[service_name].get_endpoints()
            return self.success_response(endpoints)
        except Exception as e:
            error_message = str(e)
            simplified_message = f"Error getting API endpoints: {error_message[:200]}"
            if len(error_message) > 200:
                simplified_message += "..."
            return self.fail_response(simplified_message)

    @openapi_schema({
        "type": "function",
        "function": {
            "name": "execute_api_call",
            "description": "Execute a call to a specific API endpoint",
            "parameters": {
                "type": "object",
                "properties": {
                    "service_name": {
                        "type": "string",
                        "description": "The name of the API service (e.g., 'linkedin')"
                    },
                    "route": {
                        "type": "string",
                        "description": "The key of the endpoint to call"
                    },
                    "payload": {
                        "type": "object",
                        "description": "The payload to send with the API call"
                    }
                },
                "required": ["service_name", "route"]
            }
        }
    })
    @xml_schema(
        tag_name="execute-api-call",
        mappings=[
            {"param_name": "service_name", "node_type": "attribute", "path": "service_name"},
            {"param_name": "route", "node_type": "attribute", "path": "route"},
            {"param_name": "payload", "node_type": "content", "path": "."}
        ],
        example='''
        <!--
        The execute-api-call tool makes a request to a specific API endpoint.
        Use this tool when you need to call an API endpoint with specific parameters.
        The route must be a valid endpoint key obtained from the get-api-service-endpoints tool.
        -->
        <!-- Example to call the LinkedIn service with the specific route "person" -->
        <execute-api-call service_name="linkedin" route="person">
        {"link": "https://www.linkedin.com/in/johndoe/"}
        </execute-api-call>
        '''
    )
    async def execute_api_call(
        self,
        service_name: str,
        route: str,
        payload: str  # arrives as a JSON string
    ) -> ToolResult:
        """
        Execute a call to a specific API endpoint.

        Parameters:
        - service_name: The name of the API service (e.g., 'linkedin')
        - route: The key of the endpoint to call
        - payload: The payload to send with the API call, as a JSON string
        """
        try:
            payload = json.loads(payload)
            if not service_name:
                return self.fail_response("service_name is required.")
            if not route:
                return self.fail_response("route is required.")
            if service_name not in self.register_apis:
                return self.fail_response(f"API '{service_name}' not found. Available APIs: {list(self.register_apis.keys())}")
            api_service = self.register_apis[service_name]
            if route == service_name:
                return self.fail_response(f"route '{route}' must be an endpoint key, not the service name '{service_name}'. Use get-api-service-endpoints to list valid keys.")
            if route not in api_service.get_endpoints().keys():
                return self.fail_response(f"Endpoint '{route}' not found in {service_name} API.")
            result = api_service.call_endpoint(route, payload)
            return self.success_response(result)
        except Exception as e:
            error_message = str(e)
            print(error_message)
            simplified_message = f"Error executing API call: {error_message[:200]}"
            if len(error_message) > 200:
                simplified_message += "..."
            return self.fail_response(simplified_message)
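Registering an additional service amounts to subclassing APIServicesBase and adding it to `register_apis`. A sketch with a hypothetical service (the class name, route, and host below are illustrative, not a real integration):

```python
from typing import Dict
from agent.tools.api_services.APIServicesBase import APIServicesBase, EndpointSchema

class CrunchbaseService(APIServicesBase):  # hypothetical example service
    def __init__(self):
        endpoints: Dict[str, EndpointSchema] = {
            "organization": {
                "route": "/organization",
                "method": "GET",
                "name": "Organization Data",
                "description": "Fetches organization details",
                "payload": {"name": "Organization permalink"},
            }
        }
        super().__init__("https://example-crunchbase.p.rapidapi.com", endpoints)

# Then, in APIServicesTool.__init__:
# self.register_apis = {"linkedin": LinkedInService(), "crunchbase": CrunchbaseService()}
```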

File diff suppressed because one or more lines are too long

Two binary image files added (629 KiB and 972 KiB); binary content not shown.