# LiteLLM Library - Agent Guidance

## Purpose & Role
The LiteLLM library provides a unified interface for interacting with various Large Language Model (LLM) providers. It handles API communication, message formatting, response parsing, and error handling for multiple LLM services, simplifying the integration of LLMs throughout the application.

## Key Functionality
- Unified client interface for multiple LLM providers
- Message formatting and standardization
- Streaming support for real-time responses
- Error handling and retry logic
- Structured types for LLM requests and responses
- Agent message handling for agent-based workflows (see the sketch after this list)
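
As a minimal sketch of agent-style message handling, the snippet below assembles a multi-turn conversation as a `Vec<Message>` using the `Message` and `Role` types shown in the usage example further down. The `build_conversation` helper and the `Role::Assistant` variant are assumptions for illustration; check `types.rs` for the actual role variants.

```rust
use litellm::{Message, Role};

// Illustrative helper (not part of the library): collects conversation turns
// into the structured `Message` form the client expects.
// NOTE: `Role::Assistant` is assumed here; the examples in this document only
// show `System` and `User`.
fn build_conversation(turns: Vec<(Role, String)>) -> Vec<Message> {
    turns
        .into_iter()
        .map(|(role, content)| Message { role, content })
        .collect()
}

fn main() {
    let messages = build_conversation(vec![
        (Role::System, "You are a helpful assistant.".to_string()),
        (Role::User, "Summarize yesterday's sales.".to_string()),
        (Role::Assistant, "Sales were up 12% week over week.".to_string()),
        (Role::User, "Which region drove the increase?".to_string()),
    ]);
    assert_eq!(messages.len(), 4);
}
```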

## Internal Organization

### Directory Structure

```
src/
├── client.rs - LLM client implementation
├── types.rs  - Types for LLM interaction
└── lib.rs    - Public exports
```

### Key Modules

- `client`: Implements the LLM client that communicates with provider APIs
- `types`: Defines structured types for requests, responses, and configurations (a rough sketch of these shapes follows below)
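
The exact definitions live in `types.rs` and are not reproduced here; the sketch below only illustrates the general shape such a module needs, with type names assumed for illustration. The response shape mirrors the `response.choices[0].message.content` access used in the usage example below.

```rust
// Illustrative shapes only - the real definitions in `types.rs` may differ.
// The serde derives are assumptions, consistent with the serde_json dependency.
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub enum Role {
    System,
    User,
    Assistant, // assumed variant; the examples here only show System and User
}

#[derive(Debug, Serialize, Deserialize)]
pub struct Message {
    pub role: Role,
    pub content: String,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct Choice {
    pub message: Message,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct ChatCompletionResponse {
    pub choices: Vec<Choice>, // enables `response.choices[0].message.content`
}
```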

## Usage Patterns

```rust
use litellm::{LiteLLMClient, Message, Role};

async fn example_llm_call() -> Result<String, anyhow::Error> {
    // Create a client
    let client = LiteLLMClient::new("YOUR_API_KEY").await?;

    // Prepare messages
    let messages = vec![
        Message {
            role: Role::System,
            content: "You are a helpful assistant.".to_string(),
        },
        Message {
            role: Role::User,
            content: "What is the capital of France?".to_string(),
        },
    ];

    // Send request to LLM
    let response = client.generate(messages, None).await?;

    Ok(response.choices[0].message.content.clone())
}
```
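
Streaming is listed as a key capability, but the streaming entry point is not shown above. The sketch below assumes a hypothetical `generate_stream` method that yields content deltas as a `futures_util` stream of strings; check `client.rs` for the real method name, signature, and chunk type.

```rust
use futures_util::StreamExt;
use litellm::{LiteLLMClient, Message, Role};

// Hypothetical streaming call: `generate_stream` and the string-delta chunk
// type are assumptions for illustration, not the confirmed client API.
async fn example_streaming_call(client: &LiteLLMClient) -> Result<String, anyhow::Error> {
    let messages = vec![Message {
        role: Role::User,
        content: "Write a haiku about Paris.".to_string(),
    }];

    // Pin the stream so it can be polled with `next()`.
    let mut stream = Box::pin(client.generate_stream(messages, None).await?);
    let mut full_response = String::new();

    // Consume chunks as they arrive instead of waiting for the whole reply.
    while let Some(chunk) = stream.next().await {
        let delta = chunk?; // assumed: each item is a Result<String, _>
        print!("{delta}");
        full_response.push_str(&delta);
    }

    Ok(full_response)
}
```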

## Common Implementation Patterns
- Initialize the client once and reuse it
- Structure conversations as sequences of messages
- Use system messages to set context and behavior
- Handle streaming responses with callbacks
- Include proper error handling for API failures (see the retry sketch after this list)
- Set appropriate parameters for different use cases
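
The points about client reuse and error handling can be combined into a small application-level wrapper. This is only a sketch of the pattern: the client already includes its own retry logic, `Message` is assumed to implement `Clone`, and the attempt count and backoff values are arbitrary choices for the example.

```rust
use std::time::Duration;
use litellm::{LiteLLMClient, Message};

/// Single call, mirroring the usage example above.
async fn call_once(
    client: &LiteLLMClient,
    messages: Vec<Message>,
) -> Result<String, anyhow::Error> {
    let response = client.generate(messages, None).await?;
    Ok(response.choices[0].message.content.clone())
}

/// Application-level retry with exponential backoff (illustration only).
/// Assumes `Message` implements `Clone`.
async fn generate_with_retry(
    client: &LiteLLMClient,
    messages: Vec<Message>,
    max_attempts: u32,
) -> Result<String, anyhow::Error> {
    let mut delay = Duration::from_millis(500);
    let mut last_err = anyhow::anyhow!("no attempts were made");

    for attempt in 1..=max_attempts {
        match call_once(client, messages.clone()).await {
            Ok(content) => return Ok(content),
            Err(err) => {
                eprintln!("LLM call failed (attempt {attempt}/{max_attempts}): {err}");
                last_err = err;
                if attempt < max_attempts {
                    tokio::time::sleep(delay).await;
                    delay *= 2; // back off before the next attempt
                }
            }
        }
    }

    Err(last_err)
}
```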

## Dependencies

- **Internal Dependencies**:
  - None - this is a foundational library that other libraries depend on
- **External Dependencies**:
  - `reqwest`: For making HTTP requests to LLM providers
  - `serde_json`: For serializing and deserializing JSON
  - `tokio`: For async runtime support
  - `async-trait`: For async trait implementations
  - `futures` and `futures-util`: For async stream processing

## Code Navigation Tips

- Start with `lib.rs` to see what's exported
- `client.rs` contains the main client implementation
- `types.rs` defines the data structures for requests and responses
- Look for provider-specific code in the client implementation
- Understand the message structure and how it's processed

## Testing Guidelines

- Mock API responses using `mockito` for unit tests (see the test sketch after this list)
- Test different response formats and error conditions
- Validate request formatting for each provider
- Test streaming functionality with chunked responses
- Run tests with: `cargo test -p litellm`
- Use environment variables for API keys in integration tests
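
A minimal shape for a mocked unit test is sketched below, assuming mockito 1.x, an OpenAI-style `/chat/completions` endpoint, and a hypothetical `new_with_base_url` constructor for pointing the client at the mock server (the constructor shown earlier only takes an API key, so check `client.rs` for how the endpoint is actually configured).

```rust
use mockito::Server;

// Sketch of a unit test against a mocked provider endpoint (mockito 1.x).
// `new_with_base_url` and the request path are assumptions; the real client
// may configure its endpoint differently.
#[tokio::test]
async fn returns_content_from_mocked_provider() {
    let mut server = Server::new_async().await;

    // Canned OpenAI-style chat completion body.
    let _mock = server
        .mock("POST", "/chat/completions")
        .with_status(200)
        .with_header("content-type", "application/json")
        .with_body(r#"{"choices":[{"message":{"role":"assistant","content":"Paris"}}]}"#)
        .create_async()
        .await;

    let client = litellm::LiteLLMClient::new_with_base_url("test-key", &server.url())
        .await
        .expect("client should build");

    let messages = vec![litellm::Message {
        role: litellm::Role::User,
        content: "What is the capital of France?".to_string(),
    }];

    let response = client
        .generate(messages, None)
        .await
        .expect("call should succeed");
    assert_eq!(response.choices[0].message.content, "Paris");
}
```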