Provider Trait
The Provider trait is the unified interface for all LLM backends in OneClaw. All 6 providers (Anthropic, OpenAI, DeepSeek, Groq, Gemini, Ollama) implement this trait.
Trait Definition
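The trait's source listing is not shown here; the following is a minimal Rust sketch reconstructed from the method descriptions below. The Result alias, the struct fields, and the EchoProvider mock are illustrative assumptions, not OneClaw's actual code.

```rust
// Sketch of the Provider trait, reconstructed from the method list below.
// Error type, field names, and EchoProvider are assumptions.
type Result<T> = std::result::Result<T, String>;

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum MessageRole {
    System,
    User,
    Assistant,
}

#[derive(Debug, Clone)]
pub struct ChatMessage {
    pub role: MessageRole,
    pub content: String,
}

#[derive(Debug, Clone, Default)]
pub struct TokenUsage {
    pub input_tokens: u32,
    pub output_tokens: u32,
}

#[derive(Debug, Clone)]
pub struct ProviderResponse {
    pub text: String,
    pub provider_id: String,
    pub usage: TokenUsage,
}

pub trait Provider {
    fn id(&self) -> &'static str;
    fn display_name(&self) -> &str;
    fn is_available(&self) -> bool;
    fn chat(&self, system: &str, user_message: &str) -> Result<ProviderResponse>;
    fn chat_with_history(
        &self,
        system: &str,
        messages: &[ChatMessage],
    ) -> Result<ProviderResponse>;
}

/// Toy in-memory backend that echoes its input; shows the trait is
/// straightforward to implement.
pub struct EchoProvider;

impl Provider for EchoProvider {
    fn id(&self) -> &'static str { "echo" }
    fn display_name(&self) -> &str { "Echo (mock)" }
    fn is_available(&self) -> bool { true }

    fn chat(&self, _system: &str, user_message: &str) -> Result<ProviderResponse> {
        Ok(ProviderResponse {
            text: user_message.to_string(),
            provider_id: self.id().to_string(),
            usage: TokenUsage::default(),
        })
    }

    fn chat_with_history(
        &self,
        system: &str,
        messages: &[ChatMessage],
    ) -> Result<ProviderResponse> {
        // Echo the most recent message in the history.
        let last = messages.last().map(|m| m.content.as_str()).unwrap_or("");
        self.chat(system, last)
    }
}
```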
Methods
id() -> &'static str
Returns the provider identifier: "anthropic", "openai", "deepseek", "groq", "google", or "ollama".
display_name() -> &str
Human-readable name for display: "Anthropic Claude", "OpenAI GPT", "Google Gemini", etc.
is_available() -> bool
Check if the provider is ready to serve requests:
- Cloud providers (Anthropic, OpenAI, etc.): Returns true if an API key is configured
- Local providers (Ollama): Performs an actual health check with a 5s timeout
chat(system, user_message) -> Result<ProviderResponse>
Send a single message with system prompt. Used for simple queries, alert generation, classification.
Parameters:
- system: System instruction (empty string if none)
- user_message: User's message text

Returns: Result<ProviderResponse> containing the generated text, provider ID, and token usage.
chat_with_history(system, messages) -> Result<ProviderResponse>
Send a conversation with history. Used for multi-turn conversations and context-aware responses.
Parameters:
- system: System instruction (empty string if none)
- messages: Array of ChatMessage with the conversation history

Returns: Result<ProviderResponse> with the generated response.
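As an illustration of how a history is assembled for chat_with_history, the sketch below builds a multi-turn conversation. ChatMessage and MessageRole are cut-down stand-ins for the types described under Supporting Types; the msg helper and the message contents are hypothetical.

```rust
// Cut-down stand-ins for the ChatMessage / MessageRole types described
// under "Supporting Types"; the helper and contents are hypothetical.
#[derive(Debug, Clone, Copy, PartialEq)]
enum MessageRole {
    User,
    Assistant,
}

#[derive(Debug, Clone)]
struct ChatMessage {
    role: MessageRole,
    content: String,
}

fn msg(role: MessageRole, content: &str) -> ChatMessage {
    ChatMessage { role, content: content.to_string() }
}

// A three-turn history: prior user/assistant turns plus the new question.
fn build_history() -> Vec<ChatMessage> {
    vec![
        msg(MessageRole::User, "What is the CPU temperature?"),
        msg(MessageRole::Assistant, "The CPU is at 61 degrees C."),
        msg(MessageRole::User, "Is that within the safe range?"),
    ]
}
```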
Supporting Types
ChatMessage
Represents a message in a conversation.
MessageRole
Role of a participant in a conversation.
ProviderResponse
Response from an LLM provider.
TokenUsage
Token usage statistics.
ProviderConfig
Configuration for a provider instance.
EmbeddingProvider Trait
The EmbeddingProvider trait is the interface for generating text embeddings for vector memory.
Trait Definition
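The trait listing is likewise not shown; below is a minimal Rust sketch reconstructed from the method descriptions that follow. Treating model_id and embed_batch as default methods, the Result alias, the Vec<f32> embedding representation, and the ToyEmbedder mock are all assumptions for illustration.

```rust
// Sketch of the EmbeddingProvider trait, reconstructed from the method
// list below. Defaults, error type, and ToyEmbedder are assumptions.
type Result<T> = std::result::Result<T, String>;
pub type Embedding = Vec<f32>;

pub trait EmbeddingProvider {
    fn id(&self) -> &str;
    fn model_name(&self) -> &str;
    fn dimensions(&self) -> usize;
    fn is_available(&self) -> bool;
    fn embed(&self, text: &str) -> Result<Embedding>;

    /// Full model identifier for storage, e.g. "ollama:nomic-embed-text".
    fn model_id(&self) -> String {
        format!("{}:{}", self.id(), self.model_name())
    }

    /// Default batch implementation: calls embed() in a loop. Providers
    /// with native batch endpoints can override this.
    fn embed_batch(&self, texts: &[String]) -> Result<Vec<Embedding>> {
        texts.iter().map(|t| self.embed(t)).collect()
    }
}

/// Toy deterministic embedder used only to exercise the trait.
pub struct ToyEmbedder;

impl EmbeddingProvider for ToyEmbedder {
    fn id(&self) -> &str { "toy" }
    fn model_name(&self) -> &str { "toy-embed" }
    fn dimensions(&self) -> usize { 4 }
    fn is_available(&self) -> bool { true }

    fn embed(&self, text: &str) -> Result<Embedding> {
        // Dummy vector: input length repeated to the configured dimensions.
        Ok(vec![text.len() as f32; self.dimensions()])
    }
}
```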
Methods
id() -> &str
Provider identifier: "ollama" or "openai".
model_name() -> &str
Model name: "nomic-embed-text", "text-embedding-3-small", etc.
model_id() -> String
Full model identifier for storage: "ollama:nomic-embed-text", "openai:text-embedding-3-small".
embed(text) -> Result<Embedding>
Generate embedding for a single text.
Parameters:
text: Text to embed
Returns: Result<Embedding> containing the vector representation.
embed_batch(texts) -> Result<Vec<Embedding>>
Generate embeddings for multiple texts in one request (batch operation).
Parameters:
texts: Array of text strings to embed
Returns: Result<Vec<Embedding>> with the embeddings in the same order as the input.
Note: Default implementation calls embed() in a loop. Providers with native batch support can override.
dimensions() -> usize
Expected embedding dimensions for the configured model:
- nomic-embed-text: 768
- text-embedding-3-small: 1536
- text-embedding-3-large: 3072
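The model-to-dimensions mapping above can be made concrete as a simple lookup. This helper is hypothetical, not an actual OneClaw function:

```rust
// Hypothetical lookup mirroring the dimension table above; not an
// actual OneClaw function.
fn dimensions_for(model: &str) -> Option<usize> {
    match model {
        "nomic-embed-text" => Some(768),
        "text-embedding-3-small" => Some(1536),
        "text-embedding-3-large" => Some(3072),
        _ => None,
    }
}
```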
is_available() -> bool
Check if the embedding provider is reachable:
- Ollama: Checks the GET /api/tags endpoint
- OpenAI: Checks if an API key is configured
Design Principles
Sync-Friendly
All methods are synchronous (they return Result, not Stream). Designed for edge/IoT simplicity; a future v2.0 may add streaming.
Simple
Only 2 chat methods: single message + history. No complex abstractions.
No Lock-In
OpenAI-compatible API format as common denominator. Easy to swap providers via config.
Object-Safe
All traits are object-safe, allowing Box<dyn Provider> and Box<dyn EmbeddingProvider> for dynamic dispatch.
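Object safety in practice can be shown with a cut-down stand-in trait (the real Provider has more methods; the config-driven selection function is hypothetical):

```rust
// Cut-down stand-in for Provider, used only to illustrate object-safe
// dynamic dispatch; provider_from_config is hypothetical, not OneClaw's API.
trait Provider {
    fn id(&self) -> &'static str;
}

struct Anthropic;
struct Ollama;

impl Provider for Anthropic {
    fn id(&self) -> &'static str { "anthropic" }
}

impl Provider for Ollama {
    fn id(&self) -> &'static str { "ollama" }
}

// Because the trait is object-safe, any backend can sit behind the same
// Box<dyn Provider>, so swapping providers is just a config change.
fn provider_from_config(name: &str) -> Option<Box<dyn Provider>> {
    match name {
        "anthropic" => Some(Box::new(Anthropic)),
        "ollama" => Some(Box::new(Ollama)),
        _ => None,
    }
}
```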