Overview
LLM utilities provide helper functions for working with language model providers, including OpenAI, Anthropic, and OpenRouter. These utilities are used throughout the agent framework for model initialization and configuration.
Provider Functions
get_openai_provider
Creates an OpenAI provider instance for use with PydanticAI agents. Location: prediction_market_agent_tooling.tools.openai_utils
Parameters:
- OpenAI API key from environment or APIKeys
- Custom base URL for the OpenAI API (optional). Use for:
  - OpenRouter: https://openrouter.ai/api/v1
  - Custom endpoints
  - Proxies
Returns: a configured OpenAI provider instance for PydanticAI.
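As a rough illustration of how such a helper resolves its inputs, here is a stdlib-only sketch. `ProviderConfig` is a stand-in for the PydanticAI provider object the real function returns, and the parameter names are assumptions, not the actual signature:

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderConfig:
    # Stand-in for the PydanticAI OpenAIProvider the real helper returns.
    api_key: str
    base_url: Optional[str]

def get_openai_provider(api_key: Optional[str] = None,
                        base_url: Optional[str] = None) -> ProviderConfig:
    # Resolve the key from the argument, falling back to the environment.
    key = api_key or os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise ValueError("No OpenAI API key provided or found in the environment")
    # A custom base_url points the same provider at OpenRouter or a proxy.
    return ProviderConfig(api_key=key, base_url=base_url)

# Pointing the provider at OpenRouter instead of api.openai.com:
provider = get_openai_provider(api_key="sk-example",
                               base_url="https://openrouter.ai/api/v1")
```

The same helper covers all OpenAI-compatible endpoints because only the base URL changes.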
Configuration Classes
APIKeys
Configuration class for managing API keys and credentials. Location: prediction_market_agent.utils
Properties:
- OpenAI API key (raises an error if not set)
- OpenRouter API key (raises an error if not set)
- Anthropic API key (raises an error if not set)
- Replicate API key (raises an error if not set)
- Tavily search API key (raises an error if not set)
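A stdlib-only stand-in for this pattern is shown below; the real class likely uses pydantic's BaseSettings with SecretStr fields (per the Best Practices section), and the property and environment-variable names here are assumptions:

```python
import os

class APIKeys:
    # Illustrative stand-in for prediction_market_agent.utils.APIKeys:
    # each property reads an environment variable and raises if unset.
    @staticmethod
    def _require(name: str) -> str:
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"Environment variable {name} is not set")
        return value

    @property
    def openai_api_key(self) -> str:
        return self._require("OPENAI_API_KEY")

    @property
    def openrouter_api_key(self) -> str:
        return self._require("OPENROUTER_API_KEY")

    @property
    def anthropic_api_key(self) -> str:
        return self._require("ANTHROPIC_API_KEY")
```

Failing loudly on access (rather than returning an empty string) surfaces misconfiguration at startup instead of mid-run.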
Environment Variables
All keys are loaded from environment variables.
DBKeys
Database configuration for caching and storage. Location: prediction_market_agent.utils
Database URL for SQLAlchemy (optional)
Model Configuration
DEFAULT_OPENAI_MODEL
Default OpenAI model used throughout the agent framework. Location: prediction_market_agent.utils
This constant ensures consistent model usage across agents. Do not switch to a less capable or more expensive model without thorough testing.
OPENROUTER_BASE_URL
Base URL for the OpenRouter API. Location: prediction_market_agent.utils
Utility Functions
get_market_prompt
Generates a standardized prompt for market prediction questions. Location: prediction_market_agent.utils
Parameter: the market question to research
Returns: a formatted prompt for the LLM
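A minimal sketch of what such a prompt builder can look like; the wording below is purely illustrative, and the actual template lives in prediction_market_agent.utils:

```python
def get_market_prompt(question: str) -> str:
    # Illustrative wording only; the real template differs.
    return (
        f'Research the following prediction market question: "{question}"\n'
        'Answer with "Yes" or "No".'
    )

print(get_market_prompt("Will it rain in Berlin tomorrow?"))
```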
parse_result_to_boolean
Converts an LLM text response to a boolean. Location: prediction_market_agent.utils
Parameter: the LLM response string ("Yes" or "No")
Returns: True for "Yes", False for "No"
parse_result_to_str
Converts a boolean to a standardized string format. Location: prediction_market_agent.utils
Parameter: the boolean value to convert
Returns: "Yes" for True, "No" for False
completion_str_to_json
Cleans and parses JSON from LLM completions. Location: prediction_market_agent.utils
Parameter: LLM completion string containing JSON (possibly with markdown code fences)
Returns: parsed JSON dictionary
Handles:
- JSON wrapped in markdown code blocks
- Extra whitespace
- Text before/after the JSON
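The three conversion helpers (parse_result_to_boolean, parse_result_to_str, completion_str_to_json) might be sketched as follows; these are illustrative reimplementations, not the actual code from prediction_market_agent.utils:

```python
import json
import re

def parse_result_to_boolean(result: str) -> bool:
    # Map "Yes"/"No" (tolerating case, quotes, and whitespace) to a boolean.
    cleaned = result.strip().strip('"').lower()
    if cleaned == "yes":
        return True
    if cleaned == "no":
        return False
    raise ValueError(f"Unexpected LLM response: {result!r}")

def parse_result_to_str(result: bool) -> str:
    return "Yes" if result else "No"

def completion_str_to_json(completion: str) -> dict:
    # Drop markdown code fences, then parse the outermost {...} block,
    # ignoring any text before or after it.
    text = re.sub(r"```(?:json)?", "", completion)
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in completion")
    return json.loads(text[start : end + 1])
```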
patch_sqlite3
Patches SQLite3 to use pysqlite3-binary in restricted environments. Location: prediction_market_agent.utils
Useful in environments such as Streamlit Cloud, where the system SQLite cannot be upgraded and Chroma requires SQLite >= 3.35.0.
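The usual implementation of this kind of patch swaps the module in sys.modules before anything imports sqlite3; a sketch, guarded so it falls back to the system SQLite when pysqlite3-binary is not installed:

```python
import sys

def patch_sqlite3() -> None:
    # Replace the stdlib sqlite3 with pysqlite3-binary, which bundles a
    # modern SQLite (Chroma needs >= 3.35.0). Run this before sqlite3 is
    # first imported anywhere in the process.
    try:
        __import__("pysqlite3")
        sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")
    except ImportError:
        pass  # pysqlite3-binary not installed; keep the system sqlite3

patch_sqlite3()
import sqlite3  # now resolves to pysqlite3 when available

print(sqlite3.sqlite_version)
```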
Provider Examples
OpenAI
Anthropic
OpenRouter
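A sketch of the three provider setups with PydanticAI; the import paths follow pydantic-ai's documented layout, the model names are placeholders, and the imports are guarded so the block is a no-op where pydantic-ai is not installed:

```python
import os

def build_example_models():
    # Returns a dict of configured models, or None if pydantic-ai is missing.
    try:
        from pydantic_ai.models.anthropic import AnthropicModel
        from pydantic_ai.models.openai import OpenAIModel
        from pydantic_ai.providers.anthropic import AnthropicProvider
        from pydantic_ai.providers.openai import OpenAIProvider
    except ImportError:
        return None

    openai_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")
    openrouter_key = os.environ.get("OPENROUTER_API_KEY", "sk-placeholder")
    anthropic_key = os.environ.get("ANTHROPIC_API_KEY", "sk-placeholder")

    return {
        # OpenAI: default base URL.
        "openai": OpenAIModel(
            "gpt-4o", provider=OpenAIProvider(api_key=openai_key)
        ),
        # OpenRouter: same OpenAI-compatible provider, custom base URL.
        "openrouter": OpenAIModel(
            "anthropic/claude-3.5-sonnet",
            provider=OpenAIProvider(
                base_url="https://openrouter.ai/api/v1", api_key=openrouter_key
            ),
        ),
        # Anthropic: its own provider class.
        "anthropic": AnthropicModel(
            "claude-3-5-sonnet-latest",
            provider=AnthropicProvider(api_key=anthropic_key),
        ),
    }
```

Note that OpenRouter reuses the OpenAI provider class: only the base URL and key differ.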
Model Settings
Temperature Guidelines
Research (0.7)
Use for:
- Research agents
- Generating search queries
- Creative analysis
- Exploring possibilities
Prediction (0.0)
Use for:
- Final predictions
- Probability estimates
- Deterministic outputs
- Consistent results
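These guidelines can be encoded as a small helper; the task labels below are illustrative, and with PydanticAI the resulting value would typically be passed via model settings:

```python
def temperature_for_task(task: str) -> float:
    # Encodes the temperature guidelines above. Task labels are illustrative.
    research_tasks = {"research", "search_queries", "creative_analysis"}
    prediction_tasks = {"prediction", "probability_estimate"}
    if task in research_tasks:
        return 0.7  # exploration and creativity
    if task in prediction_tasks:
        return 0.0  # deterministic, consistent outputs
    raise ValueError(f"No temperature guideline for task: {task}")
```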
Best Practices
Key Management
- Use environment variables for all keys
- Never hardcode API keys
- Use SecretStr for key storage
- Validate keys on startup
Model Selection
- Use DEFAULT_OPENAI_MODEL for consistency
- Test thoroughly before changing defaults
- Consider cost vs. performance tradeoffs
- Document model-specific requirements
Provider Configuration
- Always use the get_openai_provider helper
- Set appropriate base URLs for custom endpoints
- Configure timeouts for production
- Handle provider errors gracefully
Temperature Settings
- 0.7 for research and creativity
- 0.0 for predictions and deterministic tasks
- 1.0 for O-series models (required)
- Test different values for your use case
Error Handling
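One common pattern for handling transient provider errors gracefully is retrying with exponential backoff; a stdlib-only sketch (the framework's actual error handling may differ):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def call_with_retries(call: Callable[[], T],
                      attempts: int = 3,
                      base_delay: float = 1.0) -> T:
    # Retry a provider call on any exception, doubling the delay each time;
    # re-raise once the attempts are exhausted.
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("unreachable")
```

In production you would narrow the except clause to the provider's transient error types (rate limits, timeouts) rather than catching everything.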
Dependencies
See Also
- Prophet Agent API - Uses these utilities extensively
- Search API - Requires API keys
- Web Scraping API - Uses OpenAI for summarization
- Configuration Guide - Environment setup