
Overview

LLM utilities provide helper functions for working with language model providers, including OpenAI, Anthropic, and OpenRouter. These utilities are used throughout the agent framework for model initialization and configuration.

Provider Functions

get_openai_provider

Creates an OpenAI provider instance for use with PydanticAI agents. Location: prediction_market_agent_tooling.tools.openai_utils
Parameters:
  • api_key (SecretStr, required): OpenAI API key from the environment or APIKeys
  • base_url (str, optional): Custom base URL for the OpenAI API. Use for:
      • OpenRouter: https://openrouter.ai/api/v1
      • Custom endpoints
      • Proxies

Returns:
  • OpenAIProvider: Configured OpenAI provider instance for PydanticAI

from prediction_market_agent_tooling.tools.openai_utils import get_openai_provider
from prediction_market_agent.utils import APIKeys
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

api_keys = APIKeys()

agent = Agent(
    OpenAIModel(
        "gpt-4o-2024-08-06",
        provider=get_openai_provider(api_key=api_keys.openai_api_key),
    )
)

Configuration Classes

APIKeys

Configuration class for managing API keys and credentials. Location: prediction_market_agent.utils

Properties

  • openai_api_key (SecretStr): OpenAI API key (raises an error if not set)
  • openrouter_api_key (SecretStr): OpenRouter API key (raises an error if not set)
  • anthropic_api_key (SecretStr): Anthropic API key (raises an error if not set)
  • replicate_api_key (SecretStr): Replicate API key (raises an error if not set)
  • tavily_api_key (SecretStr): Tavily search API key (raises an error if not set)

Environment Variables

All keys are loaded from environment variables:
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-...
ANTHROPIC_API_KEY=sk-ant-...
REPLICATE_API_KEY=r8_...
TAVILY_API_KEY=tvly-...
from prediction_market_agent.utils import APIKeys

keys = APIKeys()

# Access keys (raises error if not set)
openai_key = keys.openai_api_key
tavily_key = keys.tavily_api_key

# Use with providers
provider = get_openai_provider(api_key=keys.openai_api_key)

DBKeys

Database configuration for caching and storage. Location: prediction_market_agent.utils
  • SQLALCHEMY_DB_URL (SecretStr | None): Database URL for SQLAlchemy (optional)

from prediction_market_agent.utils import DBKeys

db_keys = DBKeys()
if db_keys.SQLALCHEMY_DB_URL:
    print("Database configured")

Model Configuration

DEFAULT_OPENAI_MODEL

Default OpenAI model used throughout the agent framework. Location: prediction_market_agent.utils
DEFAULT_OPENAI_MODEL: KnownModelName = "openai:gpt-4o-2024-08-06"
This constant ensures consistent model usage across agents. Do not change it to a weaker or more expensive model without thorough testing.
from prediction_market_agent.utils import DEFAULT_OPENAI_MODEL
from pydantic_ai.models import infer_model

model = infer_model(DEFAULT_OPENAI_MODEL)

OPENROUTER_BASE_URL

Base URL for OpenRouter API. Location: prediction_market_agent.utils
OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"

Utility Functions

get_market_prompt

Generates standardized prompt for market prediction questions. Location: prediction_market_agent.utils
Parameters:
  • question (str, required): The market question to research

Returns:
  • str: Formatted prompt for the LLM

from prediction_market_agent.utils import get_market_prompt

question = "Will Bitcoin reach $100k by 2025?"
prompt = get_market_prompt(question)

print(prompt)
# Output:
# Research and report on the following question:
#
# Will Bitcoin reach $100k by 2025?
#
# Return ONLY a single world answer: 'Yes' or 'No', even if you are unsure. 
# If you are unsure, make your best guess.

parse_result_to_boolean

Converts LLM text response to boolean. Location: prediction_market_agent.utils
Parameters:
  • result (str, required): LLM response string ("Yes" or "No")

Returns:
  • bool: True for "Yes", False for "No"

Raises an error if the result is not "Yes" or "No" (case-insensitive).
from prediction_market_agent.utils import parse_result_to_boolean

result = "Yes"
boolean_result = parse_result_to_boolean(result)  # True
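The documented behavior (case-insensitive "Yes"/"No", error otherwise) can be illustrated with a self-contained sketch. Note this is an illustrative equivalent, not the library's actual implementation, and the helper name here is made up:

```python
def parse_yes_no(result: str) -> bool:
    """Illustrative equivalent of parse_result_to_boolean's documented behavior."""
    normalized = result.strip().lower()
    if normalized == "yes":
        return True
    if normalized == "no":
        return False
    raise ValueError(f"Expected 'Yes' or 'No', got {result!r}")
```

Because the parser raises on anything else, wrap the call in a try/except when the LLM response is not constrained to a single word.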

parse_result_to_str

Converts boolean to standardized string format. Location: prediction_market_agent.utils
Parameters:
  • result (bool, required): Boolean value to convert

Returns:
  • str: "Yes" for True, "No" for False
from prediction_market_agent.utils import parse_result_to_str

result = parse_result_to_str(True)   # "Yes"
result = parse_result_to_str(False)  # "No"

completion_str_to_json

Cleans and parses JSON from LLM completions. Location: prediction_market_agent.utils
Parameters:
  • completion (str, required): LLM completion string containing JSON (possibly with markdown code fences)

Returns:
  • dict[str, Any]: Parsed JSON dictionary

Handles:
  • JSON wrapped in markdown code blocks
  • Extra whitespace
  • Text before/after the JSON
from prediction_market_agent.utils import completion_str_to_json

completion = '''
```json
{
    "result": "YES",
    "reasoning": "Based on current trends..."
}
```
'''

data = completion_str_to_json(completion)
# data is now a parsed dict, e.g. data["result"]
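The cleaning behavior described above can be approximated with a small self-contained sketch. The function name is hypothetical and the real library implementation may differ:

```python
import json
import re
from typing import Any


def extract_json(completion: str) -> dict[str, Any]:
    # Sketch of the documented cleaning behavior: locate the outermost
    # JSON object, ignoring code fences and any surrounding text.
    match = re.search(r"\{.*\}", completion, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in completion")
    return json.loads(match.group(0))
```

This handles fenced and unfenced completions alike, since the regex skips everything outside the first `{` and last `}`.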

patch_sqlite3

Patches SQLite3 to use pysqlite3-binary in restricted environments. Location: prediction_market_agent.utils
Useful in environments like Streamlit Cloud where system SQLite cannot be updated and Chroma requires SQLite >= 3.35.0.
from prediction_market_agent.utils import patch_sqlite3

# Call before importing Chroma or other SQLite-dependent libraries
patch_sqlite3()

import chromadb

Provider Examples

OpenAI

from prediction_market_agent_tooling.tools.openai_utils import get_openai_provider
from prediction_market_agent.utils import APIKeys, DEFAULT_OPENAI_MODEL
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.settings import ModelSettings

api_keys = APIKeys()

model = OpenAIModel(
    DEFAULT_OPENAI_MODEL,
    provider=get_openai_provider(api_key=api_keys.openai_api_key),
)

agent = Agent(model, model_settings=ModelSettings(temperature=0.0))
result = agent.run_sync("What is 2+2?")

Anthropic

from prediction_market_agent.utils import APIKeys
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider
from pydantic_ai.settings import ModelSettings

api_keys = APIKeys()

agent = Agent(
    AnthropicModel(
        "claude-3-5-sonnet-20241022",
        provider=AnthropicProvider(
            api_key=api_keys.anthropic_api_key.get_secret_value()
        ),
    ),
    model_settings=ModelSettings(temperature=0.7),
)

OpenRouter

from prediction_market_agent_tooling.tools.openai_utils import get_openai_provider
from prediction_market_agent.utils import APIKeys, OPENROUTER_BASE_URL
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

api_keys = APIKeys()

# DeepSeek via OpenRouter
deepseek_agent = Agent(
    OpenAIModel(
        "deepseek/deepseek-chat",
        provider=get_openai_provider(
            api_key=api_keys.openrouter_api_key,
            base_url=OPENROUTER_BASE_URL,
        ),
    )
)

# Gemini via OpenRouter
gemini_agent = Agent(
    OpenAIModel(
        "google/gemini-2.0-flash-001",
        provider=get_openai_provider(
            api_key=api_keys.openrouter_api_key,
            base_url=OPENROUTER_BASE_URL,
        ),
    )
)

Model Settings

Temperature Guidelines

Research (0.7)

Use for:
  • Research agents
  • Generating search queries
  • Creative analysis
  • Exploring possibilities
ModelSettings(temperature=0.7)

Prediction (0.0)

Use for:
  • Final predictions
  • Probability estimates
  • Deterministic outputs
  • Consistent results
ModelSettings(temperature=0.0)

Best Practices

Key Management

  • Use environment variables for all keys
  • Never hardcode API keys
  • Use SecretStr for key storage
  • Validate keys on startup
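
The "validate keys on startup" practice can be sketched without any framework dependencies, using the variable names from the Environment Variables section above (the helper name is made up for illustration):

```python
import os

REQUIRED_ENV_KEYS = ["OPENAI_API_KEY", "TAVILY_API_KEY"]


def validate_env(required: list[str], env=os.environ) -> None:
    """Fail fast at startup if any required key is missing or empty."""
    missing = [key for key in required if not env.get(key)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
```

Call `validate_env(REQUIRED_ENV_KEYS)` once at process startup, before constructing any agents, so a misconfigured deployment fails immediately rather than mid-run.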

Model Selection

  • Use DEFAULT_OPENAI_MODEL for consistency
  • Test thoroughly before changing defaults
  • Consider cost vs. performance tradeoffs
  • Document model-specific requirements

Provider Configuration

  • Always use get_openai_provider helper
  • Set appropriate base URLs for custom endpoints
  • Configure timeouts for production
  • Handle provider errors gracefully

Temperature Settings

  • 0.7 for research and creativity
  • 0.0 for predictions and deterministic tasks
  • 1.0 for O-series models (required)
  • Test different values for your use case

Error Handling

Common errors:
  • Missing API keys in environment
  • Invalid API keys
  • Rate limiting
  • Model not available
  • Invalid temperature for model
from prediction_market_agent.utils import APIKeys

try:
    keys = APIKeys()
    openai_key = keys.openai_api_key  # Raises if not set
except Exception as e:
    print(f"API key error: {e}")
    exit(1)

Dependencies

pip install pydantic-ai openai anthropic pydantic pydantic-settings
