The Config module loads and exports all environment variables used throughout Junkie.

Environment Loading

Configuration is loaded from a .env file at startup using python-dotenv:
from dotenv import load_dotenv
load_dotenv()
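
After loading, the module presumably reads each variable with os.getenv and casts it to the documented int/float/bool type. A minimal sketch of that pattern — the helper names below are illustrative, not Junkie's actual API:

```python
import os

def env_str(name: str, default: str = "") -> str:
    """Return the raw string value, or the default if unset."""
    return os.getenv(name, default)

def env_int(name: str, default: int) -> int:
    """Parse an int-valued variable, falling back to the default."""
    raw = os.getenv(name)
    return int(raw) if raw is not None else default

def env_float(name: str, default: float) -> float:
    """Parse a float-valued variable, falling back to the default."""
    raw = os.getenv(name)
    return float(raw) if raw is not None else default

def env_bool(name: str, default: bool = False) -> bool:
    """Treat "1", "true", "yes", "on" (any case) as True."""
    raw = os.getenv(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

MODEL_TEMPERATURE = env_float("MODEL_TEMPERATURE", 0.3)
DEBUG_MODE = env_bool("DEBUG_MODE", False)
```

Explicit casting matters because every environment variable arrives as a string; `bool("false")` is True in Python, so a helper like `env_bool` is needed.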

Database Configuration

POSTGRES_URL
str
default:""
PostgreSQL connection URL for database operations
Example:
POSTGRES_URL=postgresql://user:pass@localhost:5432/junkie

Model and Provider Configuration

PROVIDER
str
default:"groq"
AI model provider ("groq" or custom provider URL)

MODEL_NAME
str
default:"openai/gpt-oss-120b"
Model identifier to use

CUSTOM_PROVIDER_API_KEY
str
default:None
API key for custom provider

GROQ_API_KEY
str
default:""
Groq API key for Groq models
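
Since PROVIDER doubles as either the literal string "groq" or a custom base URL, the module presumably branches on it when building the model client. A sketch of that selection logic — the Groq endpoint URL and the returned dict shape are assumptions, not Junkie's actual code:

```python
import os

def resolve_provider(provider: str) -> dict:
    """Map the PROVIDER setting to an endpoint and API key.

    "groq" selects the Groq API authenticated with GROQ_API_KEY; any
    other value is treated as a custom OpenAI-compatible base URL
    paired with CUSTOM_PROVIDER_API_KEY.
    """
    if provider == "groq":
        return {
            # Assumed Groq OpenAI-compatible endpoint.
            "base_url": "https://api.groq.com/openai/v1",
            "api_key": os.getenv("GROQ_API_KEY", ""),
        }
    return {
        "base_url": provider,
        "api_key": os.getenv("CUSTOM_PROVIDER_API_KEY", ""),
    }

provider_cfg = resolve_provider(os.getenv("PROVIDER", "groq"))
```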

FIRECRAWL_API_KEY
str
default:""
Firecrawl API key for web scraping capabilities

Agent Configuration

MODEL_TEMPERATURE
float
default:"0.3"
Model temperature for response randomness (0.0-1.0)

MODEL_TOP_P
float
default:"0.9"
Top-p sampling parameter (0.0-1.0)

AGENT_HISTORY_RUNS
int
default:"1"
Number of previous conversation runs to include in context

AGENT_RETRIES
int
default:"2"
Number of retry attempts for failed agent calls

DEBUG_MODE
bool
default:"false"
Enable debug mode for agents

DEBUG_LEVEL
int
default:"1"
Debug verbosity level (1-3)

MAX_AGENTS
int
default:"100"
Maximum number of agent teams to cache in memory

Tracing Configuration

TRACING_ENABLED
bool
default:"false"
Enable Phoenix tracing for observability

PHOENIX_API_KEY
str
default:None
Phoenix API key for tracing

PHOENIX_ENDPOINT
str
Phoenix tracing endpoint URL (no default; set this when tracing is enabled)

PHOENIX_PROJECT_NAME
str
default:"junkie"
Phoenix project name for trace organization

MCP Configuration

MCP_URLS
str
default:""
Comma-separated list of MCP server URLs
Example:
MCP_URLS=https://mcp1.example.com,https://mcp2.example.com
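
Because MCP_URLS arrives as one comma-separated string, consumers need to split it before connecting. A minimal sketch (the function name is illustrative) that tolerates stray whitespace and trailing commas:

```python
import os

def parse_mcp_urls(raw: str) -> list[str]:
    """Split a comma-separated URL list, dropping blanks and surrounding whitespace."""
    return [url.strip() for url in raw.split(",") if url.strip()]

MCP_URLS = parse_mcp_urls(os.getenv("MCP_URLS", ""))
```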

Chat Context Configuration

CONTEXT_AGENT_MODEL
str
default:"gemini-2.5-flash-lite"
Model to use for context/history analysis

CONTEXT_AGENT_MAX_MESSAGES
int
default:"50000"
Maximum messages to analyze for context

TEAM_LEADER_CONTEXT_LIMIT
int
default:"100"
Number of recent messages to include in team leader context

Usage Example

from core.config import (
    PROVIDER,
    MODEL_NAME,
    GROQ_API_KEY,
    MAX_AGENTS,
    POSTGRES_URL
)

print(f"Using provider: {PROVIDER}")
print(f"Model: {MODEL_NAME}")
print(f"Max agents: {MAX_AGENTS}")

Environment File Template

# Database
POSTGRES_URL=postgresql://user:pass@localhost:5432/junkie

# Model Provider
PROVIDER=https://api.example.com/v1
CUSTOM_PROVIDER_API_KEY=your_api_key
MODEL_NAME=gpt-5
GROQ_API_KEY=your_groq_key

# Agent Settings
MODEL_TEMPERATURE=0.3
MODEL_TOP_P=0.9
AGENT_HISTORY_RUNS=1
AGENT_RETRIES=2
MAX_AGENTS=100

# Debug
DEBUG_MODE=false
DEBUG_LEVEL=1

# Tracing
TRACING_ENABLED=false
PHOENIX_API_KEY=your_phoenix_key
PHOENIX_PROJECT_NAME=junkie

# MCP
MCP_URLS=https://mcp.example.com

# Context
CONTEXT_AGENT_MODEL=gemini-2.5-flash-lite
CONTEXT_AGENT_MAX_MESSAGES=50000
TEAM_LEADER_CONTEXT_LIMIT=100

# Optional
FIRECRAWL_API_KEY=your_firecrawl_key
