Esprit CLI can be configured using environment variables. All configuration is tracked and persisted in `~/.esprit/cli-config.json`.

## Configuration Management
```python
class Config:
    """Configuration manager for Esprit."""

    # LLM configuration
    esprit_llm = None
    llm_api_key = None
    llm_api_base = None
    openai_api_base = None
    litellm_base_url = None
    ollama_api_base = None
    esprit_reasoning_effort = "high"
    esprit_llm_max_retries = "5"
    esprit_memory_compressor_timeout = "30"
    llm_timeout = "300"

    # Tool & feature configuration
    perplexity_api_key = None
    esprit_disable_browser = "false"

    # Runtime configuration
    esprit_image = "improdead/esprit-sandbox:latest"
    esprit_docker_platform = None
    esprit_runtime_backend = "docker"
    esprit_sandbox_timeout = "60"
    esprit_sandbox_execution_timeout = "120"
    esprit_sandbox_connect_timeout = "10"

    # Telemetry
    esprit_telemetry = "1"
```
## LLM Configuration

### ESPRIT_LLM

**Required**: Model to use for scans.
```shell
# Esprit Cloud
export ESPRIT_LLM="esprit/default"
export ESPRIT_LLM="esprit/kimi-k2.5"  # Esprit Pro
export ESPRIT_LLM="esprit/haiku"      # Esprit Fast

# OpenCode
export ESPRIT_LLM="opencode/gpt-5.2-codex"
export ESPRIT_LLM="opencode/claude-opus-4-6"
export ESPRIT_LLM="opencode/gemini-3-pro"
export ESPRIT_LLM="opencode/kimi-k2.5"

# OpenAI
export ESPRIT_LLM="openai/gpt-5.3-codex"
export ESPRIT_LLM="openai/gpt-5.2"

# Anthropic
export ESPRIT_LLM="anthropic/claude-sonnet-4-5-20250514"
export ESPRIT_LLM="anthropic/claude-opus-4-5-20251101"

# Google
export ESPRIT_LLM="google/gemini-3-pro"
export ESPRIT_LLM="google/gemini-3-flash"

# Antigravity
export ESPRIT_LLM="antigravity/claude-opus-4-6-thinking"
export ESPRIT_LLM="antigravity/gemini-2.5-pro"
```
The environment variable takes precedence over the value saved in `~/.esprit/cli-config.json`.
### LLM_API_KEY

**Optional**: API key for the LLM provider.

```shell
export LLM_API_KEY="your-api-key-here"
```

Not needed for:

- Local models (Ollama, LMStudio)
- OAuth providers (Anthropic, OpenAI, Google with OAuth)
- Esprit subscription
- Public OpenCode models
### LLM_API_BASE

**Optional**: Custom API base URL.

```shell
# Ollama
export LLM_API_BASE="http://localhost:11434"

# LMStudio
export LLM_API_BASE="http://localhost:1234/v1"

# Custom OpenAI-compatible endpoint
export LLM_API_BASE="https://api.custom.com/v1"
```

#### Alternative Base URLs

```shell
# OpenAI-specific
export OPENAI_API_BASE="https://api.openai.com/v1"

# LiteLLM proxy
export LITELLM_BASE_URL="http://localhost:4000"

# Ollama-specific
export OLLAMA_API_BASE="http://localhost:11434"
```

Priority: `LLM_API_BASE` > `OPENAI_API_BASE` > `LITELLM_BASE_URL` > `OLLAMA_API_BASE`
### ESPRIT_REASONING_EFFORT

**Optional**: Reasoning effort level (default: `high`).

```shell
export ESPRIT_REASONING_EFFORT="high"
```

Options:

- `none` - Minimal reasoning
- `minimal` - Basic reasoning
- `low` - Light reasoning
- `medium` - Balanced reasoning
- `high` - Deep reasoning (recommended)
- `xhigh` - Maximum reasoning
### LLM_TIMEOUT

**Optional**: LLM API timeout in seconds (default: `300`).

```shell
export LLM_TIMEOUT="300"  # 5 minutes
```

### ESPRIT_LLM_MAX_RETRIES

**Optional**: Maximum retries for failed LLM calls (default: `5`).

```shell
export ESPRIT_LLM_MAX_RETRIES="5"
```

### ESPRIT_MEMORY_COMPRESSOR_TIMEOUT

**Optional**: Memory compression timeout in seconds (default: `30`).

```shell
export ESPRIT_MEMORY_COMPRESSOR_TIMEOUT="30"
```
### PERPLEXITY_API_KEY

**Optional**: Perplexity API key for web search.

```shell
export PERPLEXITY_API_KEY="pplx-xxxxx"
```

Enables real-time web research during scans. Get an API key from perplexity.ai.

### ESPRIT_DISABLE_BROWSER

**Optional**: Disable browser automation (default: `false`).

```shell
export ESPRIT_DISABLE_BROWSER="true"
```
## Runtime Configuration

### ESPRIT_IMAGE

**Required for Docker**: Sandbox container image.

```shell
# Latest
export ESPRIT_IMAGE="improdead/esprit-sandbox:latest"

# Specific version
export ESPRIT_IMAGE="improdead/esprit-sandbox:v1.2.3"

# Custom registry
export ESPRIT_IMAGE="myregistry.io/esprit-sandbox:custom"
```
### ESPRIT_DOCKER_PLATFORM

**Optional**: Docker platform (auto-detected).

```shell
# Force AMD64 on Apple Silicon
export ESPRIT_DOCKER_PLATFORM="linux/amd64"

# ARM64
export ESPRIT_DOCKER_PLATFORM="linux/arm64"
```

Esprit automatically uses `linux/amd64` on macOS and ARM hosts, since the sandbox image only publishes AMD64 manifests.
### ESPRIT_RUNTIME_BACKEND

**Optional**: Runtime backend (default: `docker`).

```shell
export ESPRIT_RUNTIME_BACKEND="docker"  # Local Docker
export ESPRIT_RUNTIME_BACKEND="cloud"   # Esprit Cloud
```

### ESPRIT_SANDBOX_TIMEOUT

**Optional**: Sandbox startup timeout in seconds (default: `60`).

```shell
export ESPRIT_SANDBOX_TIMEOUT="60"
```

### ESPRIT_SANDBOX_EXECUTION_TIMEOUT

**Optional**: Maximum execution time per tool in seconds (default: `120`).

```shell
export ESPRIT_SANDBOX_EXECUTION_TIMEOUT="120"
```

### ESPRIT_SANDBOX_CONNECT_TIMEOUT

**Optional**: Connection timeout in seconds (default: `10`).

```shell
export ESPRIT_SANDBOX_CONNECT_TIMEOUT="10"
```
## Telemetry

### ESPRIT_TELEMETRY

**Optional**: Enable telemetry (default: `1`).

```shell
export ESPRIT_TELEMETRY="1"  # Enable
export ESPRIT_TELEMETRY="0"  # Disable
```

Telemetry helps improve Esprit by collecting anonymous usage data. No sensitive information is collected.

## Cloud Configuration

### ESPRIT_API_URL

**Optional**: Esprit API base URL (default: `https://esprit.dev/api/v1`).

```shell
export ESPRIT_API_URL="https://esprit.dev/api/v1"
```

### OPENCODE_BASE_URL

**Optional**: OpenCode API base URL.

```shell
export OPENCODE_BASE_URL="https://opencode.ai/zen/v1"
```

Alternatively: `OPENCODE_API_BASE`
## Configuration File

All environment variables are persisted in `~/.esprit/cli-config.json`.

Example:

```json
{
  "env": {
    "ESPRIT_LLM": "opencode/gpt-5.2-codex",
    "ESPRIT_REASONING_EFFORT": "high",
    "ESPRIT_IMAGE": "improdead/esprit-sandbox:latest",
    "ESPRIT_TELEMETRY": "1",
    "PERPLEXITY_API_KEY": "pplx-xxxxx"
  },
  "ui": {
    "launchpad_theme": "esprit",
    "runtime_profile": "cloud"
  }
}
```
## Loading Configuration

```python
@classmethod
def apply_saved(cls, force: bool = False) -> dict[str, str]:
    """Load and apply saved configuration."""
    saved = cls.load()
    if not isinstance(saved, dict):
        saved = {}
    env_vars = saved.get("env", {})
    if not isinstance(env_vars, dict):
        env_vars = {}
    applied = {}
    for var_name, var_value in env_vars.items():
        if var_name in cls.tracked_vars() and (force or var_name not in os.environ):
            os.environ[var_name] = var_value
            applied[var_name] = var_value
    return applied
```
Configuration is automatically loaded at startup. Environment variables take precedence over saved config.
## Saving Configuration

```python
@classmethod
def save_current(cls) -> bool:
    """Save current environment to config file."""
    saved = cls.load()
    if not isinstance(saved, dict):
        saved = {}
    existing = saved.get("env", {})
    if not isinstance(existing, dict):
        existing = {}
    merged = dict(existing)
    for var_name in cls.tracked_vars():
        value = os.getenv(var_name)
        if value is None:
            pass  # unset in the shell: keep the previously saved value
        elif value == "":
            merged.pop(var_name, None)  # empty string: remove from the saved config
        else:
            merged[var_name] = value  # non-empty: overwrite
    saved["env"] = merged
    return cls.save(saved)
```
## Example Configurations

### OpenCode with Perplexity

```shell
export ESPRIT_LLM="opencode/gpt-5.2-codex"
export LLM_API_KEY="your-opencode-api-key"
export PERPLEXITY_API_KEY="pplx-xxxxx"
export ESPRIT_REASONING_EFFORT="high"
```

### Esprit Cloud

```shell
export ESPRIT_LLM="esprit/default"
export ESPRIT_RUNTIME_BACKEND="cloud"
export ESPRIT_REASONING_EFFORT="high"
```

### Local Ollama

```shell
export ESPRIT_LLM="ollama/llama3.1"
export LLM_API_BASE="http://localhost:11434"
export ESPRIT_RUNTIME_BACKEND="docker"
```

### OpenAI with Docker

```shell
export ESPRIT_LLM="openai/gpt-5.3-codex"
export ESPRIT_IMAGE="improdead/esprit-sandbox:latest"
export ESPRIT_RUNTIME_BACKEND="docker"
export ESPRIT_SANDBOX_TIMEOUT="60"
```

### Antigravity (Free)

```shell
export ESPRIT_LLM="antigravity/claude-opus-4-6-thinking"
export ESPRIT_RUNTIME_BACKEND="docker"
```
## Troubleshooting

### Configuration Not Loading

```shell
# Force a reload by deleting the saved config
rm ~/.esprit/cli-config.json
esprit scan <target>
```

### Environment Variable Precedence

```python
@classmethod
def get(cls, name: str) -> str | None:
    env_name = name.upper()
    default = getattr(cls, name, None)
    return os.getenv(env_name, default)
```

Precedence: Shell environment > `cli-config.json` > defaults
### View Current Config

```shell
cat ~/.esprit/cli-config.json
```
## Next Steps

- **Providers** - Learn about supported LLM providers
- **Runtime Modes** - Choose between Cloud and Docker runtime