Longshot is configured entirely through environment variables in a .env file.

Setup

Create a .env file in your project directory:
cp .env.example .env
Then edit the file with your configuration values.

Required Configuration

These variables must be set for Longshot to run:

LLM Provider

LLM_BASE_URL
string
required
Base URL for an OpenAI-compatible LLM API. Examples:
  • OpenAI: https://api.openai.com/v1
  • Anthropic (via proxy): https://api.anthropic.com/v1
  • Azure OpenAI: https://YOUR_RESOURCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT
  • Local (Ollama): http://localhost:11434/v1
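Like the other variables on this page, this goes in your .env file; for example, pointing at OpenAI:

```shell
# Any OpenAI-compatible endpoint works; OpenAI shown here
LLM_BASE_URL=https://api.openai.com/v1
```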
LLM_API_KEY
string
required
API key for the LLM provider.
LLM_API_KEY=sk-your-api-key-here

Git Configuration

GIT_REPO_URL
string
required
URL of the target repository that workers clone and commit to.
GIT_REPO_URL=https://github.com/your-org/your-repo.git
GIT_TOKEN
string
required
GitHub Personal Access Token with push access to the target repository. Required permissions: repo (full control of private repositories)
GIT_TOKEN=ghp_your-github-token-here

LLM Configuration

Basic Settings

LLM_MODEL
string
default:"gpt-4o"
Model name to pass to the API. Any model accessible via your LLM_BASE_URL works. Examples: gpt-5.3, gpt-4o, claude-opus-4-20250514, claude-sonnet-4-20250514
LLM_MAX_TOKENS
number
default:"65536"
Maximum tokens for LLM responses.
LLM_TEMPERATURE
number
default:"0.7"
Sampling temperature for the LLM.
  • 0.0 = deterministic
  • 1.0 = creative
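Taken together, a basic LLM block in .env might look like this (the values shown are the documented defaults):

```shell
# Basic LLM settings -- documented defaults
LLM_MODEL=gpt-4o
LLM_MAX_TOKENS=65536
LLM_TEMPERATURE=0.7
```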

Advanced LLM Settings

LLM_ENDPOINTS
JSON array
JSON array of endpoints for load balancing across multiple providers. Overrides LLM_BASE_URL and LLM_API_KEY when set.
LLM_ENDPOINTS=[{"baseUrl": "https://api.openai.com/v1", "apiKey": "sk-..."}, {"baseUrl": "https://api.anthropic.com/v1", "apiKey": "sk-ant-..."}]
LLM_TIMEOUT_MS
number
Timeout for individual LLM requests, in milliseconds. If unset, requests never time out.
LLM_READINESS_TIMEOUT_MS
number
default:"120000"
How long to wait for LLM endpoint readiness on startup (milliseconds).
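For example, you might cap each request at 60 seconds while keeping the default readiness wait; the 60000 here is illustrative, not a recommendation:

```shell
# Fail individual LLM requests after 60 seconds (example value)
LLM_TIMEOUT_MS=60000
# Wait up to 2 minutes for endpoint readiness on startup (the default)
LLM_READINESS_TIMEOUT_MS=120000
```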

Worker Configuration

MAX_WORKERS
number
default:"50"
Maximum number of workers that can run in parallel.
Higher values increase throughput but also increase LLM API costs and Modal compute costs.
WORKER_TIMEOUT
number
default:"1800"
Worker timeout in seconds (default: 30 minutes). Workers exceeding this timeout will be terminated and marked as failed.
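A smaller deployment might lower the worker cap to limit API and compute spend while keeping the default timeout; the 10 below is illustrative:

```shell
# Fewer parallel workers = lower cost, lower throughput (example value)
MAX_WORKERS=10
# Allow each worker up to 30 minutes (the default)
WORKER_TIMEOUT=1800
```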

Sandbox Configuration

Longshot uses Modal for cloud sandboxes. Configure sandbox resources:
SANDBOX_CPU_CORES
number
default:"4"
CPU cores per Modal sandbox.
SANDBOX_MEMORY_MB
number
default:"8192"
Memory per Modal sandbox in megabytes (default: 8GB).
SANDBOX_IDLE_TIMEOUT
number
default:"300"
Idle timeout before sandbox is terminated (seconds).
SANDBOX_IMAGE_TAG
string
default:"latest"
Docker image tag for sandboxes.
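Putting the sandbox settings together, with their documented defaults:

```shell
# Modal sandbox resources (documented defaults)
SANDBOX_CPU_CORES=4
SANDBOX_MEMORY_MB=8192
SANDBOX_IDLE_TIMEOUT=300
SANDBOX_IMAGE_TAG=latest
```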

Additional Git Settings

GIT_MAIN_BRANCH
string
default:"main"
Primary branch name in the target repository.
GIT_BRANCH_PREFIX
string
default:"worker/"
Prefix for worker branches. Workers create branches like worker/task-1.
GIT_COMMIT_NAME
string
default:"Longshot Bot"
Git identity name for commits made by Longshot.
GIT_COMMIT_EMAIL
string
Git identity email for commits made by Longshot.
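For instance, to group worker branches under a custom prefix and attribute commits to a bot identity (the prefix, name, and email below are placeholders):

```shell
GIT_MAIN_BRANCH=main
GIT_BRANCH_PREFIX=longshot/
GIT_COMMIT_NAME=Longshot Bot
# Placeholder address -- substitute your own
GIT_COMMIT_EMAIL=bot@example.com
```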

Orchestrator Configuration

MERGE_STRATEGY
string
default:"rebase"
Merge strategy for incorporating worker changes. Options:
  • fast-forward - Fast-forward only (fails if not possible)
  • rebase - Rebase worker branch onto main
  • merge-commit - Create merge commit
HEALTH_CHECK_INTERVAL
number
default:"10"
Reconciler health check interval in seconds. The reconciler monitors build health and spawns fix tasks when issues are detected.
FINALIZATION_ENABLED
boolean
default:"true"
Run a build/test sweep after all tasks complete. When enabled, Longshot runs a final verification pass and fixes any remaining issues.
FINALIZATION_MAX_ATTEMPTS
number
default:"3"
Maximum reconciler fix attempts during finalization.
FINALIZATION_SWEEP_TIMEOUT_MS
number
default:"120000"
Timeout for finalization sweep in milliseconds.
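An orchestrator block using the documented defaults looks like:

```shell
# Orchestrator settings (documented defaults)
MERGE_STRATEGY=rebase
HEALTH_CHECK_INTERVAL=10
FINALIZATION_ENABLED=true
FINALIZATION_MAX_ATTEMPTS=3
FINALIZATION_SWEEP_TIMEOUT_MS=120000
```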

Local Development

TARGET_REPO_PATH
string
default:"./target-repo"
Path to the local clone of the target repository.
PYTHON_PATH
string
default:"python3"
Python executable path.
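For example, pointing Longshot at a clone outside the project directory and at a virtualenv interpreter; both paths are illustrative:

```shell
# Example paths -- adjust to your local layout
TARGET_REPO_PATH=../checkouts/web-app
PYTHON_PATH=.venv/bin/python
```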

Logging

LOG_LEVEL
string
default:"info"
Log level for output. Options: debug, info, warn, error
Prefer the --debug CLI flag over setting this manually.
PORT
number
default:"8787"
MCP server port.
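For example, moving the MCP server off the default port (8080 is illustrative):

```shell
# Prefer the --debug CLI flag for debug output
LOG_LEVEL=info
# Use a different port if 8787 is already taken (example value)
PORT=8080
```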

Example Configuration

Here’s a complete example .env file:
# LLM Provider
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-proj-...
LLM_MODEL=gpt-4o
LLM_MAX_TOKENS=65536
LLM_TEMPERATURE=0.7

# Git
GIT_REPO_URL=https://github.com/acme/web-app.git
GIT_TOKEN=ghp_...
GIT_MAIN_BRANCH=main
GIT_BRANCH_PREFIX=longshot/
GIT_COMMIT_NAME=Longshot AI
GIT_COMMIT_EMAIL=[email protected]

# Workers
MAX_WORKERS=30
WORKER_TIMEOUT=1800

# Sandboxes
SANDBOX_CPU_CORES=4
SANDBOX_MEMORY_MB=8192
SANDBOX_IDLE_TIMEOUT=300

# Orchestrator
MERGE_STRATEGY=rebase
HEALTH_CHECK_INTERVAL=10
FINALIZATION_ENABLED=true
FINALIZATION_MAX_ATTEMPTS=3

# Logging
LOG_LEVEL=info

Next Steps

First Project

Build your first project with Longshot

Architecture

Understand how Longshot works under the hood
