Shannon reads its configuration from a `.env` file in the project root.
## Setup

Create your `.env` file from the example:
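Assuming the repository ships a template named `.env.example` (the exact filename may differ in your checkout), the copy step is:

```bash
cp .env.example .env
```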
## Authentication Options

Shannon supports multiple authentication methods. Choose ONE of the following.

### Option 1: Direct Anthropic (Default)

Use the Anthropic API directly with an API key or OAuth token.

**`ANTHROPIC_API_KEY`**

Anthropic API key for Claude access.

- Get your key: https://console.anthropic.com/
- Required unless using OAuth, Bedrock, Vertex AI, or Router mode
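A minimal sketch of the entry in `.env` (the value is a placeholder, not a real key):

```bash
# .env
ANTHROPIC_API_KEY=sk-ant-api03-xxxxxxxx   # placeholder: paste your real key
```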
**`CLAUDE_CODE_OAUTH_TOKEN`**

OAuth token for Claude access (alternative to an API key).

Note: Use either `ANTHROPIC_API_KEY` or `CLAUDE_CODE_OAUTH_TOKEN`, not both.

### Option 2: Router Mode (Multi-Model)
Route requests through `claude-code-router` to use alternative providers. Enable router mode by passing `ROUTER=true` to the start command.

#### OpenAI Provider

**`OPENAI_API_KEY`**

OpenAI API key for GPT model access.
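A sketch of the entry in `.env` (placeholder value):

```bash
# .env
OPENAI_API_KEY=sk-xxxxxxxx   # placeholder
```

Router mode is then enabled at startup, e.g. `ROUTER=true ./shannon` (the script name is taken from the troubleshooting section below; exact arguments depend on your setup).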
Default model routing configuration. Format: `provider,model-name` (for example, `openai,gpt-5-mini`).

#### OpenRouter Provider
**`OPENROUTER_API_KEY`**

OpenRouter API key for multi-provider access. Purpose: access models from multiple providers (Google, Anthropic, etc.) through a single API. Available models: https://openrouter.ai/models
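A sketch of the entry in `.env` (placeholder value):

```bash
# .env
OPENROUTER_API_KEY=sk-or-xxxxxxxx   # placeholder
```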
### Option 3: AWS Bedrock

Use Claude models through AWS Bedrock with bearer token authentication.

Enable AWS Bedrock mode. The following variables are required when it is enabled:

- `AWS_REGION`
- `AWS_BEARER_TOKEN_BEDROCK`
- `ANTHROPIC_SMALL_MODEL` (with Bedrock model ID)
- `ANTHROPIC_MEDIUM_MODEL` (with Bedrock model ID)
- `ANTHROPIC_LARGE_MODEL` (with Bedrock model ID)
**`AWS_REGION`**

AWS region for Bedrock API access. Common regions:

- `us-east-1` (N. Virginia)
- `us-west-2` (Oregon)
- `eu-west-1` (Ireland)
- `ap-southeast-1` (Singapore)
**`AWS_BEARER_TOKEN_BEDROCK`**

AWS Bedrock API bearer token. Get your token: https://aws.amazon.com/blogs/machine-learning/accelerate-ai-development-with-amazon-bedrock-api-keys/
#### Bedrock Model IDs

Bedrock requires region-specific model IDs. Model IDs vary by region; check the AWS Bedrock console for the models available in your region.
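For illustration only: Bedrock model IDs typically carry a region or inference-profile prefix and a version suffix, so the tier variables look roughly like the sketch below. The exact IDs are placeholders here and must be copied from the Bedrock console for your region:

```bash
# .env  (illustrative placeholder IDs; verify in the AWS Bedrock console)
ANTHROPIC_SMALL_MODEL=us.anthropic.claude-haiku-4-5-20251001-v1:0
ANTHROPIC_MEDIUM_MODEL=us.anthropic.claude-sonnet-4-6-v1:0
ANTHROPIC_LARGE_MODEL=us.anthropic.claude-opus-4-6-v1:0
```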
### Option 4: Google Vertex AI

Use Claude models through Google Cloud Vertex AI.

Enable Google Vertex AI mode. The following variables are required when it is enabled:

- `CLOUD_ML_REGION`
- `ANTHROPIC_VERTEX_PROJECT_ID`
- `GOOGLE_APPLICATION_CREDENTIALS`
- `ANTHROPIC_SMALL_MODEL` (with Vertex model ID)
- `ANTHROPIC_MEDIUM_MODEL` (with Vertex model ID)
- `ANTHROPIC_LARGE_MODEL` (with Vertex model ID)
**`CLOUD_ML_REGION`**

Google Cloud region for Vertex AI. Common regions:

- `us-east5` (Columbus)
- `us-central1` (Iowa)
- `europe-west1` (Belgium)
- `asia-southeast1` (Singapore)
**`ANTHROPIC_VERTEX_PROJECT_ID`**

Google Cloud project ID. Find your project ID: GCP Console → Project selector.
**`GOOGLE_APPLICATION_CREDENTIALS`**

Path to the GCP service account key JSON file. Requirements:

- Service account must have the `roles/aiplatform.user` role
- Key file must be inside the `./credentials/` directory (for the Docker mount)
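A sketch assuming the key file was saved as `credentials/service-account.json` (any filename works as long as it lives under `./credentials/`):

```bash
# .env
GOOGLE_APPLICATION_CREDENTIALS=./credentials/service-account.json
```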
#### Vertex AI Model IDs

Vertex AI uses a different model ID format than the direct Anthropic API.

## Model Tier Configuration
Shannon uses three model tiers for different analysis phases. Override the defaults with these variables:

**`ANTHROPIC_SMALL_MODEL`**

Small tier model for lightweight analysis.

- Default: `claude-haiku-4-5-20251001`
- Use cases: quick scans, tool output parsing, simple classification

**`ANTHROPIC_MEDIUM_MODEL`**

Medium tier model for standard analysis.

- Default: `claude-sonnet-4-6`
- Use cases: vulnerability analysis, code review, recon analysis

**`ANTHROPIC_LARGE_MODEL`**

Large tier model for complex analysis and reporting.

- Default: `claude-opus-4-6`
- Use cases: deep code analysis, exploit development, executive reporting

Note: Model tier overrides are required when using Bedrock or Vertex AI. The default model IDs only work with the direct Anthropic API.
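With the direct Anthropic API, the tier variables can be set explicitly; the values below simply repeat the documented defaults:

```bash
# .env  (optional overrides; these match the defaults)
ANTHROPIC_SMALL_MODEL=claude-haiku-4-5-20251001
ANTHROPIC_MEDIUM_MODEL=claude-sonnet-4-6
ANTHROPIC_LARGE_MODEL=claude-opus-4-6
```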
## Runtime Configuration

Maximum output tokens for Claude responses.

- Purpose: ensures complete tool outputs and analysis results aren't truncated
- Recommended: `64000` for larger tool outputs

## Complete Configuration Examples
### Direct Anthropic API (simplest)
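A minimal `.env` sketch (placeholder key):

```bash
ANTHROPIC_API_KEY=sk-ant-api03-xxxxxxxx
```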
### OpenAI with Router
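A `.env` sketch (placeholder key). Router mode itself is enabled by passing `ROUTER=true` to the start command, not in `.env`:

```bash
OPENAI_API_KEY=sk-xxxxxxxx
```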
### OpenRouter (Gemini via unified API)
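A `.env` sketch (placeholder key). The default model routing entry would then reference the model in `provider,model-name` form, e.g. `openrouter,google/gemini-3-flash-preview` (the exact routing variable is not named on this page):

```bash
OPENROUTER_API_KEY=sk-or-xxxxxxxx
```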
### AWS Bedrock (us-east-1)
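A `.env` sketch with a placeholder token and illustrative model IDs (the variable that toggles Bedrock mode is not named on this page, so it is omitted; check your `.env` template):

```bash
AWS_REGION=us-east-1
AWS_BEARER_TOKEN_BEDROCK=xxxxxxxx                                  # placeholder
ANTHROPIC_SMALL_MODEL=us.anthropic.claude-haiku-4-5-20251001-v1:0  # verify in console
ANTHROPIC_MEDIUM_MODEL=us.anthropic.claude-sonnet-4-6-v1:0         # verify in console
ANTHROPIC_LARGE_MODEL=us.anthropic.claude-opus-4-6-v1:0            # verify in console
```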
### Google Vertex AI
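A `.env` sketch. The project ID, key filename, and model version dates are placeholders (Vertex Claude IDs use a `model@version` form; take the real values from your project), and the variable that toggles Vertex mode is not named on this page:

```bash
CLOUD_ML_REGION=us-east5
ANTHROPIC_VERTEX_PROJECT_ID=my-project-id
GOOGLE_APPLICATION_CREDENTIALS=./credentials/service-account.json
ANTHROPIC_SMALL_MODEL=claude-haiku-4-5@YYYYMMDD    # placeholder version
ANTHROPIC_MEDIUM_MODEL=claude-sonnet-4-6@YYYYMMDD  # placeholder version
ANTHROPIC_LARGE_MODEL=claude-opus-4-6@YYYYMMDD     # placeholder version
```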
## Available Models Reference

### OpenAI Models

- `gpt-5.2` - Latest GPT-5 model
- `gpt-5-mini` - Smaller, faster GPT-5 variant

### OpenRouter Models

- `google/gemini-3-flash-preview` - Google Gemini 3 Flash
- Many other models available at https://openrouter.ai/models

### Anthropic Models (Direct API)

- `claude-haiku-4-5-20251001` - Fast, lightweight
- `claude-sonnet-4-6` - Balanced performance
- `claude-opus-4-6` - Most capable
## Validation and Errors

### Missing authentication

Solution: Add one of the authentication methods to `.env`.

### Bedrock missing credentials

Solution: Set all required Bedrock variables in `.env`.

### Vertex AI missing credentials

Solution: Set all required Vertex AI variables in `.env`.

### Vertex AI service account key missing

Solution:

1. Download the service account JSON key from the GCP Console
2. Place it in the `./credentials/` directory
3. Update the `GOOGLE_APPLICATION_CREDENTIALS` path in `.env`

### Router mode without provider

Solution: Add a provider API key to `.env` when using `ROUTER=true`.

## Security Best Practices
### Service account permissions

Google Cloud (Vertex AI):

- Minimum role: `roles/aiplatform.user`
- Don't use overly permissive roles
- Rotate service account keys regularly

### AWS Bedrock tokens

- Use bearer tokens with appropriate IAM policies
- Limit token scope to Bedrock API access only
- Monitor usage in AWS CloudWatch

### File permissions
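This page does not spell out the commands; a common convention (an assumption, not a requirement stated here) is to make secret-bearing files owner-readable only:

```bash
chmod 600 .env
chmod 600 ./credentials/*.json
```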
## Troubleshooting

### Variables not loading

Symptom: Error about a missing API key even though `.env` exists.

Check:

- `.env` file is in the project root (same directory as the `shannon` script)
- No syntax errors in `.env` (no spaces around `=`)
- No quotes around values (unless the value contains spaces)
### Model tier not working

Symptom: Using default models even though overrides are set.

Check:

- Model ID format matches your provider (Bedrock vs. Vertex vs. direct API)
- No typos in model names
- Model is available in your region/project
### Router not starting

Symptom: Timeout or connection errors with `ROUTER=true`.

Check:

- Provider API key is set (`OPENAI_API_KEY` or `OPENROUTER_API_KEY`)
- Router container is running: `docker compose ps router`
- Router logs: `docker compose logs router`