Clanker provides comprehensive debugging capabilities to help you understand what’s happening under the hood. Use debug flags to see tool selection, AWS CLI calls, prompt sizes, agent lifecycle, and more.

Debug flags

Global debug mode

Enable debug output for all commands:
clanker ask "what ec2 instances are running" --debug
Or set in config:
.clanker.yaml
debug: true
When enabled, you’ll see:
  • Config file location
  • AWS profile and region resolution
  • Tool selection decisions
  • AWS CLI command invocations
  • HTTP request/response details
  • Prompt token counts
  • LLM provider and model
See cmd/root.go:35 for the global debug flag.

Agent trace mode

Enable detailed agent lifecycle logging:
clanker ask "show lambda errors" --agent-trace
Or set in config:
.clanker.yaml
agent:
  trace: true
Agent trace output includes:
  • Decision tree analysis results
  • Applicable nodes and priorities
  • Execution order groups
  • Agent start/completion events
  • Dependency satisfaction checks
  • Operation execution details
  • Aggregation statistics
Example output:
🌳 Decision tree analysis: 3 applicable nodes found
  📊 Node: Lambda Error Analysis (priority: 9, agents: [log, infrastructure])
  📊 Node: Service Health Check (priority: 7, agents: [metrics])
  📊 Node: General Investigation (priority: 5, agents: [k8s])
📊 Executing order group 1 with 3 agents
  ✨ Started log agent (ID: log-1234) with dependencies
  ✨ Started metrics agent (ID: metrics-5678) with dependencies
  ✨ Started k8s agent (ID: k8s-9012) with dependencies
🤖 Agent log-1234 (log) executing 2 operations
✅ Agent log-1234 completed operation: investigate_service_logs
✅ Agent log completed, provided data: [logs error_patterns log_metrics]
✅ Order group 1 completed
🎉 All 3 agents completed (3 successful, 0 failed)
See internal/agent/coordinator/coordinator.go:17 for the trace check.

Component-specific debugging

AWS operations

When debug mode is enabled, AWS operations show:
🔧 AWS Operation: list_lambda_functions
📋 Parameters: {}
💻 Executing AWS CLI: aws lambda list-functions --output json --region us-east-1
✅ AWS operation completed (1.2s)
For parallel operations:
🚀 Starting 5 parallel operations with concurrency limit: 3
  ✨ Starting operation: describe_ec2_instances
  ✨ Starting operation: list_lambda_functions
  ✨ Starting operation: describe_ecs_services
⏳ Waiting for parallel operations (5 total)...
✅ Parallel operations completed: 5 successful, 0 failed (3.4s)
See internal/aws/llm.go:93 and internal/aws/parallel.go:19.
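Because these debug lines are plain text on stderr, standard tools can pull out the commands and timings after a run. A minimal sketch — the sample log content is inlined here so it runs without clanker installed; in practice you would capture it with `clanker ask "..." --debug 2> debug.log`:

```shell
# Inline a sample of the debug output shown above (stand-in for a real capture).
cat > /tmp/clanker-debug.log <<'EOF'
💻 Executing AWS CLI: aws lambda list-functions --output json --region us-east-1
✅ AWS operation completed (1.2s)
✅ Parallel operations completed: 5 successful, 0 failed (3.4s)
EOF

# List the exact AWS CLI invocations...
grep -o 'aws .*' /tmp/clanker-debug.log

# ...and the reported durations.
grep -o '([0-9.]*s)' /tmp/clanker-debug.log
```

The extracted CLI line can be re-run directly with your own profile to reproduce a failure outside clanker.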

Backend API calls

When using backend credential storage, debug mode shows HTTP requests:
[backend] GET /api/v1/cli/credentials/aws
[backend] Using AWS credentials from backend
See internal/backend/client.go:73.

Routing decisions

See which agent handles your query:
Keyword inference: AWS=true, GitHub=false, Terraform=false, K8s=false, GCP=false
[routing] Ambiguous query detected, using LLM for classification...
LLM override: AWS=true, K8s=false, GCP=false, Azure=false, Cloudflare=false
For programmatic routing inspection:
clanker ask "show pod status" --route-only
Output:
{
  "agent": "k8s",
  "reason": "Query mentions kubernetes resources (pod)"
}
See cmd/ask.go:105.
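The JSON emitted by --route-only is easy to consume in scripts. A small sketch using the sample output above, inlined so it runs without clanker installed (jq -r .agent works just as well if jq is available):

```shell
# Sample --route-only output, hard-coded here as a stand-in for a real run.
route='{
  "agent": "k8s",
  "reason": "Query mentions kubernetes resources (pod)"
}'

# Pull out the agent name with sed.
printf '%s\n' "$route" | sed -n 's/.*"agent": "\([^"]*\)".*/\1/p'
# prints: k8s
```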

Kubernetes operations

K8s agent debugging shows:
[k8s] Using cluster: production (type: eks)
[k8s] Updating kubeconfig for EKS cluster...
[k8s] Executing kubectl: get pods --all-namespaces -o json
[k8s] Found 47 pods across 8 namespaces
See internal/k8s/llm.go:40.

IAM operations

IAM agent debugging includes:
[iam] Analyzing role: arn:aws:iam::123456789012:role/LambdaExecutionRole
[iam] Found 3 attached policies
[iam] Checking trust relationships...
See internal/iam/llm.go:35.

Debug output examples

Full debug trace

clanker ask "list s3 buckets" --aws --debug
Using config file: /Users/you/.clanker.yaml
🤖 Agent starting investigation of query: list s3 buckets
🧠 Semantic Analysis: Intent=query (85.0% confidence), Urgency=low, Services=[s3]
🎯 Maximum investigation steps: 3
🌳 Decision tree analysis: 2 applicable nodes found
  📊 Node: S3 Operations (priority: 8, agents: [infrastructure])
  📊 Node: General AWS Query (priority: 5, agents: [log])
📊 Executing order group 2 with 1 agents
  ✨ Started infrastructure agent (ID: infrastructure-abc123) with dependencies
🤖 Agent infrastructure-abc123 (infrastructure) executing 1 operations
🔧 AWS Operation: list_s3_buckets
💻 Executing AWS CLI: aws s3api list-buckets --output json
✅ AWS operation completed (0.8s)
✅ Agent infrastructure-abc123 completed operation: list_s3_buckets
✅ Agent infrastructure completed, provided data: [service_config deployment_status resource_health]
✅ Order group 2 completed
🎉 All 1 agents completed (1 successful, 0 failed)

🤖 Creating LLM request with provider: gemini-api, model: gemini-2.5-flash
📊 Context size: 1,247 tokens
💬 User prompt size: 18 tokens

Your S3 buckets:

1. **prod-data-bucket** (Created: 2024-01-15)
2. **staging-assets** (Created: 2024-02-20)
3. **logs-archive** (Created: 2023-11-05)

Maker plan generation

clanker ask "create a lambda function" --maker --debug
[maker] provider=aws (default)
🤖 Generating maker plan with provider: openai, model: gpt-5
📊 Plan prompt size: 3,421 tokens
💬 Sending plan generation request...
✅ Plan generated successfully (2.1s)
📋 Plan validation: version=1, commands=4, provider=aws

Credential resolution

clanker ask "test" --gcp --debug
[backend] GET /api/v1/cli/credentials/gcp
[backend] Using GCP credentials from backend
Successfully created GCP client with project: my-project-123

Common debug scenarios

Why isn’t my query using the right agent?

Check routing with --route-only:
clanker ask "check cloudflare dns" --route-only
If the wrong agent is selected, use an explicit flag:
clanker ask "check cloudflare dns" --cloudflare --debug

Why are AWS operations failing?

Enable debug mode to see the exact CLI commands:
clanker ask "list ec2 instances" --aws --debug 2>&1 | grep "aws "
Example output:
💻 Executing AWS CLI: aws ec2 describe-instances --region us-east-1 --output json
Test the command directly:
aws ec2 describe-instances --region us-east-1 --output json --profile your-profile

Which AI provider is being used?

Debug mode shows provider resolution:
🤖 Creating LLM request with provider: gemini-api, model: gemini-2.5-flash
Override with flags:
clanker ask "test" --ai-profile openai --openai-key "$OPENAI_API_KEY" --debug

Are my credentials being loaded from backend?

Look for backend messages:
[backend] GET /api/v1/cli/credentials/aws
[backend] Using AWS credentials from backend
If missing:
[backend] No AWS credentials available (unauthorized: invalid API key), falling back to local
Verify your API key:
clanker credentials list --api-key "$CLANKER_BACKEND_API_KEY" --debug

Why is agent trace not showing?

Agent trace requires an explicit flag or config setting:
# With flag
clanker ask "test" --agent-trace

# Or in config
printf 'agent:\n  trace: true\n' >> ~/.clanker.yaml
Agent trace is separate from --debug. You typically want both:
clanker ask "show errors" --debug --agent-trace
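In config form, both switches can live in the same file; a .clanker.yaml fragment combining the two snippets shown earlier:

```yaml
# ~/.clanker.yaml
debug: true       # same as --debug
agent:
  trace: true     # same as --agent-trace
```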

Debugging configuration

Verify config loading

clanker ask "test" --debug 2>&1 | head -1
Expected:
Using config file: /Users/you/.clanker.yaml
If not found:
cp .clanker.example.yaml ~/.clanker.yaml

Check profile resolution

clanker profiles
Output:
Available AWS Profiles (default: dev):

  dev (default)
    Region: us-east-1
    Description: Development environment

  prod
    Region: us-east-1
    Description: Production environment
See cmd/profiles.go:10.

Inspect AI provider config

grep -A 5 "ai:" ~/.clanker.yaml
Example:
ai:
  default_provider: gemini-api
  providers:
    gemini-api:
      model: gemini-2.5-flash
      api_key_env: GEMINI_API_KEY

Performance debugging

Measure operation timing

Debug mode includes operation duration:
✅ AWS operation completed (1.2s)
✅ Parallel operations completed: 5 successful, 0 failed (3.4s)
🎉 All 3 agents completed (3 successful, 0 failed)

Check prompt token counts

📊 Context size: 1,247 tokens
💬 User prompt size: 18 tokens
Large contexts slow down LLM responses. Consider scoping your query:
# Instead of:
clanker ask "show everything" --discovery

# Use:
clanker ask "show lambda errors"
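The token lines can also be scraped from a saved debug log to track context growth across runs. A minimal sketch — the sample log is inlined (with the numbers from the example above) so it runs without clanker installed:

```shell
# Sample token lines, inlined from the example above.
cat > /tmp/clanker-debug.log <<'EOF'
📊 Context size: 1,247 tokens
💬 User prompt size: 18 tokens
EOF

# Total tokens sent: strip the thousands separators, then sum.
grep -o '[0-9][0-9,]* tokens' /tmp/clanker-debug.log | tr -d ',' | awk '{total += $1} END {print total}'
# prints: 1265
```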

Monitor parallel execution

With --agent-trace, see how many agents run concurrently:
📊 Executing order group 1 with 3 agents
  ✨ Started log agent (ID: log-1234) with dependencies
  ✨ Started metrics agent (ID: metrics-5678) with dependencies
  ✨ Started k8s agent (ID: k8s-9012) with dependencies
All three agents in order group 1 execute in parallel, reducing total time.

Environment variables

These environment variables affect debug output:
| Variable | Effect |
| --- | --- |
| CLANKER_DEBUG=1 | Same as the --debug flag |
| AWS_PROFILE | Override the default AWS profile |
| AWS_REGION | Override the default AWS region |
| GEMINI_API_KEY | Gemini API key (shown as *** in debug output) |
| OPENAI_API_KEY | OpenAI API key (shown as *** in debug output) |
| CLANKER_BACKEND_API_KEY | Backend API key for credential storage |
Debug mode never prints full API keys or credentials.

Troubleshooting tips

If debug output is not appearing, check whether it is being buffered. Force a flush:
clanker ask "test" --debug 2>&1 | cat
Or check for errors in stderr:
clanker ask "test" 2>&1
If a command is failing, debug mode shows the exact invocation. Test it manually:
clanker ask "list ec2" --debug 2>&1 | grep "Executing AWS CLI"
# Copy the command and run it
aws ec2 describe-instances --region us-east-1 --profile dev
Common issues:
  • Wrong profile or region
  • Expired credentials
  • Missing IAM permissions
Use --agent-trace to see decision tree evaluation:
clanker ask "your query" --agent-trace
Look for “Decision tree analysis: 0 applicable nodes” — this means no patterns matched. Try:
  • More specific keywords (“lambda errors” vs “show me stuff”)
  • Explicit flags (--aws, --cloudflare, etc.)
If responses are slow, check the operation timing in debug output:
✅ AWS operation completed (15.2s)  ← Slow!
Possible causes:
  • Large AWS resource counts (many EC2s, logs, etc.)
  • Slow AWS API responses
  • Network latency
Optimize with scoped queries:
# Instead of:
clanker ask "show all resources" --discovery

# Use:
clanker ask "show lambda functions in production"

Debug log locations

Clanker writes debug output to stderr, normal responses to stdout:
# Save debug logs
clanker ask "test" --debug 2> debug.log

# Save response only
clanker ask "test" > response.txt

# Save both
clanker ask "test" --debug > response.txt 2> debug.log
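The stream split itself can be demonstrated without clanker installed. In the sketch below, fake_clanker is a stand-in function (not part of clanker) that mimics the behavior: debug lines on stderr, the answer on stdout.

```shell
# Stand-in for clanker: debug text to stderr, the answer to stdout.
fake_clanker() {
  echo "Using config file: /Users/you/.clanker.yaml" >&2
  echo "Your S3 buckets: ..."
}

fake_clanker > /tmp/response.txt 2> /tmp/debug.log
cat /tmp/response.txt   # contains only the answer
cat /tmp/debug.log      # contains only the debug line
```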
