This guide will get you from zero to your first working query in minutes. We’ll set up a minimal configuration and run your first infrastructure query.

Prerequisites

Before starting, make sure you have:
  • Clanker installed (installation guide)
  • An OpenAI API key or Gemini API key
  • AWS CLI configured (for AWS queries) or kubectl configured (for Kubernetes queries)

Your first query

The fastest way to get started is with a simple query using environment variables:
1. Set your API key

Clanker works with multiple AI providers. Choose one:
export OPENAI_API_KEY="sk-..."
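Or, if you'd rather use Gemini, set the Gemini key instead (this is the same variable referenced by `api_key_env` in the config example later in this guide; the value shown is a placeholder):

```shell
# Alternative: use a Gemini API key instead of OpenAI
export GEMINI_API_KEY="your-gemini-key"
```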
2. Run your first query

Try a simple test query:
clanker ask "what is the current time?"
Without a config file, Clanker defaults to OpenAI’s gpt-5 model.
3. Try an infrastructure query

If you have AWS configured, ask about your infrastructure:
clanker ask --aws "what ec2 instances are running?"
Or for Kubernetes:
clanker k8s ask "how many pods are running?"

Set up a configuration file

For a better experience, create a configuration file:
1. Initialize config

Clanker can generate a template config file:
clanker config init
Or copy the example manually:
cp .clanker.example.yaml ~/.clanker.yaml
2. Edit the config

Open ~/.clanker.yaml and configure your AI provider:
ai:
  default_provider: gemini-api
  providers:
    gemini-api:
      model: gemini-2.5-flash
      api_key_env: GEMINI_API_KEY
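If you're using OpenAI instead, the provider entry might look like this (a sketch by analogy with the Gemini block above: the `openai` profile name matches the `--ai-profile openai` example later in this guide, and the model name follows the default noted earlier):

```yaml
ai:
  default_provider: openai
  providers:
    openai:
      model: gpt-5
      api_key_env: OPENAI_API_KEY
```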
3. Configure AWS (if using AWS features)

Add your AWS profiles to the config:
infra:
  default_provider: aws
  default_environment: dev
  
  aws:
    environments:
      dev:
        profile: your-aws-profile
        region: us-east-1
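Multiple environments can sit side by side under `environments`. For example, a hypothetical `production` entry alongside `dev` (profile names and regions here are placeholders):

```yaml
  aws:
    environments:
      dev:
        profile: your-aws-profile
        region: us-east-1
      production:
        profile: your-prod-profile
        region: us-east-1
```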
Make sure your AWS profile is configured:
aws configure --profile your-aws-profile
aws sts get-caller-identity --profile your-aws-profile

Common query patterns

Now that you’re set up, try these common queries:

AWS infrastructure

clanker ask "what lambda functions do we have?"
clanker ask "show me lambda functions with high error rates"

Kubernetes

clanker k8s ask "how many pods are running?"
clanker k8s ask "show me pods that are not running"
clanker k8s ask "which pods are using the most memory?"

GitHub

clanker ask --github "what pull requests are open?"
clanker ask --github "show me GitHub Actions workflow status"

Follow-up questions

Clanker maintains conversation context, so you can ask follow-up questions:
# Initial query
clanker ask "show me the nginx deployment"

# Follow-up (no need to repeat context)
clanker k8s ask "now show me its logs"
clanker k8s ask "how many replicas does it have?"

Using different AI profiles

If you have multiple AI providers configured, switch between them:
# Use OpenAI for this query
clanker ask --ai-profile openai "what lambdas do we have?"

# Use Gemini for this query
clanker ask --ai-profile gemini-api "show me ec2 instances"

Override AWS profiles

You can override the default AWS profile for specific queries:
# Use a different profile
clanker ask --profile production "what ec2 instances are running?"

# Force AWS context
clanker ask --aws --profile dev "list all lambda functions"

Debug mode

When things aren’t working as expected, enable debug mode:
clanker ask "what ec2 instances are running" --aws --debug
This shows:
  • Selected tools and operations
  • AWS CLI calls being made
  • Prompt sizes and LLM operations
  • Response generation details

Agent trace mode

For detailed coordinator and agent lifecycle logs:
clanker ask --agent-trace --profile dev "how can i create an additional lambda?"

View your configuration

Verify your configuration at any time:
clanker config show

Next steps

  • Configuration: Deep dive into all configuration options, AI providers, and multi-cloud setup
  • AWS commands: Learn about AWS-specific features and the maker mode for infrastructure changes
  • Kubernetes: Explore Kubernetes cluster management, deployments, and troubleshooting
  • Debugging: Debug mode and troubleshooting

Clanker is read-only by default. To make infrastructure changes, you need to use --maker mode, which generates reviewable plans before applying changes.
