Ask natural language questions about your Kubernetes cluster using AI. The AI analyzes your question, executes the necessary kubectl operations, and provides a comprehensive markdown-formatted response.
## Usage

```shell
clanker k8s ask [question] [flags]
```
## Arguments

- `question`: Natural language question about your Kubernetes cluster
## Flags

- Kubernetes cluster name (EKS or GKE cluster name)
- AWS profile for EKS clusters
- Path to kubeconfig file (default: `~/.kube/config`)
- kubectl context to use (overrides `--cluster`)
- Default namespace for queries (default: all namespaces)
- AI profile to use for LLM queries
- Use GKE cluster instead of EKS
- GCP project ID for GKE clusters
- GCP region for GKE clusters
## How it works

The `k8s ask` command uses a three-stage AI pipeline:

1. **Analysis**: the AI analyzes your question and determines which kubectl operations are needed
2. **Execution**: the necessary kubectl commands are executed against your cluster
3. **Response**: the AI synthesizes the results into a comprehensive, markdown-formatted answer
Conversation history is maintained per cluster, allowing for follow-up questions that reference previous context.
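A rough sketch of the three stages, using simulated kubectl output (the pod names, namespaces, and statuses below are invented for illustration; no cluster is contacted):

```shell
# Stage 1 (analysis): a question like "how many pods are running" might map
# to a command such as: kubectl get pods --all-namespaces --no-headers
# Stage 2 (execution): produces raw status lines, simulated here:
pods=$(printf '%s\n' \
  'default       web-7d9f        Running' \
  'kube-system   coredns-abc     Running' \
  'monitoring    prometheus-0    Pending')
# Stage 3 (response): summarize the results into an answer:
running=$(printf '%s\n' "$pods" | grep -c 'Running')
echo "Running pods: $running"
```

The real pipeline does this reasoning with an LLM rather than fixed shell logic, but the flow of question, kubectl output, and summarized answer is the same.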
## Examples

```shell
clanker k8s ask "how many pods are running"
```

```markdown
## Running Pods

Your cluster currently has **12 pods** running across all namespaces:

- **default**: 3 pods
- **kube-system**: 8 pods
- **monitoring**: 1 pod

All pods are in a healthy state.
```
### Query specific resources

```shell
clanker k8s ask --cluster test-cluster --profile myaws "show me all deployments"
```
```shell
clanker k8s ask "which pods are using the most memory"
```

```markdown
## Top Memory Consumers

Here are the pods using the most memory in your cluster:

1. **redis-master-0** (default): 512Mi
2. **postgres-primary-0** (database): 384Mi
3. **nginx-ingress-controller** (kube-system): 256Mi

The redis-master pod is consuming the most memory at 512Mi.
```
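Behind an answer like this is an ordering step over per-pod memory figures. A minimal sketch with simulated `kubectl top pods`-style data (the values are invented; a live cluster supplies real numbers via metrics-server):

```shell
# Simulated pod-name / memory(Mi) pairs, not real cluster data:
usage=$(printf '%s\n' \
  'redis-master-0 512' \
  'postgres-primary-0 384' \
  'nginx-ingress-controller 256')
# Sort numerically on the memory column, highest first:
heaviest=$(printf '%s\n' "$usage" | sort -k2 -nr | head -n 1)
echo "Top consumer: $heaviest"
```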
### Troubleshooting

```shell
clanker k8s ask "why is my pod crashing"
```

```markdown
## Pod Crash Analysis

I found a pod in CrashLoopBackOff state:

**Pod**: my-app-abc123-xyz
**Namespace**: default
**Restarts**: 5

### Recent Logs
```
### Health checks

```shell
clanker k8s ask "tell me the health of my cluster"
```

```markdown
## Cluster Health Report

### Overall Status: Healthy ✓

**Nodes**: 3/3 Ready

- ip-10-0-1-100: Ready (CPU: 12%, Memory: 38%)
- ip-10-0-1-101: Ready (CPU: 9%, Memory: 29%)
- ip-10-0-1-102: Ready (CPU: 15%, Memory: 42%)

**Pods**: 45/45 Running

- 0 Pending
- 0 Failed
- 0 CrashLoopBackOff

**System Components**:

- CoreDNS: Healthy
- kube-proxy: Running on all nodes
- CNI: Operational

Your cluster is operating normally with no issues detected.
```
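The "3/3 Ready" style of summary reduces to counting node states. A sketch with simulated node status lines (a real report would be built from `kubectl get nodes` output):

```shell
# Simulated node-name / status pairs, not real cluster data:
nodes=$(printf '%s\n' \
  'ip-10-0-1-100 Ready' \
  'ip-10-0-1-101 Ready' \
  'ip-10-0-1-102 Ready')
ready=$(printf '%s\n' "$nodes" | grep -c 'Ready')
total=$(printf '%s\n' "$nodes" | wc -l | tr -d ' ')
echo "Nodes: $ready/$total Ready"
```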
### GKE cluster queries

```shell
clanker k8s ask --gcp --gcp-project my-project --cluster my-gke-cluster "show me all pods"
```
### Follow-up questions

The conversation history allows natural follow-ups:

```shell
clanker k8s ask "how many nodes are in the cluster"
# Response: "Your cluster has 3 nodes..."

clanker k8s ask "what about their CPU usage"
# Response references previous context about the 3 nodes
```
### Error investigation

```shell
clanker k8s ask "give me error logs for nginx pod"
```
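Filtering logs for errors is a simple pattern match over pod log lines. A sketch with simulated log output (the messages are invented; real lines would come from `kubectl logs <pod>`):

```shell
# Simulated log lines, not real nginx output:
logs=$(printf '%s\n' \
  '2024-05-01T10:00:01Z info  starting nginx' \
  '2024-05-01T10:00:02Z error upstream timed out')
# Count lines containing "error", case-insensitively:
error_count=$(printf '%s\n' "$logs" | grep -ci 'error')
echo "Error lines found: $error_count"
```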
### Resource recommendations

```shell
clanker k8s ask "are any pods over-utilizing resources"
```
The AI maintains conversation history per cluster, so you can ask follow-up questions that reference previous queries. For example, after asking "show me all pods", you can ask "which one is using the most CPU" without re-specifying the context.
## Requirements

The `k8s ask` command requires:

- A valid kubeconfig or access to the specified cluster
- An AI provider configured in your Clanker settings
- For EKS: AWS credentials with appropriate permissions
- For GKE: GCP credentials with appropriate permissions
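A hypothetical pre-flight check for the first requirement (the path below is the standard kubectl default, not a Clanker-specific setting):

```shell
# Check for a kubeconfig at the usual location before asking questions:
kubeconfig="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$kubeconfig" ]; then
  echo "kubeconfig found at $kubeconfig"
else
  echo "no kubeconfig at $kubeconfig; specify a kubeconfig path or context"
fi
```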