Clanker provides comprehensive Kubernetes cluster management across EKS, GKE, and kubeadm. You can create clusters, deploy applications, and query resources using natural language.
- Amazon EKS: Managed Kubernetes on AWS
- Google GKE: Managed Kubernetes on GCP
- Kubeadm: Self-managed clusters on EC2/GCE instances
Create your first EKS cluster
Create an Amazon EKS cluster with worker nodes:

```shell
clanker k8s create eks my-cluster --nodes 2 --node-type t3.small
```
Clanker will:
- Generate an execution plan
- Show estimated costs and resources
- Prompt for confirmation
- Create the EKS cluster and node group
- Update your kubeconfig automatically
Example output:

```
=== Kubernetes Cluster Plan ===
Cluster: my-cluster (EKS)
Region: us-east-1
Kubernetes Version: 1.29

Resources:
- EKS Cluster Control Plane
- Managed Node Group (2 nodes)
- Node Type: t3.small
- VPC with public/private subnets
- Security groups
- IAM roles (cluster, node)

Estimated monthly cost: ~$103
- EKS control plane: $73/month
- 2x t3.small nodes: ~$30/month

Do you want to create this cluster? [y/N]: y

[eks] creating cluster control plane...
[eks] creating managed node group...
[eks] waiting for nodes to be ready...

=== Cluster Created Successfully ===
Name: my-cluster
Status: ACTIVE
Endpoint: https://ABC123.eks.us-east-1.amazonaws.com
Version: 1.29

Nodes:
- ip-10-0-1-45.ec2.internal (Ready)
- ip-10-0-2-67.ec2.internal (Ready)

Kubeconfig: ~/.kube/config

Next steps:
kubectl get nodes
kubectl get pods -A
```
Use --plan to preview the cluster configuration without creating it:

```shell
clanker k8s create eks my-cluster --nodes 3 --plan
```
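The plan/confirm/apply flow above follows a standard pattern: build a plan, surface its cost, and only proceed on an explicit "y". A minimal sketch with hypothetical helpers and illustrative prices (not Clanker's actual internals):

```python
# Sketch of a plan/confirm loop (hypothetical names, not Clanker's internals).
from dataclasses import dataclass

@dataclass
class ClusterPlan:
    name: str
    nodes: int
    node_type: str
    monthly_cost: float  # estimated USD

def build_plan(name: str, nodes: int, node_type: str) -> ClusterPlan:
    # Illustrative pricing: flat control-plane fee plus a per-node estimate.
    control_plane = 73.0
    per_node = 15.0
    return ClusterPlan(name, nodes, node_type, control_plane + nodes * per_node)

def confirm(answer: str) -> bool:
    # Mirrors the "[y/N]" prompt: only an explicit "y" proceeds.
    return answer.strip().lower() == "y"

plan = build_plan("my-cluster", 2, "t3.small")
print(f"Estimated monthly cost: ~${plan.monthly_cost:.0f}")  # ~$103
```

The key design point is that the default answer is "no": anything other than an explicit confirmation leaves the account untouched.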
Create a GKE cluster
Create a Google Kubernetes Engine cluster:

```shell
clanker k8s create gke my-gke-cluster \
  --gcp-project my-project \
  --nodes 2 \
  --node-type e2-standard-2 \
  --gcp-region us-central1
```
GKE-specific options:
- --preemptible: Use preemptible VMs for cost savings
- --gcp-region: Specify GCP region (default: us-central1)
- --version: Kubernetes version (default: GKE default)
Create a kubeadm cluster
Create a self-managed cluster using kubeadm on EC2:

```shell
clanker k8s create kubeadm my-kubeadm-cluster \
  --workers 2 \
  --key-pair my-key \
  --node-type t3.medium
```
Kubeadm clusters give you full control:
- Manual CNI selection (Calico, Flannel)
- Custom Kubernetes versions
- Direct SSH access to nodes
- Lower cost than managed services
Kubeadm workflow:

```
[k8s] checking SSH key configuration...
[k8s] SSH key 'my-key' exists in region us-east-1
[k8s] private key: ~/.ssh/my-key.pem

=== Kubernetes Cluster Plan ===
Cluster: my-kubeadm-cluster (kubeadm)
Control Plane: 1x t3.medium
Worker Nodes: 2x t3.medium
CNI: Calico

[k8s] launching control plane instance...
[k8s] installing kubeadm, kubectl, kubelet...
[k8s] initializing cluster with kubeadm...
[k8s] installing Calico CNI...
[k8s] launching worker nodes...
[k8s] joining workers to cluster...

=== Cluster Created Successfully ===
Control Plane:
  master-node: 10.0.1.45 (Public: 54.234.123.45)
Workers:
  worker-1: 10.0.1.67 (Public: 54.234.123.67)
  worker-2: 10.0.1.89 (Public: 54.234.123.89)

Kubeconfig: ~/.kube/kubeadm-my-kubeadm-cluster.conf

SSH access:
ssh -i ~/.ssh/my-key.pem ubuntu@54.234.123.45
```
List clusters
View all clusters of a specific type:

```shell
# List EKS clusters
clanker k8s list eks

# List GKE clusters
clanker k8s list gke --gcp-project my-project

# List kubeadm clusters
clanker k8s list kubeadm
```
Output:

```
=== EKS Clusters ===

Name: my-cluster
Status: ACTIVE
Region: us-east-1
Version: 1.29
Endpoint: https://ABC123.eks.us-east-1.amazonaws.com
Workers: 2

Name: prod-cluster
Status: ACTIVE
Region: us-east-1
Version: 1.28
Endpoint: https://DEF456.eks.us-east-1.amazonaws.com
Workers: 5
```
Deploy an application
Deploy a containerized application to your cluster:

```shell
clanker k8s deploy nginx --name my-nginx --port 80 --replicas 3
```
This creates:
- A Deployment with 3 replicas
- A Service of type LoadBalancer
- Automatic pod scheduling across nodes
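Under the hood, a deploy of this shape corresponds to manifests roughly like the following (a sketch of equivalent raw Kubernetes resources, not Clanker's exact output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer
  selector:
    app: my-nginx
  ports:
    - port: 80
      targetPort: 80
```

On EKS, the LoadBalancer Service is what provisions the ELB endpoint shown in the output below.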
Deployment output:

```
=== Deployment Plan ===
Name: my-nginx
Image: nginx
Replicas: 3
Port: 80
Namespace: default
Service Type: LoadBalancer

Do you want to deploy this application? [y/N]: y

[k8s] deploying application...
deployment.apps/my-nginx created
service/my-nginx created

=== Deployment Successful ===
Endpoint: http://a1b2c3d4e5f6-1234567890.us-east-1.elb.amazonaws.com

Next steps:
kubectl get pods -l app=my-nginx
kubectl logs -f deployment/my-nginx
kubectl scale deployment/my-nginx --replicas=5
```
Get cluster resources
Retrieve all cluster resources as JSON or YAML:

```shell
# Get resources from a specific cluster (JSON)
clanker k8s resources --cluster my-cluster

# Get resources in YAML format
clanker k8s resources --cluster my-cluster -o yaml

# Get resources from all EKS clusters
clanker k8s resources
```
This returns comprehensive cluster state:
- Nodes (capacity, status, labels)
- Pods (all namespaces)
- Services (ClusterIP, LoadBalancer, NodePort)
- PersistentVolumes and PersistentVolumeClaims
- ConfigMaps and Secrets
- Deployments, StatefulSets, DaemonSets
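The JSON output is convenient for scripted triage. A short sketch, assuming a shape that groups resources by kind (the real field names may differ):

```python
import json

# Sample of the assumed shape of `clanker k8s resources` JSON output;
# the real field names may differ.
raw = """
{
  "nodes": [{"name": "ip-10-0-1-45", "status": "Ready"}],
  "pods": [
    {"name": "my-nginx-abc", "namespace": "default", "phase": "Running"},
    {"name": "stuck-pod", "namespace": "default", "phase": "Pending"}
  ]
}
"""

state = json.loads(raw)

# Find pods that are not Running -- a common first triage step.
not_running = [p["name"] for p in state["pods"] if p["phase"] != "Running"]
print(not_running)  # ['stuck-pod']
```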
Monitoring and debugging
View pod logs
```shell
# Get logs from a pod
clanker k8s logs my-pod

# Logs from a specific container
clanker k8s logs my-pod -c my-container

# Follow logs in real time
clanker k8s logs my-pod -f

# Last 100 lines
clanker k8s logs my-pod --tail 100

# Logs from the last hour
clanker k8s logs my-pod --since 1h

# With timestamps
clanker k8s logs my-pod --timestamps

# All containers in the pod
clanker k8s logs my-pod --all-containers

# Previous container (after a restart)
clanker k8s logs my-pod -p
```
Resource metrics
```shell
# Node metrics
clanker k8s stats nodes
clanker k8s stats nodes --sort-by cpu
clanker k8s stats nodes -o json

# Pod metrics
clanker k8s stats pods
clanker k8s stats pods -n kube-system
clanker k8s stats pods -A  # All namespaces
clanker k8s stats pods --sort-by memory

# Specific pod metrics
clanker k8s stats pod my-pod -n production
clanker k8s stats pod my-pod --containers

# Cluster-wide metrics
clanker k8s stats cluster
```
Example metrics output:

```
NAME          CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
ip-10-0-1-45  245m        12%   1456Mi         45%
ip-10-0-2-67  189m        9%    1123Mi         35%
```
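If you need the same ordering in a script, the --sort-by behavior is easy to reproduce over exported metrics. A minimal sketch using the sample values above (field names are illustrative):

```python
# Reproduce `--sort-by cpu` over exported node metrics (illustrative fields).
nodes = [
    {"name": "ip-10-0-1-45", "cpu_millicores": 245, "memory_mi": 1456},
    {"name": "ip-10-0-2-67", "cpu_millicores": 189, "memory_mi": 1123},
]

# Sort descending by CPU, mirroring a "top consumers first" view.
by_cpu = sorted(nodes, key=lambda n: n["cpu_millicores"], reverse=True)
print([n["name"] for n in by_cpu])  # ['ip-10-0-1-45', 'ip-10-0-2-67']
```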
Natural language queries
Use k8s ask for natural language Kubernetes queries:
```shell
# Basic queries
clanker k8s ask "how many pods are running"
clanker k8s ask "list all deployments and their replica counts"
clanker k8s ask "tell me the health of my cluster"

# With a specific cluster
clanker k8s ask --cluster my-cluster "show me all pods"

# Resource metrics
clanker k8s ask "which pods are using the most memory"
clanker k8s ask "show node resource usage"
clanker k8s ask "top 10 pods by cpu usage"

# Troubleshooting
clanker k8s ask "show me recent logs from nginx"
clanker k8s ask "why is my pod crashing"
clanker k8s ask "get warning events from the cluster"

# Follow-up questions (maintains context)
clanker k8s ask "show me the nginx deployment"
clanker k8s ask "now show me its logs"
```
How k8s ask works:
1. The LLM analyzes your question
2. Determines the required kubectl operations
3. Executes the operations in parallel
4. Synthesizes the results into a markdown response
5. Maintains conversation context per cluster
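The fan-out step can be sketched as follows, with the LLM and kubectl calls stubbed out so the example is self-contained (illustrative only, not Clanker's internals):

```python
from concurrent.futures import ThreadPoolExecutor

# Stubbed kubectl operations -- the real flow would shell out to kubectl;
# canned return values keep this sketch runnable anywhere.
def get_pods() -> str:
    return "12 pods running"

def get_events() -> str:
    return "2 warning events"

def answer(question: str) -> str:
    # 1. (LLM step, stubbed) decide which operations the question needs.
    ops = [get_pods, get_events]
    # 2. Execute the operations in parallel.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(op) for op in ops]
        results = [f.result() for f in futures]
    # 3. (LLM step, stubbed) synthesize the results into one response.
    return "; ".join(results)

print(answer("tell me the health of my cluster"))
# -> 12 pods running; 2 warning events
```

Running the independent kubectl operations concurrently is what keeps multi-part questions responsive even against a slow API server.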
Cluster management
Get kubeconfig
```shell
# EKS
clanker k8s kubeconfig eks my-cluster

# GKE
clanker k8s kubeconfig gke my-gke-cluster --gcp-project my-project

# Kubeadm
clanker k8s kubeconfig kubeadm my-kubeadm-cluster
```
Delete clusters
```shell
# Delete EKS cluster
clanker k8s delete eks my-cluster

# Delete GKE cluster
clanker k8s delete gke my-gke-cluster --gcp-project my-project

# Delete kubeadm cluster (terminates all EC2 instances)
clanker k8s delete kubeadm my-kubeadm-cluster
```
Deleting a cluster is permanent and will remove all workloads, data, and configuration.
Advanced configurations
Custom EKS cluster
```shell
clanker k8s create eks production-cluster \
  --nodes 5 \
  --node-type m5.large \
  --version 1.29 \
  --profile prod-aws
```
GKE with preemptible nodes
```shell
clanker k8s create gke staging-cluster \
  --gcp-project my-project \
  --nodes 3 \
  --node-type e2-standard-4 \
  --preemptible \
  --gcp-region us-west1
```
Multi-zone kubeadm
```shell
clanker k8s create kubeadm multi-zone-cluster \
  --workers 3 \
  --node-type t3.large \
  --key-pair my-key \
  --version 1.28
```
Best practices
- Use managed services: Prefer EKS/GKE for production workloads. They provide automatic upgrades, security patches, and HA control planes.
- Right-size nodes: Start with smaller node types (t3.medium) and scale up based on actual usage. Monitor with k8s stats.
- Plan before creating: Use --plan to review cluster configuration and costs before provisioning.
- Namespace isolation: Deploy applications to dedicated namespaces for better organization and resource quotas.
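For example, a ResourceQuota scoped to a namespace caps what the workloads there can consume (names and limits below are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```

Pods created in team-a must then declare requests and limits that fit within the quota, which keeps one team's workloads from starving the rest of the cluster.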
Troubleshooting
Common issues:

Insufficient IAM permissions

```shell
# Verify your AWS user has EKS permissions
aws iam get-user --profile my-profile
```

VPC quota exceeded

```shell
# Check VPC quotas for the account
aws service-quotas list-service-quotas --service-code vpc
```

Invalid instance type for region

```shell
# List available instance types
aws ec2 describe-instance-type-offerings --location-type availability-zone
```
Cannot connect to cluster
If kubectl commands fail:

```shell
# Verify kubeconfig is set
echo $KUBECONFIG

# Re-fetch kubeconfig
clanker k8s kubeconfig eks my-cluster

# Test connection
kubectl cluster-info

# Check AWS credentials
aws sts get-caller-identity --profile my-profile
```
Nodes not joining cluster
For kubeadm clusters:

```shell
# SSH to the control plane node
ssh -i ~/.ssh/my-key.pem ubuntu@<master-ip>

# Check cluster status
kubectl get nodes

# View join token
sudo kubeadm token list

# SSH to a worker and check kubelet logs
ssh -i ~/.ssh/my-key.pem ubuntu@<worker-ip>
sudo journalctl -u kubelet -f
```
Next steps
- Monitoring resources: Set up comprehensive cluster monitoring
- Kubernetes debugging: Debug pods, services, and networking issues
- Multi-environment: Manage dev, staging, and prod clusters
- Cost optimization: Optimize Kubernetes costs with right-sizing