This guide walks you through deploying an EKS cluster with EKS Auto Mode — the simplest compute configuration, where AWS manages node provisioning and lifecycle automatically.

Prerequisites

Before you start, make sure you have:
  • An AWS account with permissions to create IAM roles, EKS clusters, VPCs, and EC2 resources.
  • Terraform >= 1.5.7 installed.
  • The AWS CLI installed and configured (aws configure or environment variables set).
  • kubectl installed for verifying the cluster after creation.

Deploy the cluster

Step 1: Create your Terraform configuration

Create a new directory and add a main.tf file with the following configuration. This uses the general-purpose built-in node pool, which is sufficient for most workloads.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  name               = "example"
  kubernetes_version = "1.33"

  # Enable the public API endpoint so kubectl can reach the cluster
  endpoint_public_access = true

  # Adds the Terraform caller identity as a cluster administrator
  enable_cluster_creator_admin_permissions = true

  compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
Replace vpc_id and subnet_ids with the IDs of an existing VPC and private subnets in your AWS account. The subnets must be in at least two availability zones.
If you need to create a VPC as part of the same Terraform configuration, use the terraform-aws-modules/vpc/aws module. See the EKS Auto Mode example for a complete working setup including VPC creation.
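A minimal VPC for this purpose might look like the following sketch. The CIDR ranges, availability zone names, and subnet tag are illustrative assumptions, not requirements:
```hcl
# Illustrative sketch only: a small VPC with private subnets across three AZs.
# CIDR ranges and AZ names here are example values; adjust for your account.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "example"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # NAT gateway gives nodes in private subnets outbound internet access
  enable_nat_gateway = true

  private_subnet_tags = {
    # Lets EKS place internal load balancers in these subnets
    "kubernetes.io/role/internal-elb" = 1
  }
}
```
You would then reference module.vpc.vpc_id and module.vpc.private_subnets in the eks module instead of hardcoded IDs.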
Step 2: Initialize Terraform

Download the module and provider plugins:
terraform init
You should see output confirming that the terraform-aws-modules/eks/aws module and the AWS, TLS, and Time providers were installed.
Step 3: Review the execution plan

See exactly what Terraform will create before applying:
terraform plan
The plan will include resources for the EKS cluster, IAM roles, KMS key, security groups, OIDC provider, and CloudWatch log group. EKS Auto Mode does not create EC2 nodes at this stage — nodes are provisioned automatically when you schedule workloads.
Step 4: Apply the configuration

Create the cluster:
terraform apply
Type yes when prompted. Cluster creation typically takes 10–15 minutes. Terraform will print the cluster outputs when complete.
Step 5: Configure kubectl

Update your local kubeconfig so kubectl can connect to the new cluster:
aws eks update-kubeconfig \
  --region us-west-2 \
  --name example
Replace us-west-2 with the region you deployed to and example with the value you set for name in your module configuration. If you expose the cluster name as a Terraform output, you can also run:
aws eks update-kubeconfig \
  --region us-west-2 \
  --name $(terraform output -raw cluster_name)
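For the terraform output variant to work, your root module must expose the cluster name. A minimal sketch (the module provides a cluster_name output you can pass through):
```hcl
# Expose the cluster name from the root module so shell commands can
# read it with `terraform output -raw cluster_name`.
output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_name
}
```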
Step 6: Verify the cluster

Confirm the cluster is reachable and the API server is responding:
kubectl cluster-info
Check that the system pods are running:
kubectl get pods -n kube-system
List the cluster nodes (with EKS Auto Mode, nodes appear only after you schedule a workload that requires compute):
kubectl get nodes
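To watch Auto Mode provision its first node, schedule a small workload. Below is a hedged sketch using the Terraform kubernetes provider; the provider configuration and the nginx image are assumptions for illustration, and a plain kubectl create deployment achieves the same thing:
```hcl
# Illustrative only: a tiny Deployment to trigger Auto Mode node provisioning.
# Assumes a kubernetes provider configured against this cluster, e.g. using
# module.eks.cluster_endpoint and module.eks.cluster_certificate_authority_data.
resource "kubernetes_deployment_v1" "hello" {
  metadata {
    name = "hello"
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "hello" }
    }

    template {
      metadata {
        labels = { app = "hello" }
      }

      spec {
        container {
          name  = "hello"
          image = "public.ecr.aws/nginx/nginx:stable" # example image
        }
      }
    }
  }
}
```
Shortly after the pod is scheduled, kubectl get nodes should show a node created by EKS Auto Mode.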

Important caveats

Disabling EKS Auto Mode requires an explicit configuration step. Due to the current EKS Auto Mode API, you cannot disable Auto Mode by simply removing the compute_config block; if you try, the apply will fail. To disable EKS Auto Mode, you must first apply with enabled = false:
compute_config = {
  enabled = false
}
Only after that apply succeeds can you safely remove the compute_config block entirely from your configuration.
The enable_cluster_creator_admin_permissions = true setting adds your current IAM identity as a cluster administrator via an EKS access entry. This is different from the one-time bootstrap_cluster_creator_admin_permissions flag on the EKS API, which this module intentionally hardcodes to false. Using an access entry means you can revoke or modify access at any time without recreating the cluster.
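To grant another principal access in the same way, you can use the module's access_entries input. A sketch, where the role ARN is a placeholder and the admin policy is just one option (prefer the narrowest access policy that fits):
```hcl
# Sketch: grant an additional IAM role cluster-admin via an EKS access entry.
# Add this inside the module "eks" block; the ARN below is a placeholder.
access_entries = {
  ops = {
    principal_arn = "arn:aws:iam::123456789012:role/ops-admin"

    policy_associations = {
      admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          type = "cluster"
        }
      }
    }
  }
}
```
Because this is an access entry rather than the bootstrap flag, it can be changed or revoked with a later apply.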

Clean up

To destroy all resources created by this configuration:
terraform destroy
This permanently deletes the EKS cluster and all associated resources. Ensure any persistent data (EBS volumes, S3 objects created by workloads, etc.) is backed up before running this command.

Next steps

EKS managed node groups

Use EKS managed node groups for more control over instance types and node configuration.

Cluster access entries

Grant other IAM roles and users access to your cluster.

EKS add-ons

Install and manage EKS add-ons like CoreDNS, VPC CNI, and kube-proxy.

IRSA

Assign AWS IAM permissions to Kubernetes service accounts.
