Quickstart

Deploy your first EKS cluster in minutes with a working Terraform configuration.

Compute Resources

Choose between EKS Auto Mode, managed node groups, self-managed nodes, and Fargate.

Configuration

Configure networking, access control, add-ons, IRSA, and encryption.

Reference

Complete reference for all module inputs and outputs.

What is terraform-aws-eks?

terraform-aws-eks is a community-maintained Terraform module that creates production-ready Amazon EKS clusters. It wraps the AWS EKS service with opinionated defaults and exposes a rich set of configuration options — so you get a secure, well-configured cluster without writing hundreds of lines of Terraform from scratch. The module handles the full lifecycle of an EKS cluster:
  • Control plane — EKS cluster, IAM roles, KMS encryption, CloudWatch logging, security groups
  • Data plane — EKS managed node groups, self-managed node groups, Fargate profiles, EKS Auto Mode
  • Access — Cluster access entries, IRSA (IAM Roles for Service Accounts), OIDC provider
  • Add-ons — CoreDNS, kube-proxy, VPC CNI, EKS Pod Identity Agent, and any community add-on
Example usage:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  name               = "my-cluster"
  kubernetes_version = "1.33"

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]

  # EKS Auto Mode — fully managed data plane
  compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  enable_cluster_creator_admin_permissions = true

  tags = {
    Environment = "production"
    Terraform   = "true"
  }
}

Key features

EKS Auto Mode

Fully managed data plane with AWS-controlled node pools. No node group management required.

Managed Node Groups

AWS-managed EC2 node groups with custom launch templates, AMI types, and auto-repair.

Self-Managed Nodes

Full control over EC2 Auto Scaling groups, mixed instance policies, and lifecycle hooks.

Fargate Profiles

Serverless compute — run pods without managing EC2 instances.

Karpenter

Intelligent, just-in-time node provisioning via the included Karpenter sub-module.
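As a sketch of how the Karpenter sub-module might be wired up (the `cluster_name` input and `module.eks.cluster_name` output are taken from the module's conventions; verify exact inputs against the sub-module reference):

```hcl
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 21.0"

  # Tie Karpenter's IAM resources to the cluster created above
  cluster_name = module.eks.cluster_name
}
```

The sub-module provisions the supporting AWS resources Karpenter needs; Karpenter itself is then installed into the cluster (e.g. via Helm).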

EKS Hybrid Nodes

Connect on-premises and edge nodes to your EKS cluster over SSM or IAM Roles Anywhere.

IRSA

OIDC-based IAM Roles for Service Accounts — fine-grained pod-level AWS permissions.
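A minimal sketch of pairing IRSA with the companion terraform-aws-modules/iam sub-module; the namespace/service-account pair shown (`kube-system:aws-node`, for the VPC CNI) is illustrative, and `module.eks.oidc_provider_arn` is assumed to be the module's OIDC provider output:

```hcl
module "vpc_cni_irsa" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name             = "vpc-cni"
  attach_vpc_cni_policy = true
  vpc_cni_enable_ipv4   = true

  # Trust pods running as kube-system/aws-node via the cluster's OIDC provider
  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-node"]
    }
  }
}
```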

Cluster Access Entries

API-based IAM authentication. Supports STANDARD, EC2_LINUX, FARGATE_LINUX, and HYBRID_LINUX types.
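One way this could look with the module's `access_entries` input, granting a hypothetical IAM role cluster-wide admin via an AWS-managed access policy (the account ID and role name are placeholders):

```hcl
access_entries = {
  admin = {
    principal_arn = "arn:aws:iam::111122223333:role/platform-admin"

    policy_associations = {
      cluster_admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          type = "cluster"
        }
      }
    }
  }
}
```

Access scope can also be narrowed to `type = "namespace"` with a list of namespaces for finer-grained grants.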

KMS Encryption

Automatic KMS key creation and secret encryption with configurable key rotation.

Compute options at a glance

EKS Auto Mode: AWS manages the entire data plane. You define node pool types; AWS handles provisioning, scaling, patching, and termination.
compute_config = {
  enabled    = true
  node_pools = ["general-purpose", "system"]
}
Best for: Teams that want minimal operational overhead and are comfortable with AWS-managed nodes.
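For comparison, teams that want more control over instance selection and scaling bounds can use the module's `eks_managed_node_groups` input instead of Auto Mode. A minimal sketch (the group name and instance type are illustrative):

```hcl
eks_managed_node_groups = {
  general = {
    instance_types = ["m6i.large"]

    min_size     = 1
    max_size     = 5
    desired_size = 2
  }
}
```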

Requirements

Requirement      Version
Terraform        >= 1.5.7
AWS Provider     >= 6.28
time Provider    >= 0.9
tls Provider     >= 4.0
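The version constraints above can be pinned explicitly in your root configuration, which keeps `terraform init` reproducible across machines:

```hcl
terraform {
  required_version = ">= 1.5.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.28"
    }
    time = {
      source  = "hashicorp/time"
      version = ">= 0.9"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 4.0"
    }
  }
}
```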

Getting started

1. Add the module to your Terraform configuration

Reference the module from the Terraform Registry and pin to a version:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"
}
2. Configure your cluster

Set the required variables — cluster name, Kubernetes version, VPC, and subnets. Choose your compute strategy.
3. Initialize and apply

terraform init
terraform plan
terraform apply
4. Access your cluster

aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes
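To avoid hard-coding the cluster name in that command, you can surface the module's outputs (output names here assume the module's `cluster_name` and `cluster_endpoint` outputs; check the reference for the full list):

```hcl
output "cluster_name" {
  value = module.eks.cluster_name
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}
```

Then `terraform output cluster_name` feeds directly into `aws eks update-kubeconfig`.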

Follow the full quickstart guide

Walk through a complete working example from zero to a running EKS cluster.
