The terraform-aws-eks module supports four distinct compute strategies for running workloads on Amazon EKS. Each option offers different trade-offs in control, operational overhead, and cost.

Compute options

EKS Auto Mode

Fully managed compute with automatic node provisioning, scaling, and lifecycle management. AWS manages the underlying EC2 fleet.
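Auto Mode is enabled through the module's compute settings. A minimal sketch, assuming the v21 `compute_config` input (renamed from `cluster_compute_config` in earlier releases) and the AWS built-in node pools:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  name               = "auto-mode-cluster"
  kubernetes_version = "1.33"

  # Enable EKS Auto Mode with the AWS-managed node pools;
  # AWS then provisions, scales, and replaces the EC2 fleet for you
  compute_config = {
    enabled    = true
    node_pools = ["general-purpose", "system"]
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
}
```

With Auto Mode enabled, no `eks_managed_node_groups` or `self_managed_node_groups` definitions are required for these pools.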

EKS Managed Node Groups

AWS-managed Auto Scaling Groups where EKS handles node provisioning and lifecycle, but you retain control over instance configuration via launch templates.
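For example, a managed node group can pin the AMI type, instance types, and capacity purchase option per group (all values below are illustrative):

```hcl
eks_managed_node_groups = {
  spot = {
    ami_type       = "AL2023_x86_64_STANDARD"
    instance_types = ["m6i.large", "m5.large"] # EKS chooses from this list
    capacity_type  = "SPOT"                    # or "ON_DEMAND" (the default)

    min_size     = 1
    max_size     = 5
    desired_size = 1
  }
}
```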

Self-Managed Node Groups

Fully self-managed Auto Scaling Groups. You control every aspect of the node lifecycle, launch template, and bootstrap process.
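A sketch of a self-managed group bringing its own AMI and kubelet flags; the AMI ID is a placeholder, and `bootstrap_extra_args` assumes an AMI whose bootstrap script accepts extra arguments:

```hcl
self_managed_node_groups = {
  custom = {
    ami_id        = "ami-0123456789abcdef0" # your custom AMI (illustrative ID)
    instance_type = "m6i.large"

    min_size     = 1
    max_size     = 5
    desired_size = 1

    # Full control over the bootstrap process via user data
    bootstrap_extra_args = "--kubelet-extra-args '--node-labels=team=platform'"
  }
}
```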

Fargate Profiles

Serverless compute for pods. No nodes to manage — AWS provisions isolated compute for each pod matching your selectors.
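Selectors match on namespace and, optionally, pod labels; only pods matching a selector are scheduled onto Fargate. A sketch with an illustrative namespace and label:

```hcl
fargate_profiles = {
  batch = {
    selectors = [
      {
        namespace = "batch"
        labels = {
          compute = "fargate" # only pods carrying this label are matched
        }
      }
    ]
  }
}
```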

Comparison

| Feature | EKS Auto Mode | Managed Node Groups | Self-Managed | Fargate |
| --- | --- | --- | --- | --- |
| Node management | AWS | AWS | You | AWS (serverless) |
| Launch template | Not applicable | Custom (default on) | Full control | Not applicable |
| Custom AMI | No | Yes | Yes | No |
| Spot / On-Demand mix | Automatic | Via capacity type | Via mixed instances policy | No |
| EFA support | No | Yes | Yes | No |
| Bottlerocket OS | No | Yes | Yes | No |
| Windows nodes | No | Yes | Yes | No |
| Bin packing / cost optimization | Automatic | Manual | Manual | Per-pod billing |
| Best for | Simplicity, managed operations | Standard workloads needing customization | Advanced control or custom AMIs | Batch, burst, or isolation workloads |

Default node group configurations

All node group types inherit cluster-level defaults unless overridden at the node group level. The module waits 30s after the EKS cluster becomes active before creating any node groups (configurable via dataplane_wait_duration). For eks_managed_node_groups and self_managed_node_groups, when no values are specified:
  • min_size defaults to 1
  • max_size defaults to 3
  • desired_size defaults to 1
  • ami_type defaults to AL2023_x86_64_STANDARD
  • instance_type / instance_types defaults to t3.medium (managed) or m6i.large (self-managed)
The desired_size value is only respected on initial creation. Subsequent changes are ignored to avoid conflicts with the cluster autoscaler or Karpenter managing the desired count externally.
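Cluster-level defaults and the wait duration can be set once and overridden per group. A sketch using `eks_managed_node_group_defaults` (the Bottlerocket AMI type and sizes below are illustrative):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  name               = "my-cluster"
  kubernetes_version = "1.33"

  # Wait longer than the default 30s before creating node groups
  dataplane_wait_duration = "60s"

  # Shared defaults applied to every managed node group
  eks_managed_node_group_defaults = {
    ami_type = "BOTTLEROCKET_x86_64"
  }

  eks_managed_node_groups = {
    default = {} # inherits all defaults above

    large = {
      instance_types = ["m6i.xlarge"] # per-group override
      max_size       = 6
    }
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
}
```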

Combining compute types

The module supports running multiple compute types simultaneously in the same cluster. A common pattern is using Fargate for system components while managed node groups serve application workloads, or enabling EKS Auto Mode alongside a self-managed node group for GPU workloads that require specific drivers.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  name               = "my-cluster"
  kubernetes_version = "1.33"

  # Fargate for system workloads
  fargate_profiles = {
    kube-system = {
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  }

  # Managed node groups for application workloads
  eks_managed_node_groups = {
    app = {
      ami_type       = "AL2023_x86_64_STANDARD"
      instance_types = ["m6i.large"]
      min_size       = 2
      max_size       = 10
      desired_size   = 2
    }
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]

  tags = {
    Environment = "production"
    Terraform   = "true"
  }
}
