Overview
The EKS module creates a production-ready Amazon EKS (Elastic Kubernetes Service) cluster with a managed node group, IAM roles with the required policies, and an OIDC provider for IAM Roles for Service Accounts (IRSA).
Features
- **Managed Node Groups**: Auto-scaling node groups with configurable instance types and capacity
- **OIDC Provider**: Automatic OIDC setup for IRSA (IAM Roles for Service Accounts)
- **IAM Roles**: Pre-configured cluster and node group IAM roles with the required policies
- **Control Plane Logging**: Optional CloudWatch logging for the API server, audit, and other components
- **Private Endpoint**: Cluster endpoint accessible only within the VPC by default
- **Multi-AZ Support**: Node groups deployed across multiple availability zones
Usage Examples
Basic Configuration
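A minimal sketch with only the required inputs; the module source path and subnet IDs are placeholders, not values defined by this repository.

```hcl
# Minimal configuration — source path and subnet IDs are assumptions;
# adjust them to your repository layout and VPC.
module "eks" {
  source = "./modules/eks"

  cluster_name       = "dev-cluster"
  private_subnet_ids = ["subnet-aaa", "subnet-bbb", "subnet-ccc"] # minimum 3

  tags = {
    Environment = "dev"
  }
}
```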
Production Configuration
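A production-leaning sketch using the module's documented inputs; the specific sizes and instance types are illustrative assumptions, not recommendations baked into the module.

```hcl
# Production sketch — all values illustrative; input names match the
# Inputs table below.
module "eks" {
  source = "./modules/eks"

  cluster_name       = "prod-cluster"
  kubernetes_version = "1.28"
  private_subnet_ids = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]

  # Enable all control plane log types for audit trails
  cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  primary_node_group_instance_types = ["m5.large", "m5a.large"]
  primary_node_group_desired_size   = 3
  primary_node_group_min_size       = 3
  primary_node_group_max_size       = 6

  tags = {
    Environment = "production"
  }
}
```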
With VPC Module
Compose with the VPC module by passing its private subnet IDs into this module.
Spot Instances for Cost Savings
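A sketch of a SPOT-capacity node group composed with the VPC module; `module.vpc` and its `private_subnet_ids` output are assumptions about the companion VPC module's interface.

```hcl
# SPOT capacity sketch, wired to an assumed VPC module.
module "eks" {
  source = "./modules/eks"

  cluster_name       = "spot-cluster"
  private_subnet_ids = module.vpc.private_subnet_ids

  primary_node_group_capacity_type = "SPOT"
  # Offering multiple instance types improves spot availability
  primary_node_group_instance_types = ["m5.large", "m5a.large", "m4.large"]
}
```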
Connecting to the Cluster
Update kubeconfig
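The region and cluster name below are placeholders; substitute your own values.

```shell
aws eks update-kubeconfig --region us-east-1 --name my-cluster
```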
Verify Connection
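Once the kubeconfig is updated, confirm that kubectl can reach the cluster:

```shell
kubectl get nodes
kubectl cluster-info
```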
Using Terraform Output
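The cluster name can be read from the module's `cluster_id` output instead of hardcoding it (region is a placeholder):

```shell
aws eks update-kubeconfig \
  --region us-east-1 \
  --name "$(terraform output -raw cluster_id)"
```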
IRSA (IAM Roles for Service Accounts)
The module automatically creates an OIDC provider for IRSA.
Creating an IRSA Role
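A sketch of an IAM role trusted by the cluster's OIDC provider for a service account `app` in the `default` namespace; the `oidc_provider_arn` and `oidc_provider_url` outputs come from this module, while the role and service account names are illustrative.

```hcl
# Trust policy restricting the role to one Kubernetes service account.
data "aws_iam_policy_document" "irsa_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }

    # Limit the role to the "app" service account in "default"
    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks.oidc_provider_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:app"]
    }
  }
}

resource "aws_iam_role" "app" {
  name               = "app-irsa"
  assume_role_policy = data.aws_iam_policy_document.irsa_assume.json
}
```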
Kubernetes Service Account
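The service account is annotated with the IAM role ARN; the ARN below is a placeholder for your IRSA role.

```yaml
# eks.amazonaws.com/role-arn links this service account to the IAM role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-irsa
```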
Using in a Pod
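Any pod that references the annotated service account receives the role's credentials automatically; the image and command here are just a quick way to confirm the assumed identity.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: default
spec:
  serviceAccountName: app   # the annotated service account
  containers:
    - name: app
      image: amazon/aws-cli
      command: ["aws", "sts", "get-caller-identity"]
```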
IRSA provides temporary AWS credentials to pods without needing to store access keys. This is the recommended approach for AWS access from EKS.
Inputs
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| `cluster_name` | Name of the EKS cluster | `string` | n/a | yes |
| `kubernetes_version` | Kubernetes version | `string` | `"1.28"` | no |
| `private_subnet_ids` | Private subnet IDs (minimum 3 for HA) | `list(string)` | n/a | yes |
| `cluster_log_types` | Control plane log types to enable | `list(string)` | `["api", "audit", "authenticator", "controllerManager", "scheduler"]` | no |
| `primary_node_group_instance_types` | Instance types for the node group | `list(string)` | `["m5.large"]` | no |
| `primary_node_group_capacity_type` | `ON_DEMAND` or `SPOT` | `string` | `"ON_DEMAND"` | no |
| `primary_node_group_disk_size` | Disk size in GB | `number` | `20` | no |
| `primary_node_group_desired_size` | Desired node count | `number` | `2` | no |
| `primary_node_group_min_size` | Minimum node count | `number` | `2` | no |
| `primary_node_group_max_size` | Maximum node count | `number` | `2` | no |
| `primary_node_group_labels` | Labels for the node group | `map(string)` | `{}` | no |
| `cluster_security_group_ids` | Additional security group IDs | `list(string)` | `[]` | no |
| `tags` | Tags to apply to all resources | `map(string)` | `{}` | no |
Outputs
| Name | Description |
|---|---|
| `cluster_id` | The name/ID of the EKS cluster |
| `cluster_arn` | The ARN of the EKS cluster |
| `cluster_endpoint` | Endpoint for the EKS control plane |
| `cluster_version` | The Kubernetes version |
| `cluster_security_group_id` | Security group ID of the cluster |
| `cluster_certificate_authority_data` | Base64-encoded certificate data (sensitive) |
| `cluster_iam_role_arn` | IAM role ARN of the cluster |
| `node_group_iam_role_arn` | IAM role ARN of the node group |
| `primary_node_group_id` | Node group ID |
| `primary_node_group_arn` | Node group ARN |
| `oidc_provider_url` | OIDC provider URL (for IRSA) |
| `oidc_provider_arn` | OIDC provider ARN (for IRSA) |
Design Decisions
Private Endpoint Only
The cluster endpoint is only accessible from within the VPC (`endpoint_public_access = false`). Access requires one of:
- VPN connection to the VPC
- Bastion host in a public subnet
- AWS Cloud9 environment
- EC2 instance with kubectl
Managed Node Groups
The module uses managed node groups instead of self-managed nodes or Fargate:
- Pros: Automatic updates, simplified management, integrated autoscaling
- Cons: Less customization than self-managed nodes
Minimum 3 Subnets
The module enforces a minimum of 3 private subnets for:
- High availability across multiple AZs
- EKS control plane requirements
- Better pod distribution
OIDC Provider
The OIDC provider is created automatically to enable IRSA. This is required for:
- AWS Load Balancer Controller
- External Secrets Operator
- EBS CSI Driver
- Any workload needing AWS API access
Post-Deployment Setup
Install Essential Add-ons
- AWS Load Balancer Controller
- EBS CSI Driver
- Cluster Autoscaler
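As one example, the AWS Load Balancer Controller can be installed from the public eks-charts Helm repository; the cluster name and service account options below are placeholders for your environment.

```shell
helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```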
Troubleshooting
Cannot Connect to Cluster
If kubectl times out, remember that the endpoint is private: connect from inside the VPC (see Design Decisions) and refresh your kubeconfig.
Node Group Not Scaling
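A quick first check is node status and the Cluster Autoscaler's own logs; the pod label below is an assumption based on common autoscaler deployments and may differ in your cluster.

```shell
kubectl get nodes
kubectl -n kube-system logs -l app.kubernetes.io/name=cluster-autoscaler --tail=50
```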
Check the Cluster Autoscaler logs for scaling errors, and confirm that `primary_node_group_max_size` allows room to scale.
IRSA Not Working
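The cluster's OIDC issuer and the registered IAM providers can be listed from the CLI (cluster name is a placeholder):

```shell
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers
```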
Verify that the OIDC provider exists and that the service account's `eks.amazonaws.com/role-arn` annotation exactly matches the IAM role ARN.
Best Practices
Use Latest Kubernetes Version
Keep clusters updated to the latest supported version for security patches and features.
Enable Control Plane Logging
Enable all log types in production for audit trails and troubleshooting.
Right-Size Nodes
Start with smaller instances and scale up based on actual usage metrics.
Use IRSA
Always use IRSA instead of instance profiles or hardcoded credentials.
Multi-AZ Deployment
Deploy node groups across at least 3 availability zones for resilience.
Resource Quotas
Implement Kubernetes resource quotas to prevent resource exhaustion.
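A minimal ResourceQuota sketch; the namespace and limits are illustrative values, not recommendations from this module.

```yaml
# Caps total CPU/memory requests and limits in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: default
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```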
Related Documentation
VPC Module
Create VPC with subnets for EKS
Vault Module
Deploy Vault with IRSA
K8s Scheduler Deployment
Deploy K8s Scheduler on EKS
Infrastructure Guide
Complete infrastructure deployment