## Architecture

EKS architecture components:

- Control Plane: managed by AWS (API server, etcd, scheduler, controller manager)
- Worker Nodes: EC2 instances in private subnets running containerized workloads
- OIDC Provider: enables IAM Roles for Service Accounts (IRSA)
- IAM Roles: separate roles for the cluster and the worker nodes
## Resources Created

### EKS Cluster

- Resource: `aws_eks_cluster.main`
- Kubernetes version 1.28
- Control plane managed by AWS
- API server endpoint accessible from the VPC and the internet
- CloudWatch logging enabled for all control plane components
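The bullets above can be pictured as a Terraform resource. This is a minimal sketch, not the module's exact code; the variable names are assumptions:

```hcl
# Sketch of the cluster resource; variable names are illustrative.
resource "aws_eks_cluster" "main" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.28"

  vpc_config {
    subnet_ids              = var.private_subnet_ids
    endpoint_private_access = true # reachable from inside the VPC
    endpoint_public_access  = true # reachable from the internet
  }

  # Send all control plane log types to CloudWatch.
  enabled_cluster_log_types = [
    "api", "audit", "authenticator", "controllerManager", "scheduler"
  ]
}
```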
### Managed Node Group

- Resource: `aws_eks_node_group.main`
- EC2 instances running in private subnets
- Auto-scaling configuration
- Rolling update strategy (max 1 node unavailable)
- Instance type configurable (default: `t3.medium`)
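A sketch of how such a node group resource might look, assuming the variable names used elsewhere in this document (the exact code may differ):

```hcl
# Sketch of the managed node group; variable names are illustrative.
resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "${var.cluster_name}-nodes"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = [var.instance_type] # default: t3.medium

  # Auto-scaling bounds for the group.
  scaling_config {
    min_size     = var.min_size
    max_size     = var.max_size
    desired_size = var.desired_size
  }

  # Rolling updates: replace at most one node at a time.
  update_config {
    max_unavailable = 1
  }
}
```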
### IAM Roles

#### Cluster Role

- Resource: `aws_iam_role.eks_cluster`
- Allows the EKS service to manage AWS resources
- Attached policy: `AmazonEKSClusterPolicy`
- Permissions: create load balancers, manage networking, access ECR
#### Node Role

- Resource: `aws_iam_role.eks_nodes`
- Allows EC2 instances to join the cluster
- Attached policies:
  - `AmazonEKSWorkerNodePolicy` - node registration
  - `AmazonEKS_CNI_Policy` - VPC networking for pods
  - `AmazonEC2ContainerRegistryReadOnly` - pull images from ECR
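A sketch of the node role and one of its policy attachments; resource and variable names are assumptions, not the module's exact code:

```hcl
# Sketch of the node role; EC2 instances assume it via the instance profile.
resource "aws_iam_role" "eks_nodes" {
  name = "${var.cluster_name}-node-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# One attachment per managed policy.
resource "aws_iam_role_policy_attachment" "worker_node" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}
# AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly
# are attached the same way.
```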
### OIDC Provider

- Resource: `aws_iam_openid_connect_provider.eks`
- Enables IAM Roles for Service Accounts (IRSA)
- Allows individual pods to assume IAM roles
- More secure than sharing the node IAM role across all pods
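The provider is typically wired to the cluster's OIDC issuer. A minimal sketch, assuming the `hashicorp/tls` provider is available for the certificate thumbprint:

```hcl
# Fetch the issuer's TLS certificate to derive the thumbprint.
data "tls_certificate" "eks" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.main.identity[0].oidc[0].issuer
}
```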
## Variables

- Name of the EKS cluster. Should follow the naming convention `{project_name}-{environment}`.
- Deployment environment: `dev`, `staging`, or `prod`.
- AWS region for cluster deployment.
- ID of the VPC where the cluster will be created. From the networking module output.
- IDs of private subnets for EKS worker nodes. Use the private subnets from the networking module.
- EC2 instance type for worker nodes:
  - `t3.medium`: 2 vCPUs, 4 GB RAM - suitable for dev/staging
  - `t3.large`: 2 vCPUs, 8 GB RAM - recommended for production
  - `t3.xlarge`: 4 vCPUs, 16 GB RAM - high-load production
- Minimum number of worker nodes. At least 2 recommended for high availability.
- Maximum number of worker nodes for auto-scaling during traffic spikes.
- Desired number of worker nodes at cluster startup.
- Project name for resource tagging and naming.
## Outputs

- Unique identifier of the EKS cluster
- Name of the EKS cluster
- Kubernetes API server endpoint URL. Used for kubectl configuration.
- Base64-encoded certificate authority data for cluster authentication. Sensitive: do not log or expose in outputs.
- Kubernetes version running on the cluster (e.g., "1.28")
- ARN of the managed node group
- ARN of the IAM role used by the EKS cluster
- ARN of the IAM role used by worker nodes
- ARN of the OIDC identity provider. Required for IRSA configuration.
- URL of the OIDC provider without the `https://` prefix. Used in IAM trust policies for service accounts.

## Usage Example
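An illustrative call to the module from a root configuration. The module path and variable names are assumptions inferred from the variable descriptions above, not the module's exact interface:

```hcl
# Illustrative usage; adjust source path and names to the real module.
module "eks" {
  source = "./modules/eks"

  cluster_name       = "myproject-dev" # {project_name}-{environment}
  environment        = "dev"
  aws_region         = "eu-west-1"
  vpc_id             = module.networking.vpc_id
  private_subnet_ids = module.networking.private_subnet_ids
  instance_type      = "t3.medium"
  min_size           = 2
  max_size           = 4
  desired_size       = 2
  project_name       = "myproject"
}
```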
## Post-Deployment Configuration

### Connect kubectl

After cluster creation, configure kubectl to access the cluster:
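A typical way to do this, assuming AWS CLI v2 is installed with valid credentials; region and cluster name are placeholders:

```shell
# Write kubeconfig credentials for the new cluster.
aws eks update-kubeconfig --region eu-west-1 --name myproject-dev

# Verify connectivity: worker nodes should appear as Ready.
kubectl get nodes
```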
### Install Essential Add-ons

#### AWS Load Balancer Controller
Required for creating Application Load Balancers from Kubernetes Ingress resources:
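One common installation path is the official Helm chart; the cluster name is a placeholder, and the `aws-load-balancer-controller` service account is assumed to have been created already via IRSA:

```shell
# Install the AWS Load Balancer Controller from the official chart.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=myproject-dev \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```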
#### EBS CSI Driver

For persistent volume support:
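The driver can be installed as an EKS managed add-on; cluster name and role ARN below are placeholders, and the role is assumed to exist with the `AmazonEBSCSIDriverPolicy` attached:

```shell
# Install the EBS CSI driver as an EKS managed add-on.
aws eks create-addon \
  --cluster-name myproject-dev \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::123456789012:role/ebs-csi-driver-role
```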
### IAM Roles for Service Accounts (IRSA)

The OIDC provider enables pods to assume IAM roles without embedding credentials.

#### Example: S3 Access for Backend Pods
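A sketch of such a role, trusting only a specific service account via the OIDC provider. The role name, namespace (`default`), and service account name (`backend`) are assumptions:

```hcl
# Illustrative IRSA role; only the "backend" service account may assume it.
resource "aws_iam_role" "backend_s3" {
  name = "backend-s3-access"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.eks.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          # Match the pod's service account identity in the OIDC token.
          "${replace(aws_eks_cluster.main.identity[0].oidc[0].issuer, "https://", "")}:sub" = "system:serviceaccount:default:backend"
        }
      }
    }]
  })
}
```

The Kubernetes service account is then annotated with `eks.amazonaws.com/role-arn` set to this role's ARN, and an S3 access policy is attached to the role.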
## Monitoring and Logging

### CloudWatch Logs

Cluster control plane logs are automatically sent to CloudWatch:

- API server logs
- Audit logs
- Authenticator logs
- Controller manager logs
- Scheduler logs
Logs are stored in the log group `/aws/eks/{cluster-name}/cluster`.
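With AWS CLI v2, the control plane logs can be followed directly from the terminal; the cluster name is a placeholder:

```shell
# Stream control plane logs from the cluster's CloudWatch log group.
aws logs tail /aws/eks/myproject-dev/cluster --follow
```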