## ECS cluster

An Amazon ECS cluster is a logical grouping of compute resources. Tasks run on the capacity provided by the cluster’s capacity providers. The cluster itself does not execute workloads — it acts as an organizational and scheduling boundary. The cluster sub-module creates the `aws_ecs_cluster` and `aws_ecs_cluster_capacity_providers` resources. Key configuration points:
- Capacity providers — specify which compute types the cluster can use (`FARGATE`, `FARGATE_SPOT`, or custom EC2-based providers defined in `capacity_providers`).
- Default capacity provider strategy — set via `default_capacity_provider_strategy`. Each entry specifies a `weight` and an optional `base` (minimum number of tasks always placed on this provider). Services can override this strategy with their own `capacity_provider_strategy`.
- Container Insights and Execute Command — configured through the `configuration` and `setting` variables, which map to the `aws_ecs_cluster` resource’s `configuration` and `setting` blocks.
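A minimal sketch of a cluster definition using these variables (the module source path and exact variable shapes are illustrative, based on the description above):

```hcl
module "ecs_cluster" {
  source = "./modules/cluster" # illustrative path

  name = "example"

  # Fargate-only capacity; see "Capacity providers" below
  default_capacity_provider_strategy = {
    FARGATE = {
      weight = 100
    }
  }

  # Maps to the aws_ecs_cluster resource's `setting` block
  setting = [
    {
      name  = "containerInsights"
      value = "enabled"
    }
  ]
}
```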
## ECS service

An ECS service manages the lifecycle of tasks derived from a task definition or task set. It is responsible for:

- Maintaining desired task count — scheduling new tasks when running tasks stop.
- Deployment management — rolling updates, circuit breaker behavior, blue/green deployments with CodeDeploy.
- Load balancer integration — registering and deregistering task IPs or instances with target groups.
- Service discovery and Service Connect — registering tasks with AWS Cloud Map or the ECS Service Connect mesh.
The service sub-module creates the `aws_ecs_service` resource. Depending on whether `ignore_task_definition_changes` is set, a different resource definition is used internally (see below).
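A minimal service module call might look like the following (source path and variable shapes are illustrative):

```hcl
module "ecs_service" {
  source = "./modules/service" # illustrative path

  name        = "web"
  cluster_arn = module.ecs_cluster.arn

  # Task-level sizing (required for Fargate)
  cpu    = 512
  memory = 1024

  # One container, keyed by name; see "Container definition" below
  container_definitions = {
    web = {
      image = "public.ecr.aws/nginx/nginx:latest"
      port_mappings = [
        {
          containerPort = 80
          protocol      = "tcp"
        }
      ]
    }
  }
}
```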
## Deployment controller

The service module supports three deployment controller types via `deployment_controller`:
| Type | Description |
|---|---|
| `ECS` (default) | ECS manages rolling deployments. |
| `CODE_DEPLOY` | AWS CodeDeploy manages blue/green deployments. Use with `ignore_task_definition_changes = true`. |
| `EXTERNAL` | An external system manages task sets. The module creates a task set resource instead of (or alongside) the service. |
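For example, opting into CodeDeploy-managed blue/green deployments would look roughly like this fragment of a service module call (variable names per the table above):

```hcl
  # CodeDeploy drives blue/green deployments for this service
  deployment_controller = {
    type = "CODE_DEPLOY"
  }

  # CodeDeploy rewrites the task definition and load balancer
  # configuration during deployments, so Terraform must ignore them
  ignore_task_definition_changes = true
```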
## Scheduling strategy

- `REPLICA` — ECS maintains a fixed number of tasks (the default for Fargate).
- `DAEMON` — ECS places exactly one task on each active EC2 container instance. Not available on Fargate.
## Task definition

A task definition is the blueprint for one or more containers. It specifies:

- CPU and memory — set via `cpu` and `memory` at the task level (required for Fargate).
- Network mode — `awsvpc` (required for Fargate), `bridge`, `host`, or `none`.
- Container definitions — one or more container configurations (see Container definition below).
- Volumes — EBS, EFS, Docker, FSx, or bind-mount volumes attached to the task.
- Runtime platform — CPU architecture (`X86_64` or `ARM64`) and OS family (`LINUX`, `WINDOWS_SERVER_*`).
- IAM roles — the task execution role (used during task launch) and the task IAM role (used by containers at runtime).
The service sub-module creates an `aws_ecs_task_definition` resource when `create_task_definition = true`. You can also supply a pre-existing task definition ARN via `task_definition_arn`.
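Putting those settings together, a task definition fragment of a service module call might look like this (values illustrative):

```hcl
  create_task_definition = true

  # Task-level CPU/memory, required for Fargate
  cpu    = 256
  memory = 512

  network_mode = "awsvpc" # required for Fargate

  runtime_platform = {
    cpu_architecture        = "ARM64"
    operating_system_family = "LINUX"
  }
```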
## Task definition revision tracking

The module always uses the latest revision of the task definition. It uses `max(aws_ecs_task_definition.this[0].revision, data.aws_ecs_task_definition.this[0].revision)` to ensure that whichever revision is highest — the one Terraform just registered, or one registered externally — is what the service uses. This means external changes to the task definition are picked up without conflicting with Terraform.
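In simplified form, the pattern looks like this (a sketch of the internal expression, not the module's exact code):

```hcl
  task_definition = "${aws_ecs_task_definition.this[0].family}:${max(
    aws_ecs_task_definition.this[0].revision,
    data.aws_ecs_task_definition.this[0].revision,
  )}"
```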
## Task set

Task sets are used when the deployment controller is set to `EXTERNAL`. In this pattern, an external system (such as a custom deployment pipeline) is responsible for managing the task set’s lifecycle — scaling it up and down, routing traffic, and eventually deleting it.
The service sub-module creates an `aws_ecs_task_set` when `deployment_controller.type = "EXTERNAL"` and `create_task_definition = true`. Task sets require a task definition and support the same network, load balancer, and service discovery configuration as services.
When using an external deployment controller, most configuration that is normally on the service resource (network configuration, load balancer, service registries) is instead specified on the task set. The module handles this distinction automatically.
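A sketch of a service configured for an external controller (source path, ARN, and variable shapes illustrative):

```hcl
module "ecs_service" {
  source = "./modules/service" # illustrative path

  name        = "external-deploys"
  cluster_arn = module.ecs_cluster.arn

  deployment_controller = {
    type = "EXTERNAL"
  }

  # With EXTERNAL, the module places this on the aws_ecs_task_set
  # rather than on the aws_ecs_service
  load_balancer = {
    service = {
      target_group_arn = "arn:aws:elasticloadbalancing:..." # illustrative
      container_name   = "web"
      container_port   = 80
    }
  }
}
```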
## Container definition

A container definition is the configuration for a single container within a task. A task can have between 1 and 10 container definitions. The module represents each container definition as an entry in the `container_definitions` map.

The container-definition sub-module converts each map entry into a JSON-compatible container definition object. It strips null values before JSON encoding so the ECS API receives only the fields that were explicitly set.
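For example, a two-container task (an application plus a log-router sidecar) could be expressed as follows (attribute shapes are assumptions based on the description above):

```hcl
  container_definitions = {
    app = {
      image     = "public.ecr.aws/docker/library/httpd:latest"
      essential = true

      # Unset attributes are null and get stripped before JSON
      # encoding, so only these fields reach the ECS API
      environment = [
        {
          name  = "PORT"
          value = "8080"
        }
      ]
    }

    log_router = {
      image     = "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable"
      essential = false
    }
  }
```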
## Logging

The container definition sub-module manages CloudWatch log groups on behalf of containers. The default behavior creates a log group at `/aws/ecs/<service>/<container>` so that it is fully managed through Terraform (tagged, retention-controlled, KMS-encrypted).
- Default: Terraform-managed log group
- ECS-managed log group
- Disable logging
- Firelens (FluentBit)
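As an illustration of the default mode, a container entry that keeps the Terraform-managed log group but tunes it might look like this (the variable names here are assumptions, not confirmed module inputs):

```hcl
  container_definitions = {
    app = {
      image = "public.ecr.aws/docker/library/httpd:latest"

      # Default mode: the sub-module creates and manages
      # /aws/ecs/<service>/<container>; this knob is an assumed name
      cloudwatch_log_group_retention_in_days = 30

      # Other modes (assumed flags, shown disabled):
      # create_cloudwatch_log_group = false  # ECS-managed log group
      # enable_cloudwatch_logging   = false  # disable logging
    }
  }
```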
## Capacity providers

Capacity providers determine where and how tasks are placed. The module supports all ECS capacity provider types.

### Fargate and Fargate Spot
Fargate is AWS-managed serverless compute. No EC2 instances to manage — you pay per task CPU/memory second.

- `FARGATE` — on-demand pricing; tasks are not interrupted.
- `FARGATE_SPOT` — Spot pricing (up to 70% cheaper), but tasks may be reclaimed by AWS with a two-minute warning.

Fargate providers are enabled through `cluster_capacity_providers` and `default_capacity_provider_strategy`.

Constraints:

- Not compatible with EC2-based capacity providers on the same cluster.
- Fargate Spot tasks can be interrupted; design stateless services or handle `SIGTERM` gracefully.
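A common pattern is to pin a baseline of tasks to on-demand Fargate and let the rest ride Spot (weights illustrative):

```hcl
  default_capacity_provider_strategy = {
    FARGATE = {
      base   = 2  # always keep two tasks on uninterruptible capacity
      weight = 20
    }
    FARGATE_SPOT = {
      weight = 80 # remaining tasks prefer cheaper Spot capacity
    }
  }
```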
### EC2 Auto Scaling Group

You create and own the Auto Scaling Group (typically via the `terraform-aws-autoscaling` module) and associate it with the cluster through the `capacity_providers` variable. ECS uses managed scaling to adjust the ASG’s desired capacity based on task demand.

Constraints:

- You are responsible for the EC2 AMI, instance type, and ASG configuration.
- Managed termination protection requires managed scaling to be enabled.
- Not compatible with Fargate capacity providers on the same cluster.
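A sketch of wiring an ASG-backed provider (the nested shape mirrors the `aws_ecs_capacity_provider` resource; the exact module input shape is an assumption):

```hcl
  capacity_providers = {
    ec2 = {
      auto_scaling_group_provider = {
        auto_scaling_group_arn         = module.autoscaling.autoscaling_group_arn
        managed_termination_protection = "ENABLED"

        # Termination protection requires managed scaling to be enabled
        managed_scaling = {
          status          = "ENABLED"
          target_capacity = 80 # keep the ASG ~80% utilized by tasks
        }
      }
    }
  }
```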
### ECS Managed Instances

ECS Managed Instances is a fully-managed EC2 compute option where ECS provisions and manages the EC2 fleet on your behalf using the `managed_instances_provider` block. You supply instance requirements (CPU, memory, architecture) rather than a specific instance type.

This provider type requires additional IAM roles (infrastructure role and node role) and a security group. The cluster sub-module creates all of these automatically when `create_infrastructure_iam_role = true` and `create_node_iam_instance_profile = true`.

Constraints:

- Not compatible with Fargate capacity providers.
- The infrastructure role must be created before the capacity provider (the module handles this with `depends_on`).
- The managed AWS policies for these roles have surprising naming requirements (the role name must start with `ecsInstanceRole`). This module avoids that requirement by creating custom inline-equivalent policies.
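A sketch of a Managed Instances provider (the block shape and attribute names here are assumptions based on the description above):

```hcl
  # Let the cluster sub-module create the required IAM roles
  create_infrastructure_iam_role   = true
  create_node_iam_instance_profile = true

  capacity_providers = {
    managed = {
      managed_instances_provider = {
        # Requirements instead of a fixed instance type (assumed shape)
        instance_requirements = {
          vcpu_count = { min = 2, max = 8 }
          memory_mib = { min = 4096 }
        }
      }
    }
  }
```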
## `desired_count` behavior

`desired_count` is always ignored by the service module after initial creation. This is an intentional design decision: ECS services are almost always paired with Application Auto Scaling, which manages `desired_count` based on CloudWatch metrics. If Terraform were also allowed to manage `desired_count`, it would conflict with the scaler every time `terraform apply` runs — resetting the task count back to whatever is in the Terraform state.

The `desired_count` variable sets the initial count when the service is first created. After that, Auto Scaling (or manual intervention) owns the value. The `lifecycle { ignore_changes = [desired_count] }` block in the service resource enforces this.
If you need to change the task count from Terraform without re-creating the service, use a `null_resource` with a `local-exec` provisioner:
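For example (a sketch; the variable names are illustrative):

```hcl
resource "null_resource" "scale_service" {
  # Re-run whenever the target count changes
  triggers = {
    desired_count = var.desired_count
  }

  provisioner "local-exec" {
    command = "aws ecs update-service --cluster ${var.cluster_name} --service ${var.service_name} --desired-count ${var.desired_count}"
  }
}
```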
## `ignore_task_definition_changes`

When `ignore_task_definition_changes = true`, the module selects a different internal `aws_ecs_service` resource definition that adds `task_definition` and `load_balancer` to the `lifecycle { ignore_changes }` block in addition to `desired_count`.
Use `ignore_task_definition_changes = true` when:

- An external deployment controller (e.g., CodeDeploy, a custom pipeline) is responsible for updating the task definition and load balancer configuration.
- You are using blue/green deployment with CodeDeploy, which modifies the `load_balancer` configuration on the service.
When `ignore_task_definition_changes = false` (the default), Terraform manages the task definition, and any image or container definition changes flow through `terraform apply`. You can still allow an external party to update the image tag by reading the tag from a shared location such as SSM Parameter Store and referencing it in the container definition — see the design doc rationale for details.
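A sketch of that pattern, assuming the deploy pipeline writes the current tag to an SSM parameter (parameter name, paths, and registry URL are illustrative):

```hcl
data "aws_ssm_parameter" "image_tag" {
  name = "/app/image-tag" # written by the deploy pipeline, e.g. "v1.2.3"
}

module "ecs_service" {
  source = "./modules/service" # illustrative path

  # ... other service configuration ...

  container_definitions = {
    app = {
      image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:${data.aws_ssm_parameter.image_tag.value}"
    }
  }
}
```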
