This page explains Amazon ECS concepts in terms of how the module represents and configures them. If you are already familiar with ECS, use this as a map between AWS concepts and module variables.

ECS cluster

An Amazon ECS cluster is a logical grouping of compute resources. Tasks run on the capacity provided by the cluster’s capacity providers. The cluster itself does not execute workloads — it acts as an organizational and scheduling boundary. The cluster sub-module creates the aws_ecs_cluster and aws_ecs_cluster_capacity_providers resources. Key configuration points:
  • Capacity providers — specify which compute types the cluster can use (FARGATE, FARGATE_SPOT, or custom EC2-based providers defined in capacity_providers).
  • Default capacity provider strategy — set via default_capacity_provider_strategy. Each entry specifies a weight and an optional base (minimum number of tasks always placed on this provider). Services can override this strategy with their own capacity_provider_strategy.
  • Container Insights and Execute Command — configured through the configuration and setting variables, which map to the aws_ecs_cluster resource’s configuration and setting blocks.
module "cluster" {
  source  = "terraform-aws-modules/ecs/aws//modules/cluster"
  version = "~> 7.0"

  name = "production"

  cluster_capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy = {
    fargate_spot = {
      weight = 60
    }
    fargate = {
      weight = 40
      base   = 1 # always keep at least 1 task on on-demand Fargate
    }
  }
}

ECS service

An ECS service manages the lifecycle of tasks derived from a task definition or task set. It is responsible for:
  • Maintaining desired task count — scheduling new tasks when running tasks stop.
  • Deployment management — rolling updates, circuit breaker behavior, blue/green deployments with CodeDeploy.
  • Load balancer integration — registering and deregistering task IPs or instances with target groups.
  • Service discovery and Service Connect — registering tasks with AWS Cloud Map or the ECS Service Connect mesh.
The service sub-module creates one aws_ecs_service resource. Depending on whether ignore_task_definition_changes is set, a different resource definition is used internally (see below).
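The responsibilities above map onto the service module's inputs. A minimal sketch of a load-balanced service (the target group reference and subnet IDs are placeholders; verify the exact shape of the load_balancer map against the module's documentation):

```hcl
module "service" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 7.0"

  name        = "api"
  cluster_arn = module.cluster.arn

  desired_count = 2 # initial count only; ignored after creation (see "desired_count behavior")

  # Register task IPs with an ALB target group
  load_balancer = {
    app = {
      target_group_arn = module.alb.target_groups["api"].arn # placeholder reference
      container_name   = "app"
      container_port   = 80
    }
  }

  subnet_ids = ["subnet-abc", "subnet-def"]
}
```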

Deployment controller

The service module supports three deployment controller types via deployment_controller:
  • ECS (default) — ECS manages rolling deployments.
  • CODE_DEPLOY — AWS CodeDeploy manages blue/green deployments. Use with ignore_task_definition_changes = true.
  • EXTERNAL — An external system manages task sets. The module creates a task set resource instead of (or alongside) the service.
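With the default ECS controller, rolling deployments can additionally be guarded by a circuit breaker. A sketch, assuming the module passes a deployment_circuit_breaker input through to the aws_ecs_service block of the same name (input name is an assumption, check the module's variables):

```hcl
module "service" {
  # ... name, cluster_arn, etc.

  deployment_controller = { type = "ECS" }

  # Assumed input name; mirrors the aws_ecs_service deployment_circuit_breaker block.
  # Roll back automatically if the new deployment fails to stabilize.
  deployment_circuit_breaker = {
    enable   = true
    rollback = true
  }
}
```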

Scheduling strategy

  • REPLICA — ECS maintains a fixed number of tasks (the default for Fargate).
  • DAEMON — ECS places exactly one task on each active EC2 container instance. Not available on Fargate.
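A DAEMON service is typical for per-host agents (log shippers, monitoring). A sketch, assuming the service module forwards scheduling_strategy and launch_type to aws_ecs_service (input names are assumptions):

```hcl
module "node_agent" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 7.0"

  name        = "node-agent"
  cluster_arn = module.cluster.arn

  # One task per active EC2 container instance; not valid on Fargate
  scheduling_strategy      = "DAEMON"
  launch_type              = "EC2"
  requires_compatibilities = ["EC2"]
}
```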

Task definition

A task definition is the blueprint for one or more containers. It specifies:
  • CPU and memory — set via cpu and memory at the task level (required for Fargate).
  • Network mode — awsvpc (required for Fargate), bridge, host, or none.
  • Container definitions — one or more container configurations (see Container definition below).
  • Volumes — EBS, EFS, Docker, FSx, or bind-mount volumes attached to the task.
  • Runtime platform — CPU architecture (X86_64 or ARM64) and OS family (LINUX, WINDOWS_SERVER_*).
  • IAM roles — the task execution role (used by the ECS agent to launch the task) and the task IAM role (assumed by the containers at runtime).
The service sub-module creates aws_ecs_task_definition when create_task_definition = true. You can also supply a pre-existing task definition ARN via task_definition_arn.
module "service" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 7.0"

  name        = "api"
  cluster_arn = module.cluster.arn

  cpu    = 1024
  memory = 2048

  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]

  runtime_platform = {
    cpu_architecture        = "ARM64"
    operating_system_family = "LINUX"
  }

  subnet_ids = ["subnet-abc", "subnet-def"]
}
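Volumes from the list above are declared at the task level and mounted per container. A hedged sketch of an EFS volume (the file system ID is a placeholder, and the volume input name is an assumption — the mountPoints shape mirrors the ECS container definition API):

```hcl
module "service" {
  # ... as in the example above ...

  # Assumed input name for task-level volumes
  volume = {
    data = {
      efs_volume_configuration = {
        file_system_id     = "fs-0123456789abcdef0" # placeholder
        transit_encryption = "ENABLED"
      }
    }
  }

  container_definitions = {
    app = {
      image = "public.ecr.aws/nginx/nginx:1.27"

      # Mount the task volume into the container
      mountPoints = [{
        sourceVolume  = "data"
        containerPath = "/var/data"
        readOnly      = false
      }]
    }
  }
}
```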

Task definition revision tracking

The module always uses the latest revision of the task definition. It uses max(aws_ecs_task_definition.this[0].revision, data.aws_ecs_task_definition.this[0].revision) to ensure that whichever revision is highest — the one Terraform just registered, or one registered externally — is what the service uses. This means external changes to the task definition are picked up without conflicting with Terraform.

Task set

Task sets are used when the deployment controller is set to EXTERNAL. In this pattern, an external system (such as a custom deployment pipeline) is responsible for managing the task set’s lifecycle — scaling it up and down, routing traffic, and eventually deleting it. The service sub-module creates aws_ecs_task_set when deployment_controller.type = "EXTERNAL" and create_task_definition = true. Task sets require a task definition and support the same network, load balancer, and service discovery configuration as services.
When using an external deployment controller, most configuration that is normally on the service resource (network configuration, load balancer, service registries) is instead specified on the task set. The module handles this distinction automatically.
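A minimal sketch of the EXTERNAL pattern (subnet IDs are placeholders; the module decides internally that network and load balancer settings land on the task set rather than the service):

```hcl
module "service" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 7.0"

  name        = "api"
  cluster_arn = module.cluster.arn

  # EXTERNAL controller: the module creates an aws_ecs_task_set,
  # and an external pipeline owns its scaling, traffic, and deletion
  deployment_controller = { type = "EXTERNAL" }

  subnet_ids = ["subnet-abc", "subnet-def"]
}
```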

Container definition

A container definition is the configuration for a single container within a task. A task can have between 1 and 10 container definitions. The module represents each container definition as an entry in the container_definitions map:
container_definitions = {
  # Key becomes the container name unless `name` is explicitly set
  app = {
    image  = "public.ecr.aws/nginx/nginx:1.27"
    cpu    = 512
    memory = 1024

    portMappings = [{
      containerPort = 80
      protocol      = "tcp"
    }]

    environment = [
      { name = "ENV", value = "production" }
    ]

    secrets = [
      { name = "DB_PASSWORD", valueFrom = "arn:aws:secretsmanager:..." }
    ]
  }

  # Sidecar container
  fluent-bit = {
    image = data.aws_ssm_parameter.fluentbit.value
    firelensConfiguration = { type = "fluentbit" }
    essential = false
  }
}
The container-definition sub-module converts each map entry into a JSON-compatible container definition object. It strips null values before JSON encoding so the ECS API receives only the fields that were explicitly set.

Logging

The container definition sub-module manages CloudWatch log groups on behalf of containers. The default behavior creates a log group at /aws/ecs/<service>/<container> so that it is fully managed through Terraform (tagged, retention-controlled, KMS-encrypted).
container_definitions = {
  app = {
    image = "public.ecr.aws/nginx/nginx:1.27"
    # Log group is created and managed by Terraform (default)
  }
}
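The managed log group can typically be tuned per container through the sub-module's CloudWatch inputs. A sketch assuming input names like cloudwatch_log_group_retention_in_days and cloudwatch_log_group_kms_key_id (verify against the container-definition sub-module's variables; the KMS key reference is a placeholder):

```hcl
container_definitions = {
  app = {
    image = "public.ecr.aws/nginx/nginx:1.27"

    # Assumed input names -- check the sub-module's variables
    cloudwatch_log_group_retention_in_days = 30
    cloudwatch_log_group_kms_key_id        = aws_kms_key.logs.arn # placeholder
  }
}
```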

Capacity providers

Capacity providers determine where and how tasks are placed. The module supports all ECS capacity provider types.

Fargate

Fargate is AWS-managed serverless compute: there are no EC2 instances to manage, and you pay per second of task CPU and memory.
  • FARGATE — on-demand pricing, tasks are not interrupted.
  • FARGATE_SPOT — Spot pricing (up to 70% cheaper), but tasks may be reclaimed by AWS with a 2-minute warning.
Reference these by name in cluster_capacity_providers and default_capacity_provider_strategy:
cluster_capacity_providers = ["FARGATE", "FARGATE_SPOT"]

default_capacity_provider_strategy = {
  fargate_spot = { weight = 80 }
  fargate      = { weight = 20, base = 1 }
}
Constraints:
  • Not compatible with EC2-based capacity providers on the same cluster.
  • Fargate Spot tasks can be interrupted; design stateless services or handle SIGTERM gracefully.

EC2 Auto Scaling Group

You create and own the Auto Scaling Group (typically via the terraform-aws-autoscaling module) and associate it with the cluster through the capacity_providers variable. ECS uses managed scaling to adjust the ASG’s desired capacity based on task demand.
capacity_providers = {
  my_asg = {
    name = aws_ecs_capacity_provider.asg.name
    auto_scaling_group_provider = {
      auto_scaling_group_arn = module.autoscaling.autoscaling_group_arn
      managed_scaling = {
        status          = "ENABLED"
        target_capacity = 80
      }
      managed_termination_protection = "ENABLED"
    }
  }
}
Constraints:
  • You are responsible for the EC2 AMI, instance type, and ASG configuration.
  • Managed termination protection requires managed scaling to be enabled.
  • Not compatible with Fargate capacity providers on the same cluster.

ECS Managed Instances

ECS Managed Instances is a fully managed EC2 compute option in which ECS provisions and manages the EC2 fleet on your behalf through the managed_instances_provider block. You supply instance requirements (CPU, memory, architecture) rather than a specific instance type.
capacity_providers = {
  managed = {
    managed_instances_provider = {
      instance_launch_template = {
        capacity_option_type = "SPOT"
        instance_requirements = {
          vcpu_count  = { min = 2, max = 8 }
          memory_mib  = { min = 4096, max = 16384 }
        }
        network_configuration = {
          subnets = ["subnet-abc"]
        }
      }
    }
  }
}
This provider type requires additional IAM roles (an infrastructure role and a node role) and a security group. The cluster sub-module creates all of these automatically when create_infrastructure_iam_role = true and create_node_iam_instance_profile = true.
Constraints:
  • Not compatible with Fargate capacity providers.
  • The infrastructure role must be created before the capacity provider (the module handles this with depends_on).
  • The managed AWS policies for these roles have surprising naming requirements (role must start with ecsInstanceRole). This module avoids that requirement by creating custom inline-equivalent policies.

desired_count behavior

desired_count is always ignored by the service module after initial creation. This is an intentional design decision:
ECS services are almost always paired with Application Auto Scaling, which manages desired_count based on CloudWatch metrics. If Terraform were also allowed to manage desired_count, it would conflict with the scaler on every terraform apply, resetting the task count to whatever value is in the Terraform configuration.
The desired_count variable sets the initial count when the service is first created. After that, Auto Scaling (or manual intervention) owns the value. The lifecycle { ignore_changes = [desired_count] } block in the service resource enforces this. If you need to change the task count from Terraform without re-creating the service, use a null_resource with a local-exec provisioner:
resource "null_resource" "update_desired_count" {
  triggers = {
    desired_count = 3
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]

    command = <<-EOT
      aws ecs update-service \
        --cluster ${module.ecs.cluster_name} \
        --service ${module.ecs_service.name} \
        --desired-count ${self.triggers.desired_count}
    EOT
  }
}

ignore_task_definition_changes

When ignore_task_definition_changes = true, the module selects a different internal aws_ecs_service resource definition that adds task_definition and load_balancer to the lifecycle { ignore_changes } block in addition to desired_count.
Changing ignore_task_definition_changes after the service is created forces a full service replacement. This is a consequence of Terraform not supporting dynamic lifecycle blocks. Change this setting only during initial provisioning.
Use ignore_task_definition_changes = true when:
  • An external deployment controller (e.g., CodeDeploy, a custom pipeline) is responsible for updating the task definition and load balancer configuration.
  • You are using Blue/Green deployment with CodeDeploy, which modifies the load_balancer configuration on the service.
module "ecs_service" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 7.0"

  name                           = "api"
  cluster_arn                    = module.cluster.arn
  ignore_task_definition_changes = true

  # CodeDeploy manages traffic shifting between blue and green target groups
  deployment_controller = { type = "CODE_DEPLOY" }

  # ... other configuration
}
When ignore_task_definition_changes = false (the default), Terraform manages the task definition and any image or container definition changes flow through terraform apply. You can still allow an external party to update the image tag by reading the tag from a shared location such as SSM Parameter Store and referencing it in the container definition — see the design doc rationale for details.
