Overview

S3 Storage Class Analysis (Analytics) monitors access patterns for objects in a bucket and generates data that helps you decide when to transition objects to a less-frequently accessed storage class. The analysis can export results to a destination S3 bucket for long-term trend analysis in tools such as Amazon QuickSight or Amazon Redshift.

The analytics_configuration variable

variable "analytics_configuration" {
  description = "Map containing bucket analytics configuration."
  type        = any
  default     = {}
}
The map key becomes the name of the aws_s3_bucket_analytics_configuration resource. Each value supports the following fields:
filter.prefix - Limit analysis to objects with this key prefix.
filter.tags - Limit analysis to objects with these tags.
storage_class_analysis.output_schema_version - Schema version for the exported data (e.g. "V_1").
storage_class_analysis.destination_bucket_arn - ARN of the S3 bucket where results are exported.
storage_class_analysis.destination_account_id - Account ID that owns the destination bucket.
storage_class_analysis.export_format - Format of the exported data; defaults to "CSV".
storage_class_analysis.export_prefix - Key prefix for exported analytics data.
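Putting these fields together, a single entry that filters by prefix and tags and exports its results might look like the following sketch (the destination bucket ARN and account ID are illustrative placeholders, not values from this module):

```hcl
analytics_configuration = {
  full_analysis = {
    filter = {
      prefix = "logs/"
      tags = {
        environment = "production"
      }
    }
    storage_class_analysis = {
      output_schema_version  = "V_1"
      destination_bucket_arn = "arn:aws:s3:::my-analytics-destination" # placeholder
      destination_account_id = "111111111111"                          # placeholder
      export_format          = "CSV"
      export_prefix          = "analytics/"
    }
  }
}
```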

How the resource is created

For each entry in analytics_configuration the module creates one aws_s3_bucket_analytics_configuration resource:
resource "aws_s3_bucket_analytics_configuration" "this" {
  for_each = { for k, v in var.analytics_configuration : k => v if local.create_bucket && !var.is_directory_bucket }

  bucket = aws_s3_bucket.this[0].id
  name   = each.key

  dynamic "filter" {
    for_each = length(try(flatten([each.value.filter]), [])) == 0 ? [] : [true]

    content {
      prefix = try(each.value.filter.prefix, null)
      tags   = try(each.value.filter.tags, null)
    }
  }

  dynamic "storage_class_analysis" {
    for_each = length(try(flatten([each.value.storage_class_analysis]), [])) == 0 ? [] : [true]

    content {
      data_export {
        output_schema_version = try(each.value.storage_class_analysis.output_schema_version, null)

        destination {
          s3_bucket_destination {
            bucket_arn        = try(each.value.storage_class_analysis.destination_bucket_arn, aws_s3_bucket.this[0].arn)
            bucket_account_id = try(each.value.storage_class_analysis.destination_account_id, data.aws_caller_identity.current.id)
            format            = try(each.value.storage_class_analysis.export_format, "CSV")
            prefix            = try(each.value.storage_class_analysis.export_prefix, null)
          }
        }
      }
    }
  }
}
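Because each dynamic block is guarded by a `length(try(flatten([...]), [])) == 0` check, omitting `filter` or `storage_class_analysis` from an entry simply drops the corresponding block from the rendered resource. The smallest possible entry is therefore an empty map, which analyzes the entire bucket with no export:

```hcl
analytics_configuration = {
  # Entire-bucket analysis: no filter block, no data export.
  entire_bucket = {}
}
```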

Cross-account analytics delivery

When the analytics destination bucket is in a different account from the source bucket, the destination bucket needs a policy that allows the S3 service to write analytics data. The module automates this with three variables:
attach_analytics_destination_policy (bool) - Set to true on the destination bucket to attach the required policy.
analytics_source_bucket_arn (string) - ARN of the source bucket that will deliver analytics data.
analytics_source_account_id (string) - Account ID of the source bucket owner.
analytics_self_source_destination (bool) - Set to true when the source and destination bucket are the same.
Unlike attach_inventory_destination_policy, the attach_analytics_destination_policy flag is not included in the module's internal attach_policy local, so setting it alone does not create the bucket policy resource. You must also set another attach_* policy flag (for example attach_inventory_destination_policy = true), or set attach_policy = true together with a custom policy, to trigger creation of the bucket policy.
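On the destination side, a module call along these lines attaches the delivery policy. This is a sketch: the bucket name, ARNs, and account ID are placeholders, and the exact variable names should be verified against the module version in use.

```hcl
module "analytics_destination_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-analytics-destination" # placeholder

  attach_analytics_destination_policy = true
  analytics_source_bucket_arn         = "arn:aws:s3:::my-source-bucket" # placeholder
  analytics_source_account_id         = "111111111111"                  # placeholder

  # attach_analytics_destination_policy alone does not trigger policy
  # creation; another attach_* flag (or attach_policy with a custom
  # policy) is needed as well.
  attach_inventory_destination_policy = true
}
```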

Examples

Analyze only the documents/ prefix. No export destination is configured, so the results are visible only in the AWS Console.
module "analytics_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-bucket"

  analytics_configuration = {
    prefix_documents = {
      filter = {
        prefix = "documents/"
      }
    }
  }
}
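A second sketch exports results to a separate destination bucket in the same account. The bucket names and ARN are illustrative placeholders; the destination bucket must also carry the delivery policy described in the next section.

```hcl
module "analytics_export_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-bucket" # placeholder

  analytics_configuration = {
    documents_with_export = {
      filter = {
        prefix = "documents/"
      }
      storage_class_analysis = {
        output_schema_version  = "V_1"
        destination_bucket_arn = "arn:aws:s3:::my-analytics-destination" # placeholder
        export_format          = "CSV"
        export_prefix          = "analytics/documents/"
      }
    }
  }
}
```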

How the destination policy works

When attach_analytics_destination_policy = true, the module attaches a policy based on aws_iam_policy_document.inventory_and_analytics_destination_policy. This policy allows the S3 service to write analytics data to the destination bucket from the specified source account and source bucket ARN:
statement {
  sid    = "destinationInventoryAndAnalyticsPolicy"
  effect = "Allow"

  actions = ["s3:PutObject"]

  principals {
    type        = "Service"
    identifiers = ["s3.amazonaws.com"]
  }

  condition {
    test     = "ArnLike"
    variable = "aws:SourceArn"
    values   = [var.analytics_source_bucket_arn]  # or the bucket's own ARN when self_source_destination
  }

  condition {
    test     = "StringEquals"
    variable = "aws:SourceAccount"
    values   = [var.analytics_source_account_id]
  }

  condition {
    test     = "StringEquals"
    variable = "s3:x-amz-acl"
    values   = ["bucket-owner-full-control"]
  }
}
Analytics configuration is not supported on S3 Directory Buckets. The aws_s3_bucket_analytics_configuration resource is skipped automatically when is_directory_bucket = true.