The S3 import feature allows you to create a new RDS MySQL instance populated with data from a Percona XtraBackup backup stored in an S3 bucket. This is the primary mechanism for migrating on-premises or self-managed MySQL databases into RDS.
S3 import is only supported for MySQL. It is not available for PostgreSQL, Oracle, or SQL Server. The backup must be in Percona XtraBackup format; mysqldump exports are not supported by this feature.
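To produce a compatible backup, run Percona XtraBackup against the source database and upload the resulting files to S3. A minimal sketch, assuming `xtrabackup` and the AWS CLI are installed; the user, paths, and bucket name below are placeholders:

```shell
# Take a full physical backup of the source MySQL server
# (user, paths, and bucket are placeholders — substitute your own)
xtrabackup --backup --user=backup_user --password --target-dir=/backups/full

# For large databases, the backup can instead be streamed and split into
# chunks, e.g.:
#   xtrabackup --backup --user=backup_user --password --stream=xbstream \
#     | split -d --bytes=500MB - /backups/full/backup.xbstream

# Upload the backup files to the bucket the ingestion role can read from
aws s3 sync /backups/full s3://my-rds-backups/mysql/production/
```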

Configuration

main.tf
provider "aws" {
  region = local.region
}

data "aws_availability_zones" "available" {}

locals {
  name   = "s3-import"
  region = "eu-west-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Name       = local.name
    Example    = local.name
    Repository = "https://github.com/terraform-aws-modules/terraform-aws-rds"
  }
}

################################################################################
# RDS Module
################################################################################

module "db" {
  source = "terraform-aws-modules/rds/aws"

  identifier = local.name

  # All available versions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt
  engine               = "mysql"
  engine_version       = "8.0.43"
  family               = "mysql8.0" # DB parameter group
  major_engine_version = "8.0"      # DB option group
  instance_class       = "db.t4g.large"

  allocated_storage     = 20
  max_allocated_storage = 100

  db_name  = "s3Import"
  username = "s3_import_user"
  port     = 3306

  # S3 import https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.html
  s3_import = {
    source_engine_version = "8.0.43"
    bucket_name           = module.import_s3_bucket.s3_bucket_id
    ingestion_role        = aws_iam_role.s3_import.arn
  }

  multi_az               = true
  db_subnet_group_name   = module.vpc.database_subnet_group
  vpc_security_group_ids = [module.security_group.security_group_id]

  maintenance_window              = "Mon:00:00-Mon:03:00"
  backup_window                   = "03:00-06:00"
  enabled_cloudwatch_logs_exports = ["audit", "general"]

  backup_retention_period = 0
  skip_final_snapshot     = true
  deletion_protection     = false

  tags = local.tags
}

################################################################################
# Supporting Resources
################################################################################

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 6.0"

  name = local.name
  cidr = local.vpc_cidr

  azs              = local.azs
  public_subnets   = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
  private_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 3)]
  database_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 6)]

  create_database_subnet_group = true

  tags = local.tags
}

module "security_group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "~> 5.0"

  name        = local.name
  description = "S3 import VPC example security group"
  vpc_id      = module.vpc.vpc_id

  # ingress
  ingress_with_self = [
    {
      rule        = "https-443-tcp"
      description = "Allow all internal HTTPS"
    },
  ]

  ingress_with_cidr_blocks = [
    {
      from_port   = 3306
      to_port     = 3306
      protocol    = "tcp"
      description = "MySQL access from within VPC"
      cidr_blocks = module.vpc.vpc_cidr_block
    },
  ]

  # egress
  computed_egress_with_self = [
    {
      rule        = "https-443-tcp"
      description = "Allow all internal HTTPS"
    },
  ]
  number_of_computed_egress_with_self = 1

  egress_cidr_blocks = ["0.0.0.0/0"]
  egress_rules       = ["all-all"]

  tags = local.tags
}

module "import_s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 5.0"

  bucket_prefix = "${local.name}-"
  force_destroy = true

  tags = local.tags
}

data "aws_iam_policy_document" "s3_import_assume" {
  statement {
    actions = [
      "sts:AssumeRole",
    ]

    principals {
      type        = "Service"
      identifiers = ["rds.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "s3_import" {
  name_prefix           = "${local.name}-"
  description           = "IAM role to allow RDS to import MySQL backup from S3"
  assume_role_policy    = data.aws_iam_policy_document.s3_import_assume.json
  force_detach_policies = true

  tags = local.tags
}

data "aws_iam_policy_document" "s3_import" {
  statement {
    actions = [
      "s3:ListBucket",
      "s3:GetBucketLocation",
    ]

    resources = [
      module.import_s3_bucket.s3_bucket_arn
    ]
  }

  statement {
    actions = [
      "s3:GetObject",
    ]

    resources = [
      "${module.import_s3_bucket.s3_bucket_arn}/*",
    ]
  }
}

resource "aws_iam_role_policy" "s3_import" {
  name_prefix = "${local.name}-"
  role        = aws_iam_role.s3_import.id
  policy      = data.aws_iam_policy_document.s3_import.json

  # We need the files uploaded before the RDS instance is created, and the instance
  # also needs this role so this is an easy way of ensuring the backup is uploaded before
  # the instance creation starts
  provisioner "local-exec" {
    command = "unzip backup.zip && aws s3 sync ${path.module}/backup s3://${module.import_s3_bucket.s3_bucket_id}"
  }
}
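Restoring a large backup can take a while. One way to watch progress after `terraform apply` starts creating the instance (a sketch assuming the AWS CLI is configured for the same account; the identifier and region match the example's locals):

```shell
# Poll the instance status; it stays in "creating" while the backup is
# being restored and becomes "available" once the import finishes
aws rds describe-db-instances \
  --db-instance-identifier s3-import \
  --region eu-west-1 \
  --query 'DBInstances[0].DBInstanceStatus' \
  --output text
```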

The s3_import variable

The s3_import variable is an object with the following fields:
Field | Required | Description
----- | -------- | -----------
source_engine_version | Yes | The MySQL version of the XtraBackup backup. Must match the full patch version of the backup (e.g. "8.0.43").
bucket_name | Yes | The name of the S3 bucket containing the backup files.
bucket_prefix | No | The key prefix (folder path) within the bucket where backup files are located. Omit to use the bucket root.
ingestion_role | Yes | The ARN of an IAM role that grants RDS permission to read from the S3 bucket.
s3_import = {
  source_engine_version = "8.0.43"
  bucket_name           = "my-rds-backups"
  bucket_prefix         = "mysql/production/"
  ingestion_role        = aws_iam_role.s3_import.arn
}
The source_engine_version in the s3_import block must match the exact patch version of MySQL used to create the XtraBackup backup (e.g. "8.0.43"), even though engine_version at the module level can use the major version shorthand "8.0". The RDS import process validates this version match.
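If you are unsure which version a backup was taken from, the xtrabackup_info file written alongside the backup data records it (the path below is a placeholder for wherever your backup was written):

```shell
# xtrabackup_info includes a server_version line, e.g.
# "server_version = 8.0.43" — use this value for source_engine_version
grep -m1 '^server_version' /backups/full/xtrabackup_info
```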

IAM role requirements

RDS must be able to read objects from your S3 bucket. The ingestion role requires:
  1. A trust policy that allows rds.amazonaws.com to assume the role.
  2. A permissions policy that grants at minimum:
    • s3:ListBucket and s3:GetBucketLocation on the bucket ARN
    • s3:GetObject on all objects in the bucket (bucket-arn/*)
The example creates these resources with aws_iam_role, aws_iam_role_policy, and supporting data sources:
data "aws_iam_policy_document" "s3_import_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["rds.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "s3_import" {
  statement {
    actions   = ["s3:ListBucket", "s3:GetBucketLocation"]
    resources = [module.import_s3_bucket.s3_bucket_arn]
  }
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${module.import_s3_bucket.s3_bucket_arn}/*"]
  }
}

Ensuring backups are uploaded before RDS creation

The example uses a local-exec provisioner on the aws_iam_role_policy resource to upload the backup before the RDS instance is created. Because the module’s s3_import block references ingestion_role = aws_iam_role.s3_import.arn, Terraform will only create the RDS instance after the IAM role policy (and its provisioner) completes:
resource "aws_iam_role_policy" "s3_import" {
  # ...
  provisioner "local-exec" {
    command = "unzip backup.zip && aws s3 sync ${path.module}/backup s3://${module.import_s3_bucket.s3_bucket_id}"
  }
}
This dependency ordering ensures the XtraBackup files are present in S3 before RDS attempts to read them during instance creation.
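If you prefer an explicit dependency over the implicit one created by the ARN reference, Terraform's depends_on meta-argument achieves the same ordering (a sketch; the other module arguments are elided):

```hcl
module "db" {
  source = "terraform-aws-modules/rds/aws"

  # ... other arguments as shown above ...

  # Explicitly wait for the role policy (and its local-exec provisioner,
  # which uploads the backup) before creating the instance
  depends_on = [aws_iam_role_policy.s3_import]
}
```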

Outputs

Output | Description
------ | -----------
db_instance_address | DNS hostname of the RDS instance
db_instance_endpoint | Full connection endpoint including port
db_instance_identifier | The RDS instance identifier
db_instance_engine_version_actual | The resolved engine version running
db_instance_port | Database port (3306)
db_instance_name | The database name
db_instance_username | Master username (sensitive)
db_instance_master_user_secret_arn | ARN of the Secrets Manager secret
db_parameter_group_id | The parameter group name
db_parameter_group_arn | ARN of the parameter group
db_instance_cloudwatch_log_groups | Map of CloudWatch log group names and ARNs
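To surface these from your own configuration, re-export the module outputs you need, for example:

```hcl
output "db_instance_endpoint" {
  description = "Connection endpoint of the imported instance"
  value       = module.db.db_instance_endpoint
}

output "db_instance_master_user_secret_arn" {
  description = "Secrets Manager secret holding the master password"
  value       = module.db.db_instance_master_user_secret_arn
}
```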
