Aurora MySQL supports restoring a cluster directly from a Percona XtraBackup stored in Amazon S3. This lets you migrate an existing MySQL database to Aurora without taking the source offline and without a logical dump/restore cycle.

S3 import is MySQL only; it is not supported for Aurora PostgreSQL. The source backup must be in Percona XtraBackup format: standard `mysqldump` files are not accepted.
## How it works
When `s3_import` is set, Aurora creates the cluster by streaming the XtraBackup files from S3 instead of initialising an empty database. The cluster must be given an IAM role that grants RDS read access to the S3 bucket. This role is passed both via `role_associations` (to attach it to the cluster) and via `s3_import.ingestion_role` (to identify which role to use during the restore).
## Prerequisites

### Create a Percona XtraBackup
On your source MySQL server, use Percona XtraBackup to take a full backup and then prepare it:

```shell
xtrabackup --backup  --target-dir=/var/backup/full
xtrabackup --prepare --target-dir=/var/backup/full
```
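A quick sanity check before uploading (a sketch; it assumes the `/var/backup/full` target directory used above): a prepared full backup records `backup_type = full-prepared` in the `xtrabackup_checkpoints` file that XtraBackup writes into the backup directory.

```shell
# A prepared full backup has "backup_type = full-prepared" in
# xtrabackup_checkpoints; an unprepared one has "full-backuped".
BACKUP_DIR="${BACKUP_DIR:-/var/backup/full}"
if grep -q '^backup_type = full-prepared' "$BACKUP_DIR/xtrabackup_checkpoints"; then
  echo "backup is prepared"
else
  echo "backup is not prepared; run xtrabackup --prepare first"
fi
```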
The backup directory must be prepared (i.e. `--prepare` must have been run) before uploading.

### Upload the backup to S3
Upload the prepared backup directory to an S3 bucket. The path within the bucket becomes the `bucket_prefix`:

```shell
aws s3 sync /var/backup/full s3://my-import-bucket/backup/
```
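When the backup files live under a key prefix rather than at the bucket root, that prefix is passed via `bucket_prefix`. A sketch, assuming the hypothetical `my-import-bucket` bucket and `backup/` prefix from the upload step above:

```hcl
s3_import = {
  source_engine_version = "5.7.12"
  bucket_name           = "my-import-bucket"
  bucket_prefix         = "backup"
  ingestion_role        = aws_iam_role.s3_import.arn
}
```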
### Create the IAM ingestion role

RDS needs an IAM role that it can assume to read from your S3 bucket. The role must trust the `rds.amazonaws.com` service principal and have `s3:ListBucket`, `s3:GetBucketLocation`, and `s3:GetObject` permissions on the bucket.

```hcl
data "aws_iam_policy_document" "s3_import_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["rds.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "s3_import" {
  name_prefix           = "aurora-s3-import-"
  description           = "IAM role to allow RDS to import MySQL backup from S3"
  assume_role_policy    = data.aws_iam_policy_document.s3_import_assume.json
  force_detach_policies = true
}

data "aws_iam_policy_document" "s3_import" {
  statement {
    actions = [
      "s3:ListBucket",
      "s3:GetBucketLocation",
    ]
    resources = [aws_s3_bucket.import.arn]
  }

  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.import.arn}/*"]
  }
}

resource "aws_iam_role_policy" "s3_import" {
  name_prefix = "aurora-s3-import-"
  role        = aws_iam_role.s3_import.id
  policy      = data.aws_iam_policy_document.s3_import.json
}
```
## Provision the Aurora cluster with s3_import

Pass the `s3_import` block and the `role_associations` to the module. Aurora reads the backup during cluster creation.

```hcl
module "aurora" {
  source = "terraform-aws-modules/rds-aurora/aws"

  name            = "ex-s3-import"
  engine          = "aurora-mysql"
  engine_version  = "5.7.12"
  master_username = "root"

  cluster_instance_class = "db.r8g.large"
  instances              = { 1 = {} }

  vpc_id               = module.vpc.vpc_id
  db_subnet_group_name = module.vpc.database_subnet_group_name
  security_group_ingress_rules = {
    private-az1 = {
      cidr_ipv4 = element(module.vpc.private_subnets_cidr_blocks, 0)
    }
    private-az2 = {
      cidr_ipv4 = element(module.vpc.private_subnets_cidr_blocks, 1)
    }
    private-az3 = {
      cidr_ipv4 = element(module.vpc.private_subnets_cidr_blocks, 2)
    }
  }

  role_associations = {
    s3Import = {
      role_arn = aws_iam_role.s3_import.arn
    }
  }

  s3_import = {
    source_engine_version = "5.7.12"
    bucket_name           = module.import_s3_bucket.s3_bucket_id
    ingestion_role        = aws_iam_role.s3_import.arn
  }

  skip_final_snapshot             = true
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```
## Full working example

The following is the complete `main.tf` from the `examples/s3-import` directory, including the S3 bucket and IAM resources:
```hcl
provider "aws" {
  region = local.region
}

data "aws_availability_zones" "available" {
  # Exclude local zones
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

locals {
  name     = "ex-s3-import"
  region   = "eu-west-1"
  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Example   = local.name
    Terraform = "true"
  }
}

module "aurora" {
  source = "terraform-aws-modules/rds-aurora/aws"

  name            = local.name
  engine          = "aurora-mysql"
  engine_version  = "5.7.12"
  master_username = "root"

  cluster_instance_class = "db.r8g.large"
  instances              = { 1 = {} }

  vpc_id               = module.vpc.vpc_id
  db_subnet_group_name = module.vpc.database_subnet_group_name
  security_group_ingress_rules = {
    private-az1 = {
      cidr_ipv4 = element(module.vpc.private_subnets_cidr_blocks, 0)
    }
    private-az2 = {
      cidr_ipv4 = element(module.vpc.private_subnets_cidr_blocks, 1)
    }
    private-az3 = {
      cidr_ipv4 = element(module.vpc.private_subnets_cidr_blocks, 2)
    }
  }

  role_associations = {
    s3Import = {
      role_arn = aws_iam_role.s3_import.arn
    }
  }

  # S3 import: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html
  s3_import = {
    source_engine_version = "5.7.12"
    bucket_name           = module.import_s3_bucket.s3_bucket_id
    ingestion_role        = aws_iam_role.s3_import.arn
  }

  skip_final_snapshot             = true
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]

  tags = local.tags
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 6.0"

  name = local.name
  cidr = local.vpc_cidr

  azs              = local.azs
  public_subnets   = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
  private_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 3)]
  database_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 6)]

  tags = local.tags
}

module "import_s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 5.0"

  bucket_prefix = "${local.name}-"
  acl           = "private"
  force_destroy = true

  tags = local.tags
}

data "aws_iam_policy_document" "s3_import_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["rds.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "s3_import" {
  name_prefix           = "${local.name}-"
  description           = "IAM role to allow RDS to import MySQL backup from S3"
  assume_role_policy    = data.aws_iam_policy_document.s3_import_assume.json
  force_detach_policies = true

  tags = local.tags
}

data "aws_iam_policy_document" "s3_import" {
  statement {
    actions = [
      "s3:ListBucket",
      "s3:GetBucketLocation",
    ]
    resources = [module.import_s3_bucket.s3_bucket_arn]
  }

  statement {
    actions   = ["s3:GetObject"]
    resources = ["${module.import_s3_bucket.s3_bucket_arn}/*"]
  }
}

resource "aws_iam_role_policy" "s3_import" {
  name_prefix = "${local.name}-"
  role        = aws_iam_role.s3_import.id
  policy      = data.aws_iam_policy_document.s3_import.json
}
```
## The `s3_import` variable

| Attribute | Type | Required | Description |
|---|---|---|---|
| `bucket_name` | `string` | yes | Name of the S3 bucket containing the XtraBackup |
| `bucket_prefix` | `string` | no | Path prefix within the bucket where the backup files are located |
| `ingestion_role` | `string` | yes | ARN of the IAM role that RDS assumes to read from S3 |
| `source_engine_version` | `string` | yes | MySQL version of the source database (e.g. `"5.7.12"`) |
## IAM requirements

The ingestion role must:

- Trust `rds.amazonaws.com` as a service principal (allow `sts:AssumeRole`)
- Have `s3:ListBucket` and `s3:GetBucketLocation` on the bucket ARN
- Have `s3:GetObject` on all objects within the bucket (`bucket_arn/*`)

The role ARN must be passed in both places:

- `s3_import.ingestion_role`: used during the cluster creation restore operation
- `role_associations`: attaches the role to the cluster for ongoing association
```hcl
role_associations = {
  s3Import = {
    role_arn = aws_iam_role.s3_import.arn
    # feature_name defaults to the map key: "s3Import"
  }
}

s3_import = {
  source_engine_version = "5.7.12"
  bucket_name           = "my-import-bucket"
  ingestion_role        = aws_iam_role.s3_import.arn
}
```