Aurora uses S3-compatible object storage for files, uploads, and artifacts. The storage layer is pluggable and supports multiple backends.
## Supported Backends
- **SeaweedFS** (default) - Apache 2.0 licensed, included in docker-compose
- **AWS S3** - Amazon's object storage service
- **Cloudflare R2** - S3-compatible storage with zero egress fees
- **Backblaze B2** - Affordable S3-compatible storage
- **Google Cloud Storage** - Via S3 interoperability
- **MinIO** - Self-hosted S3-compatible storage
- Any S3-compatible service
## Configuration

### Environment Variables
**STORAGE_BUCKET** (string, required, default: `"aurora-storage"`)

S3 bucket name. Must exist before starting Aurora (with SeaweedFS it is created automatically on first access).
**STORAGE_ENDPOINT_URL** (string, default: `"http://seaweedfs-filer:8333"`)

S3 endpoint URL.

- AWS S3: omit this variable (uses AWS SDK defaults)
- SeaweedFS: `http://seaweedfs-filer:8333` (Docker) or `http://localhost:8333` (local)
- Cloudflare R2: `https://<account_id>.r2.cloudflarestorage.com`
- MinIO: `http://minio:9000`
**STORAGE_ACCESS_KEY** (string, required, default: `"admin"`)

S3 access key ID.

**STORAGE_SECRET_KEY** (string, required, default: `"admin"`)

S3 secret access key.
**STORAGE_REGION** (string, default: `"us-east-1"`)

S3 region. Use `auto` for Cloudflare R2.
**STORAGE_USE_SSL** (boolean)

Enable SSL/TLS for storage connections. Set to `true` for production.

**STORAGE_VERIFY_SSL** (boolean)

Verify SSL certificates. Set to `true` in production. Disabling SSL verification exposes connections to MITM attacks; only disable it for local development or trusted networks.

A further setting enables Redis caching for file listings (see the Caching section below).
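Taken together, the variables above can be collected into a single settings object before a storage client is constructed. The sketch below is illustrative only — the `StorageSettings` name and loader are assumptions, not Aurora's actual code — but it encodes the documented defaults:

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageSettings:
    bucket: str
    endpoint_url: Optional[str]  # None -> AWS SDK defaults
    access_key: str
    secret_key: str
    region: str
    use_ssl: bool
    verify_ssl: bool

def load_storage_settings(env=os.environ) -> StorageSettings:
    """Read the STORAGE_* variables, applying the documented defaults."""
    return StorageSettings(
        bucket=env.get("STORAGE_BUCKET", "aurora-storage"),
        # An empty value means "omit" (AWS S3 via SDK defaults).
        endpoint_url=env.get("STORAGE_ENDPOINT_URL", "http://seaweedfs-filer:8333") or None,
        access_key=env.get("STORAGE_ACCESS_KEY", "admin"),
        secret_key=env.get("STORAGE_SECRET_KEY", "admin"),
        region=env.get("STORAGE_REGION", "us-east-1"),
        use_ssl=env.get("STORAGE_USE_SSL", "false").lower() == "true",
        verify_ssl=env.get("STORAGE_VERIFY_SSL", "false").lower() == "true",
    )
```

Passing the resulting values to your S3 client of choice keeps the env-var parsing in one place.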
## SeaweedFS (Default)
SeaweedFS is a distributed object storage system included in Aurora’s docker-compose stack.
### Features

- **Apache 2.0 licensed** - No vendor lock-in
- **S3-compatible API** - Drop-in replacement for AWS S3
- **High performance** - Built for speed
- **Included in docker-compose** - No external dependencies
### Configuration

```bash
# .env
STORAGE_BUCKET=aurora-storage
STORAGE_ENDPOINT_URL=http://seaweedfs-filer:8333
STORAGE_ACCESS_KEY=admin
STORAGE_SECRET_KEY=admin
STORAGE_REGION=us-east-1
STORAGE_USE_SSL=false
STORAGE_VERIFY_SSL=false
```
### Web UI

SeaweedFS exposes web interfaces for its master and filer services; check the ports published in your docker-compose stack.
### Creating the Bucket

The bucket is created automatically on first access. To create it manually:

```bash
# Using AWS CLI
aws s3 mb s3://aurora-storage \
  --endpoint-url http://localhost:8333

# Using curl
curl -X PUT http://localhost:8333/aurora-storage
```
## AWS S3
Use AWS S3 for production deployments with AWS infrastructure.
### Configuration

```bash
# .env
STORAGE_BUCKET=my-aurora-bucket
STORAGE_ENDPOINT_URL= # Leave empty/omit for AWS S3 (uses SDK defaults)
STORAGE_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
STORAGE_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
STORAGE_REGION=us-west-2
STORAGE_USE_SSL=true
STORAGE_VERIFY_SSL=true
```
### Creating the Bucket

```bash
# Using AWS CLI
aws s3 mb s3://my-aurora-bucket --region us-west-2

# Enable versioning (optional)
aws s3api put-bucket-versioning \
  --bucket my-aurora-bucket \
  --versioning-configuration Status=Enabled

# Configure lifecycle rules (optional)
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-aurora-bucket \
  --lifecycle-configuration file://lifecycle.json
```
### IAM Permissions

Create an IAM user with the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-aurora-bucket",
        "arn:aws:s3:::my-aurora-bucket/*"
      ]
    }
  ]
}
```
### Cost Optimization
- Use S3 Intelligent-Tiering for automatic cost optimization
- Enable lifecycle policies to delete old files
- Use S3 Transfer Acceleration for faster uploads (optional)
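The `put-bucket-lifecycle-configuration` command above expects a `lifecycle.json` file. A minimal illustrative example (the rule ID, prefix, and 365-day expiration are placeholder choices, not recommendations):

```json
{
  "Rules": [
    {
      "ID": "expire-old-files",
      "Status": "Enabled",
      "Filter": { "Prefix": "users/" },
      "Expiration": { "Days": 365 }
    }
  ]
}
```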
## Cloudflare R2
Cloudflare R2 offers S3-compatible storage with zero egress fees.
### Configuration

```bash
# .env
STORAGE_BUCKET=my-aurora-bucket
STORAGE_ENDPOINT_URL=https://<account_id>.r2.cloudflarestorage.com
STORAGE_ACCESS_KEY=<r2_access_key>
STORAGE_SECRET_KEY=<r2_secret_key>
STORAGE_REGION=auto
STORAGE_USE_SSL=true
STORAGE_VERIFY_SSL=true
```
### Creating the Bucket

1. Go to Cloudflare Dashboard → R2
2. Click "Create bucket"
3. Name your bucket: `my-aurora-bucket`
4. Create an API token:
   - Navigate to R2 → API Tokens
   - Create an API token with "Object Read & Write" permissions
   - Copy the Access Key ID and Secret Access Key
### Benefits

- **Zero egress fees** - No charges for downloads
- **S3-compatible API** - Works with existing S3 tools
- **Global distribution** - Low latency worldwide
## MinIO
Self-hosted S3-compatible storage for on-premises deployments.
### Configuration

```bash
# .env
STORAGE_BUCKET=aurora-storage
STORAGE_ENDPOINT_URL=http://minio:9000
STORAGE_ACCESS_KEY=minioadmin
STORAGE_SECRET_KEY=minioadmin
STORAGE_REGION=us-east-1
STORAGE_USE_SSL=false
STORAGE_VERIFY_SSL=false
```
### Docker Compose

Add MinIO to your docker-compose:

```yaml
services:
  minio:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server /data --console-address ":9001"
    volumes:
      - minio-data:/data

volumes:
  minio-data:
```
### Web UI

Access the MinIO console at `http://localhost:9001`.
## Python API

### Basic Usage
```python
from utils.storage.storage import get_storage_manager

# Get storage manager for a user
storage = get_storage_manager(user_id="user123")

# Upload a file
with open("local_file.txt", "rb") as f:
    uri = storage.upload_file(f, "path/to/file.txt")
# Returns: s3://aurora-storage/users/user123/path/to/file.txt

# Download a file
data = storage.download_bytes("path/to/file.txt")

# List files
files = storage.list_user_files(prefix="uploads/")
for file in files:
    print(f"{file['name']}: {file['size']} bytes")

# Delete a file
storage.delete_file("path/to/file.txt")

# Generate presigned URL (for direct browser access)
url = storage.generate_presigned_url(
    "path/to/file.txt",
    expiration=3600,  # 1 hour
)
```
### Upload from Flask Request

```python
from flask import request
from utils.storage.storage import get_storage_manager

@app.route("/upload", methods=["POST"])
def upload_file():
    file = request.files["file"]
    user_id = get_current_user_id()

    storage = get_storage_manager(user_id=user_id)
    uri = storage.upload_file(
        file,
        f"uploads/{file.filename}",
        content_type=file.content_type,
    )

    return {"success": True, "uri": uri}
```
### User-Scoped Paths

All files are automatically scoped to users:

```python
storage = get_storage_manager(user_id="user123")

# Upload to "report.pdf"
storage.upload_file(file, "report.pdf")
# Actual S3 path: users/user123/report.pdf

# List user's files
files = storage.list_user_files()
# Returns only files under users/user123/
```
### Caching

File listings are cached in Redis:

```python
# First call: fetches from S3, caches result
files = storage.list_user_files(prefix="uploads/")

# Subsequent calls: returns cached result (faster)
files = storage.list_user_files(prefix="uploads/")

# Cache is invalidated on upload/delete
storage.upload_file(file, "uploads/new.txt")  # Invalidates cache
```
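This is a standard read-through cache with invalidation on writes. A minimal sketch of the pattern, with a plain dict standing in for Redis (illustrative only, not Aurora's implementation):

```python
class CachedFileLister:
    """Read-through cache: serve listings from cache, fall back to the backend."""

    def __init__(self, fetch_from_s3):
        self._fetch = fetch_from_s3  # callable: prefix -> list of file dicts
        self._cache = {}             # stands in for Redis
        self.backend_calls = 0

    def list_files(self, prefix=""):
        if prefix not in self._cache:
            self.backend_calls += 1  # cache miss: hit the backend
            self._cache[prefix] = self._fetch(prefix)
        return self._cache[prefix]

    def invalidate(self):
        """Called after upload/delete so stale listings are not served."""
        self._cache.clear()
```

On a cache hit no backend call is made; any upload or delete clears the cache so the next listing reflects the change.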
## File Organization

### Recommended Structure

```
aurora-storage/
└── users/
    └── {user_id}/
        ├── uploads/        # User file uploads
        ├── terraform_dir/  # Terraform state files
        ├── {session_id}/   # Session-specific files
        └── artifacts/      # Build artifacts
```
### Path Traversal Protection

The storage manager prevents directory traversal attacks:

```python
storage = get_storage_manager(user_id="user123")

# These are rejected:
storage.upload_file(file, "../../../etc/passwd")     # ValueError
storage.upload_file(file, "/../other_user/file")     # ValueError

# Valid paths:
storage.upload_file(file, "uploads/file.txt")        # OK
storage.upload_file(file, "folder/subfolder/a.txt")  # OK
```
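A common way to enforce this is to normalize the supplied path and reject anything that would escape the user's prefix. The following is a sketch of that check under assumed names (`build_user_key` is not Aurora's actual function):

```python
import posixpath

def build_user_key(user_id: str, path: str) -> str:
    """Map a user-supplied path onto the user's S3 prefix, rejecting traversal."""
    candidate = posixpath.normpath(path.lstrip("/"))
    # After normalization, any leading ".." means the path escapes the prefix;
    # "." means the path was empty.
    if candidate in (".", "..") or candidate.startswith("../"):
        raise ValueError(f"Invalid path: {path!r}")
    return f"users/{user_id}/{candidate}"
```

Normalizing first means harmless internal segments like `a/../b` are allowed, while anything that still begins with `..` after normalization is refused.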
## Troubleshooting

### Connection Errors

```bash
# Check storage service is running
docker ps | grep seaweedfs  # or minio, etc.

# Test connection from aurora-server
docker exec aurora-server-1 wget -qO- http://seaweedfs-filer:8333

# Check DNS resolution
docker exec aurora-server-1 nslookup seaweedfs-filer
```
### Bucket Not Found

```bash
# List buckets
aws s3 ls --endpoint-url http://localhost:8333

# Create bucket
aws s3 mb s3://aurora-storage --endpoint-url http://localhost:8333

# Or create via curl
curl -X PUT http://localhost:8333/aurora-storage
```
### Permission Denied

```bash
# Verify credentials
aws s3 ls s3://aurora-storage \
  --endpoint-url http://localhost:8333 \
  --no-verify-ssl

# Check SeaweedFS logs
docker logs aurora-seaweedfs-1
```
### SSL Certificate Errors

```bash
# For development, disable SSL verification
STORAGE_VERIFY_SSL=false

# For production, use valid certificates from a trusted CA
STORAGE_VERIFY_SSL=true
```
### File Size Limits

The default maximum file size is 100 MB. To increase it:

```python
# In storage.py StorageConfig
max_file_size_mb: int = 500  # 500 MB limit
```
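The limit can be enforced with a simple pre-upload check. An illustrative sketch: the `max_file_size_mb` field matches the snippet above, but the `check_file_size` helper is an assumption, not Aurora's actual code:

```python
from dataclasses import dataclass

@dataclass
class StorageConfig:
    max_file_size_mb: int = 100  # documented default

def check_file_size(config: StorageConfig, size_bytes: int) -> None:
    """Raise ValueError if the payload exceeds the configured limit."""
    limit_bytes = config.max_file_size_mb * 1024 * 1024
    if size_bytes > limit_bytes:
        raise ValueError(
            f"File of {size_bytes} bytes exceeds {config.max_file_size_mb} MB limit"
        )
```

Rejecting oversized payloads before the S3 call avoids wasting bandwidth on uploads that would be discarded anyway.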
## Security Best Practices

### Production Checklist

- Set `STORAGE_USE_SSL=true` and `STORAGE_VERIFY_SSL=true`
- Replace default credentials (`admin` / `minioadmin`) with strong, unique keys
- Block public access to the bucket
- Enable server-side encryption
### Encryption

```bash
# AWS S3: Enable default encryption
aws s3api put-bucket-encryption \
  --bucket my-aurora-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'
```
### Access Control

```bash
# AWS S3: Block public access
aws s3api put-public-access-block \
  --bucket my-aurora-bucket \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```
## Migration

### Switching Backends

To migrate from SeaweedFS to AWS S3:

1. Sync existing files. The AWS CLI cannot copy between two different endpoints in a single command, so stage the data through a local directory:

   ```bash
   # Download from SeaweedFS
   aws s3 sync s3://aurora-storage ./migration-backup \
     --endpoint-url http://localhost:8333

   # Upload to AWS S3
   aws s3 sync ./migration-backup s3://my-new-bucket
   ```

2. Update `.env`:

   ```bash
   STORAGE_BUCKET=my-new-bucket
   STORAGE_ENDPOINT_URL= # Leave empty/omit for AWS
   STORAGE_ACCESS_KEY=<aws_key>
   STORAGE_SECRET_KEY=<aws_secret>
   STORAGE_REGION=us-west-2
   STORAGE_USE_SSL=true
   STORAGE_VERIFY_SSL=true
   ```

3. Restart Aurora:

   ```bash
   make down
   make prod-prebuilt
   ```

4. Verify the migration:

   ```bash
   # Check file count
   aws s3 ls s3://my-new-bucket --recursive | wc -l
   ```