Prerequisites
- An S3-compatible storage bucket
- Access credentials (access key ID and secret access key)
- Bucket name and region information
Configuration
Set the datasource type
Configure Zipline to use S3 storage by setting the
DATASOURCE_TYPE environment variable.
Configure S3 credentials
Add your S3 credentials and bucket information to your
.env file. Zipline performs read/write access tests on startup. If the credentials are invalid or the bucket is inaccessible, Zipline will exit with an error message.
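Taken together, the two steps above might look like this in a .env file. This is a sketch: the DATASOURCE_S3_* variable names are assumptions based on Zipline's configuration naming convention, so verify them against your version's documentation, and replace the placeholder values with your own.

```
# Assumed variable names - confirm against your Zipline version's docs
DATASOURCE_TYPE=s3
DATASOURCE_S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
DATASOURCE_S3_SECRET_ACCESS_KEY=your-secret-access-key
DATASOURCE_S3_BUCKET=your-zipline-bucket
DATASOURCE_S3_REGION=us-east-1
```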
(Optional) Configure custom endpoint
For S3-compatible providers other than AWS (MinIO, Backblaze B2, DigitalOcean Spaces, etc.), specify a custom endpoint. MinIO and some other providers also require path-style URLs.
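As a sketch (same caveat about assumed variable names; the endpoint shown is a placeholder):

```
DATASOURCE_S3_ENDPOINT=https://minio.example.com:9000
# Only needed for providers that require path-style URLs (e.g. MinIO):
DATASOURCE_S3_FORCE_PATH_STYLE=true
```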
Remove local storage volume (Docker only)
If using Docker, you can remove the local uploads volume from your
docker-compose.yml since files are stored in S3.
Provider-Specific Examples
AWS S3
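A plain AWS S3 setup needs no custom endpoint. A sketch, using placeholder values and the assumed variable names from above:

```
DATASOURCE_TYPE=s3
DATASOURCE_S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
DATASOURCE_S3_SECRET_ACCESS_KEY=your-secret-access-key
DATASOURCE_S3_BUCKET=my-zipline-files
DATASOURCE_S3_REGION=us-east-1
```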
Ensure your IAM user has
s3:PutObject, s3:GetObject, and s3:DeleteObject permissions for the bucket.
MinIO
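For MinIO, point the endpoint at your MinIO server and enable path-style URLs. A sketch with placeholder values (variable names assumed as above):

```
DATASOURCE_TYPE=s3
DATASOURCE_S3_ACCESS_KEY_ID=minio-access-key
DATASOURCE_S3_SECRET_ACCESS_KEY=minio-secret-key
DATASOURCE_S3_BUCKET=zipline
DATASOURCE_S3_REGION=us-east-1
DATASOURCE_S3_ENDPOINT=http://minio:9000
DATASOURCE_S3_FORCE_PATH_STYLE=true
```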
Backblaze B2
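Backblaze B2 exposes an S3-compatible endpoint of the form s3.<region>.backblazeb2.com. A sketch with placeholder values (variable names assumed as above):

```
DATASOURCE_TYPE=s3
DATASOURCE_S3_ACCESS_KEY_ID=your-b2-application-key-id
DATASOURCE_S3_SECRET_ACCESS_KEY=your-b2-application-key
DATASOURCE_S3_BUCKET=my-zipline-bucket
DATASOURCE_S3_REGION=us-west-004
DATASOURCE_S3_ENDPOINT=https://s3.us-west-004.backblazeb2.com
```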
Use your B2 application key ID and key, not your account credentials. The region should match your bucket’s region.
DigitalOcean Spaces
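DigitalOcean Spaces encodes the region in the endpoint hostname (nyc3 in this sketch; variable names assumed as above, values are placeholders):

```
DATASOURCE_TYPE=s3
DATASOURCE_S3_ACCESS_KEY_ID=your-spaces-access-key
DATASOURCE_S3_SECRET_ACCESS_KEY=your-spaces-secret-key
DATASOURCE_S3_BUCKET=my-zipline-space
DATASOURCE_S3_REGION=nyc3
DATASOURCE_S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
```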
Cloudflare R2
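Cloudflare R2's S3-compatible endpoint includes your account ID, and R2 uses the special region value auto. A sketch (variable names assumed as above; replace the account ID and keys):

```
DATASOURCE_TYPE=s3
DATASOURCE_S3_ACCESS_KEY_ID=your-r2-access-key-id
DATASOURCE_S3_SECRET_ACCESS_KEY=your-r2-secret-access-key
DATASOURCE_S3_BUCKET=my-zipline-bucket
DATASOURCE_S3_REGION=auto
DATASOURCE_S3_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
```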
Bucket Permissions
Your S3 bucket policy or IAM user must have the following permissions:
- s3:PutObject - Upload files
- s3:GetObject - Download/serve files
- s3:DeleteObject - Delete files
- s3:ListBucket - List files (for metrics)
- s3:HeadObject - Get file metadata
Example IAM Policy (AWS S3)
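A minimal policy sketch granting the object actions and bucket listing described above; the bucket name is a placeholder:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-zipline-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-zipline-bucket"
    }
  ]
}
```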
Migration from Local Storage
If you're migrating from local storage to S3, the general approach is to stop Zipline, copy the contents of your local uploads directory into the bucket (preserving file names), switch DATASOURCE_TYPE to s3 with the credentials described above, and restart.
Troubleshooting
Zipline exits with 'error while testing write access'
This means Zipline cannot write to your bucket. Check:
- Credentials are correct and not expired
- Bucket name and region are correct
- IAM user/role has s3:PutObject permission
- Bucket policy doesn't deny write access
- Endpoint URL is correct (for non-AWS providers)
Files upload but return 404 when accessed
This usually means:
- The bucket or objects are not publicly accessible (this is expected - Zipline proxies requests)
- Check your CORE_DEFAULT_DOMAIN configuration if using custom domains
- Verify the file was actually uploaded to S3
'Connection timeout' errors
Connection timeouts are configured in the S3 client:
- Connection timeout: 10 seconds (src/lib/datasource/S3.ts:55)
- Socket timeout: 120 seconds (src/lib/datasource/S3.ts:56)
If you see timeout errors, check:
- Network connectivity to your S3 endpoint
- Firewall rules allowing outbound HTTPS traffic
- DNS resolution for your endpoint
Large file uploads fail
For files larger than 5GB, Zipline automatically uses multipart upload (src/lib/datasource/S3.ts:332-416). If large uploads fail:
- Ensure your bucket supports multipart uploads
- Check that s3:AbortMultipartUpload and s3:ListMultipartUploadParts permissions are granted
- Verify your upload size limits in Zipline config (FILES_MAX_FILE_SIZE)
Performance Considerations
- Connection pooling: Zipline maintains up to 1000 concurrent connections to S3 (src/lib/datasource/S3.ts:58-64)
- Keep-alive: HTTP connections use keep-alive to reduce latency
- Multipart uploads: Files over 5GB automatically use multipart upload in 5MB chunks
- Regional performance: Choose a region close to your users or use a CDN