Overview
Kuest Prediction Market stores user-generated assets, such as profile images, in cloud storage. The application supports two storage providers:

- Supabase Storage: Integrated with the Supabase database, automatic bucket setup
- S3-Compatible Storage: AWS S3, Cloudflare R2, MinIO, or any other S3-compatible service
Storage Provider Selection
The storage provider is selected automatically, based on which environment variables are set, in src/lib/storage.ts.
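A minimal sketch of how this selection might look. The function name, the assumption that Supabase is checked before S3, and the exact error wording are illustrative, not the actual src/lib/storage.ts implementation:

```typescript
// Illustrative sketch of provider selection; the real logic lives in src/lib/storage.ts.
type StorageProvider = "supabase" | "s3";

function selectProvider(env: Record<string, string | undefined>): StorageProvider {
  // Supabase: both the project URL and the service role key must be present.
  if (env.SUPABASE_URL && env.SUPABASE_SERVICE_ROLE_KEY) return "supabase";
  // S3: bucket plus credentials are the minimum required variables.
  if (env.S3_BUCKET && env.S3_ACCESS_KEY_ID && env.S3_SECRET_ACCESS_KEY) return "s3";
  throw new Error("Storage provider is not configured");
}
```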
Supabase Storage Setup
Prerequisites
- Supabase project created
- Database migrations applied (`npm run db:push`)
Step 1: Configure Environment Variables
Add the Supabase credentials to `.env`:
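The variable names below are the ones this guide references elsewhere; the values are placeholders:

```
# .env
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
```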
Step 2: Automatic Bucket Creation
When you run `npm run db:push`, the migration script automatically:

- Creates the `kuest-assets` bucket
- Sets up a public read access policy
- Configures a service role full access policy
- Sets bucket limits:
  - Max file size: 2 MB
  - Allowed types: image/jpeg, image/png, image/webp
The full definitions live in `migrations/2025_08_28_003_buckets.sql`.
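A sketch of what that migration might contain. The policy name and exact statements are assumptions; the `storage.buckets` columns (`file_size_limit` in bytes, `allowed_mime_types`) follow Supabase's storage schema:

```sql
-- Create the bucket with a 2 MB limit and image-only MIME types (sketch).
insert into storage.buckets (id, name, public, file_size_limit, allowed_mime_types)
values ('kuest-assets', 'kuest-assets', true, 2097152,
        array['image/jpeg', 'image/png', 'image/webp'])
on conflict (id) do nothing;

-- Public read access (policy name is illustrative).
create policy "Public read for kuest-assets"
on storage.objects for select
using (bucket_id = 'kuest-assets');
```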
Step 3: Verify Setup
- Go to your Supabase dashboard
- Navigate to Storage → Buckets
- Verify the `kuest-assets` bucket exists and is public
File Access URLs
Uploaded files are publicly accessible; the `getPublicAssetUrl()` function generates their public URLs automatically.
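A sketch of what such a helper does; the signature is an assumption, but the URL format is Supabase's documented public-object format (the same one shown in the Troubleshooting section below):

```typescript
// Illustrative sketch; the real helper lives in src/lib/storage.ts.
function getPublicAssetUrl(
  supabaseUrl: string, // e.g. "https://abc123.supabase.co"
  bucket: string,      // e.g. "kuest-assets"
  path: string         // e.g. "avatars/user-1.webp"
): string {
  // Supabase serves public objects at /storage/v1/object/public/{bucket}/{path}.
  return `${supabaseUrl}/storage/v1/object/public/${bucket}/${path}`;
}
```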
S3-Compatible Storage Setup
Supported Providers
- AWS S3: Industry-standard object storage
- Cloudflare R2: Zero egress fees, S3-compatible API
- MinIO: Self-hosted S3-compatible storage
- DigitalOcean Spaces: Simple S3-compatible storage
AWS S3 Configuration
Step 1: Create an S3 Bucket
- Go to AWS S3 Console
- Click Create bucket
- Configure:
  - Bucket name: `kuest-assets` (or your choice)
  - Region: choose the region closest to your users
  - Block Public Access: disable (for public assets)
  - Bucket Versioning: optional
Step 2: Create IAM User
- Go to IAM Console
- Create a new user with Programmatic access
- Attach this policy:
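A minimal policy sketch granting object read/write on the bucket. The bucket name `kuest-assets` is assumed; adjust the ARN to your bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::kuest-assets/*"
    }
  ]
}
```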
Step 3: Configure Environment Variables
Add the S3 credentials to `.env`:
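The minimum variables named elsewhere in this guide, with placeholder values:

```
# .env
S3_BUCKET=kuest-assets
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
```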
For AWS S3, do not set `S3_ENDPOINT` or `S3_PUBLIC_URL`. The application automatically generates the correct AWS URLs.

Cloudflare R2 Configuration
Step 1: Create an R2 Bucket
- Go to Cloudflare dashboard → R2
- Click Create bucket
- Choose a name: `kuest-assets`
- Select a location hint for performance
Step 2: Generate API Token
- Go to R2 → Manage R2 API Tokens
- Create a new API token with:
- Permissions: Object Read & Write
- Bucket: kuest-assets
- Copy the Access Key ID and Secret Access Key
Step 3: Configure Environment Variables
Add the R2 credentials to `.env`:
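R2 is reached through an account-scoped endpoint and accepts the region `auto`; the account ID and keys below are placeholders:

```
# .env
S3_BUCKET=kuest-assets
S3_REGION=auto
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_ACCESS_KEY_ID=your-r2-access-key-id
S3_SECRET_ACCESS_KEY=your-r2-secret-access-key
```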
MinIO Configuration (Self-Hosted)
Step 1: Deploy MinIO
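One way to run MinIO is with Docker. A minimal sketch using the official `minio/minio` image; the credentials shown are MinIO's defaults and should be changed in production:

```shell
docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v minio-data:/data \
  minio/minio server /data --console-address ":9001"
```

Port 9000 serves the S3 API; port 9001 serves the web console.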
Step 2: Create Bucket
- Access the MinIO console at `http://localhost:9001`
- Create a bucket named `kuest-assets`
- Set the bucket policy to public read
Step 3: Create Access Keys
- Go to Identity → Users
- Create a new user or use root credentials
- Generate access keys
Step 4: Configure Environment Variables
Add the MinIO credentials to `.env`:
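Placeholder values assuming a local MinIO instance on port 9000; MinIO requires path-style URLs:

```
# .env
S3_BUCKET=kuest-assets
S3_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY_ID=your-minio-access-key
S3_SECRET_ACCESS_KEY=your-minio-secret-key
S3_FORCE_PATH_STYLE=true
```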
DigitalOcean Spaces Configuration
Step 1: Create a Space
- Go to DigitalOcean Spaces
- Click Create Space
- Configure:
- Name: kuest-assets
- Region: Choose closest to users
- File Listing: Enable public
Step 2: Generate API Keys
- Go to API → Spaces access keys
- Click Generate New Key
- Copy the key and secret
Step 3: Configure Environment Variables
Add the Spaces credentials to `.env`:
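Spaces uses a region-scoped endpoint; the region and keys below are placeholders:

```
# .env
S3_BUCKET=kuest-assets
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_ACCESS_KEY_ID=your-spaces-key
S3_SECRET_ACCESS_KEY=your-spaces-secret
S3_FORCE_PATH_STYLE=false
```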
Configuration Options
S3_FORCE_PATH_STYLE
Controls the URL format used to access files.

Path-Style URLs (S3_FORCE_PATH_STYLE=true)

URLs take the form `https://{endpoint}/{bucket}/{path}`. Use this when:

- Using MinIO or self-hosted S3
- Using S3-compatible services with custom domains
- This is the default when `S3_ENDPOINT` is set
Virtual-Hosted-Style URLs (S3_FORCE_PATH_STYLE=false)

URLs take the form `https://{bucket}.{endpoint}/{path}` (for AWS, `https://{bucket}.s3.{region}.amazonaws.com/{path}`). Use this when:

- Using AWS S3 directly
- Using DigitalOcean Spaces
- Your provider requires the subdomain format
S3_PUBLIC_URL
Override the automatically generated public URL by setting `S3_PUBLIC_URL` in `.env`. Use this when you have:
- CDN in front of storage (CloudFront, Cloudflare CDN)
- Custom domain for branding
- Load balancer or proxy
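For example, pointing public URLs at a CDN in front of the bucket (the domain is a placeholder):

```
# .env
S3_PUBLIC_URL=https://cdn.example.com
```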
Upload Configuration
The application handles file uploads with the settings defined in `src/lib/storage.ts`:
Default Upload Limits
- Max file size: 2 MB (enforced by Supabase bucket, configure on S3)
- Allowed formats: JPEG, PNG, WebP
- Cache control: Defaults to 1 year for profile images
Example Upload
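A sketch that enforces the default limits above before handing the file off to storage. The `AssetFile` shape and `validateAsset` helper are illustrative, not the app's actual API:

```typescript
// Illustrative sketch: validate a file against the default upload limits
// (2 MB, JPEG/PNG/WebP) before calling the storage layer.
const MAX_FILE_SIZE = 2 * 1024 * 1024; // 2 MB, matching the bucket limit
const ALLOWED_TYPES = ["image/jpeg", "image/png", "image/webp"];

interface AssetFile {
  size: number; // bytes
  type: string; // MIME type
}

function validateAsset(file: AssetFile): { ok: boolean; error?: string } {
  if (file.size > MAX_FILE_SIZE) {
    return { ok: false, error: "File exceeds the 2 MB limit" };
  }
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { ok: false, error: `Unsupported type: ${file.type}` };
  }
  return { ok: true };
}
```

On the server, a route handler would run this check and reject the request before any bytes reach the bucket, rather than relying on the bucket limit alone.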
Troubleshooting
Error: Storage provider is not configured
No storage configuration is set. Choose one:
- Supabase: Set `SUPABASE_URL` and `SUPABASE_SERVICE_ROLE_KEY`
- S3: Set `S3_BUCKET`, `S3_ACCESS_KEY_ID`, and `S3_SECRET_ACCESS_KEY`
Error: S3 configuration is incomplete. Missing env vars: ...
You’ve set some S3 variables but not all required ones.

Minimum required:

- `S3_BUCKET`
- `S3_ACCESS_KEY_ID`
- `S3_SECRET_ACCESS_KEY`

Optional:

- `S3_ENDPOINT`, `S3_REGION`, `S3_PUBLIC_URL`, `S3_FORCE_PATH_STYLE`

Upload fails with 'Access Denied'
Check your credentials and permissions.

Supabase:

- Verify `SUPABASE_SERVICE_ROLE_KEY` is correct
- Check bucket policies in the Supabase dashboard
- Ensure the bucket is public or has correct RLS policies

AWS S3:

- Verify the IAM user has `s3:PutObject` permission
- Check that the bucket policy allows public read
- Test credentials with the AWS CLI: `aws s3 ls s3://bucket-name/`
Images return 404
Check the public URL configuration.

Supabase:

- The bucket must be public
- URL format: `https://{project}.supabase.co/storage/v1/object/public/{bucket}/{path}`

S3:

- The bucket must have a public read policy
- Check `S3_PUBLIC_URL` if using a CDN
- Verify `S3_FORCE_PATH_STYLE` matches your provider's requirements
CORS errors when uploading
Configure CORS on your storage provider.

AWS S3: Add a CORS rule allowing your domain to the bucket (S3 console → Permissions → CORS).

Supabase: CORS is automatically configured for the storage API.

MinIO: Set CORS via `mc admin config set myminio api cors_allowed_origins="https://yourdomain.com"`
Storage costs are too high
Optimize costs:
- Use Cloudflare R2 for zero egress fees
- Implement image optimization and compression
- Set up lifecycle policies to delete old files
- Use CDN caching to reduce storage requests
- Restrict upload sizes and formats
Monitor usage via:

- AWS S3: CloudWatch metrics
- Supabase: Dashboard storage tab
- R2: Cloudflare analytics
Best Practices
Security
- Never expose service keys: Keep `SUPABASE_SERVICE_ROLE_KEY` and `S3_SECRET_ACCESS_KEY` server-side only
- Use presigned URLs: For direct uploads from the client to S3 (future feature)
- Validate file types: Check MIME types and file signatures
- Scan uploads: Consider virus scanning for user uploads
- Rate limit uploads: Prevent abuse
Performance
- Use CDN: Put CloudFront, Cloudflare, or Fastly in front of storage
- Optimize images: Compress before upload, use WebP format
- Cache aggressively: Set long cache headers for immutable assets
- Regional buckets: Store close to users for lower latency
- Lazy load images: Don’t load all assets on page load
Cost Optimization
- Choose right provider: R2 for high bandwidth, S3 for features
- Lifecycle policies: Auto-delete temporary files
- Compression: Use WebP instead of PNG/JPEG
- Image resizing: Store only necessary sizes
- Monitor usage: Set up billing alerts
Next Steps
- Authentication: Configure Better Auth and wallet connections
- Environment Variables: Complete variable reference
- Database Setup: PostgreSQL configuration guide
- Deploy to Vercel: Deploy your application