File Storage Configuration
LobeHub uses S3-compatible object storage to store user-uploaded files, images, documents, and avatars. This enables scalable, cost-effective file management for your deployment.

Supported Storage Providers
LobeHub supports any S3-compatible object storage service:
- AWS S3 - Amazon’s object storage
- Cloudflare R2 - Zero egress fees, S3-compatible
- MinIO - Self-hosted S3-compatible storage
- Backblaze B2 - Affordable S3-compatible storage
- DigitalOcean Spaces - S3-compatible object storage
- Wasabi - Hot cloud storage
- Alibaba Cloud OSS - With S3-compatible API
- Any other S3-compatible service
Configuration
Required Environment Variables
`S3_ACCESS_KEY_ID` - S3 access key ID for authentication

`S3_SECRET_ACCESS_KEY` - S3 secret access key for authentication

`S3_ENDPOINT` - S3 endpoint URL. Examples:
- AWS S3: `https://s3.us-west-2.amazonaws.com`
- Cloudflare R2: `https://account-id.r2.cloudflarestorage.com`
- MinIO: `https://minio.yourdomain.com`

`S3_BUCKET` - S3 bucket name for storing files. Create the bucket before deploying LobeHub.
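Put together, a minimal required configuration might look like the following sketch (the endpoint, bucket name, and credentials are placeholders, not working values):

```shell
# Minimal required S3 configuration (placeholder values)
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
S3_ENDPOINT=https://s3.us-west-2.amazonaws.com
S3_BUCKET=lobehub-files
```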
Optional Environment Variables
`S3_REGION` - S3 bucket region (e.g., `us-west-2`, `auto`). Required for AWS S3; may be optional for other providers.

`S3_PUBLIC_DOMAIN` - Public domain for accessing files. Examples:
- AWS S3: `https://bucket-name.s3.us-west-2.amazonaws.com`
- Cloudflare R2 with custom domain: `https://cdn.yourdomain.com`
- MinIO: `https://minio.yourdomain.com`

`S3_ENABLE_PATH_STYLE` - Enable path-style S3 URLs instead of virtual-hosted-style. Set to `1` for:
- MinIO
- Some self-hosted S3 implementations
- When using IP addresses instead of domains

Path-style: `https://endpoint/bucket/key`
Virtual-hosted: `https://bucket.endpoint/key`

`S3_SET_ACL` - Set ACL (Access Control List) when uploading files. Set to `0` if:
- Your bucket has a public read policy
- Your S3 provider doesn’t support ACLs
- Using Cloudflare R2 (doesn’t support ACLs)

`S3_PREVIEW_URL_EXPIRE_IN` - Pre-signed URL expiration time in seconds (default: 2 hours). Only used when `S3_SET_ACL=0` or the bucket is not public.

`NEXT_PUBLIC_S3_FILE_PATH` - Path prefix for storing files within the bucket. Files will be stored at: `{S3_BUCKET}/{NEXT_PUBLIC_S3_FILE_PATH}/...`

Provider-Specific Setup
AWS S3
- Create S3 bucket in AWS Console
- Create IAM user with S3 permissions:
- Configure environment variables:
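A sketch of the resulting AWS S3 configuration; the region, bucket name, domain, and credentials below are illustrative placeholders:

```shell
# AWS S3 example (placeholder values)
S3_ACCESS_KEY_ID=your-iam-access-key
S3_SECRET_ACCESS_KEY=your-iam-secret-key
S3_ENDPOINT=https://s3.us-west-2.amazonaws.com
S3_BUCKET=your-bucket-name
S3_REGION=us-west-2
S3_PUBLIC_DOMAIN=https://your-bucket-name.s3.us-west-2.amazonaws.com
```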
Cloudflare R2
- Create R2 bucket in Cloudflare Dashboard
- Create API token with R2 permissions
- Configure environment variables:
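A sketch of a Cloudflare R2 configuration, assuming the account ID, bucket, domain, and credentials below are placeholders:

```shell
# Cloudflare R2 example (placeholder values)
S3_ACCESS_KEY_ID=your-r2-access-key
S3_SECRET_ACCESS_KEY=your-r2-secret-key
S3_ENDPOINT=https://account-id.r2.cloudflarestorage.com
S3_BUCKET=your-bucket-name
S3_REGION=auto
S3_SET_ACL=0                                  # R2 does not support ACLs
S3_PUBLIC_DOMAIN=https://cdn.yourdomain.com   # optional custom domain
```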
Cloudflare R2 does not support ACLs. Set `S3_SET_ACL=0` and use pre-signed URLs or custom domains.

MinIO (Self-Hosted)
- Install MinIO server:
- Create bucket via MinIO Console (http://localhost:9001)
- Create access key and secret
- Configure environment variables:
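As a sketch, the steps above might look like this: MinIO started via Docker, then LobeHub pointed at it with path-style URLs enabled. The ports, volume path, credentials, and domain are illustrative assumptions:

```shell
# Run a local MinIO server (illustrative credentials and paths)
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v /srv/minio/data:/data \
  minio/minio server /data --console-address ":9001"

# LobeHub configuration for MinIO (placeholder values)
S3_ACCESS_KEY_ID=your-minio-access-key
S3_SECRET_ACCESS_KEY=your-minio-secret-key
S3_ENDPOINT=https://minio.yourdomain.com
S3_BUCKET=lobehub
S3_ENABLE_PATH_STYLE=1             # MinIO uses path-style URLs
```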
Backblaze B2
- Create B2 bucket in Backblaze Console
- Create application key with read/write permissions
- Configure environment variables:
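A sketch of a Backblaze B2 configuration; the region-specific endpoint, bucket, and keys below are placeholders:

```shell
# Backblaze B2 example (placeholder values)
S3_ACCESS_KEY_ID=your-b2-key-id
S3_SECRET_ACCESS_KEY=your-b2-application-key
S3_ENDPOINT=https://s3.us-west-004.backblazeb2.com
S3_BUCKET=your-bucket-name
S3_REGION=us-west-004
```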
File Storage Architecture
File Deduplication
LobeHub implements content-based file deduplication using SHA256 hashes:
- File uploaded by user → calculate SHA256 hash
- Check if hash exists in `global_files` table
- If it exists, create a reference in `files` table (no upload)
- If new, upload to S3 and create entries in both tables
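The hashing step above can be reproduced from the command line: identical content always yields the same SHA256, which is what allows the second upload to be skipped. This illustrates the hashing principle only, not LobeHub's actual code path:

```shell
# Identical content always produces the same SHA256 hash,
# so the duplicate can be stored as a table reference instead of re-uploaded.
printf 'hello' | sha256sum
printf 'hello' | sha256sum
# Both lines print:
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -
```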
Storage Tables
`files` - User file references
- Links user to files
- Stores file metadata (name, size, type)
- References `global_files.hash_id` for actual file location

`global_files` - Deduplicated file storage
- Stores unique files by SHA256 hash
- Tracks S3 URL/key for each unique file
- Reference counted (deleted when no `files` rows reference it)

`documents` - Parsed document content
- Stores extracted text from files
- Links to `files` via `file_id`
- Used for RAG and knowledge bases
File Types Supported
LobeHub handles various file types:
- Images: PNG, JPG, WEBP, GIF (vision models)
- Documents: PDF, TXT, MD, DOCX (RAG/knowledge bases)
- Archives: ZIP (document collections)
- Code: Various programming languages
- Other: Any file type for storage
File Access Patterns
Public Bucket (ACL Enabled)
When `S3_SET_ACL=1` and the bucket allows public read:
- Files uploaded with public-read ACL
- Direct access via `{S3_PUBLIC_DOMAIN}/{key}`
- No expiration on URLs
- Faster access (no signature verification)
Private Bucket (Pre-signed URLs)
When `S3_SET_ACL=0` or the bucket is private:
- Files uploaded without public ACL
- Access via pre-signed URLs
- URLs expire after `S3_PREVIEW_URL_EXPIRE_IN` seconds
- More secure (temporary access)
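For comparison, a pre-signed URL of the kind described above can be generated manually with the AWS CLI (LobeHub does this internally via the S3 API; the bucket, key, and endpoint below are placeholders):

```shell
# Generate a pre-signed GET URL valid for 7200 seconds (the 2-hour default)
aws s3 presign s3://your-bucket-name/files/example.png \
  --expires-in 7200 \
  --endpoint-url https://s3.us-west-2.amazonaws.com
```

The printed URL embeds a signature and expiry, so it can be shared without exposing the credentials themselves.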
Storage Limits
Set to `1` to disable automatic deletion of unreferenced files from S3. Useful for:
- Debugging storage issues
- Preserving files for recovery
- Custom file lifecycle policies
Troubleshooting
Upload Failures
Error: “Access Denied”
- Check S3 credentials are correct
- Verify IAM user/API token has write permissions
- Ensure bucket name is correct
- Verify bucket exists
- Check bucket name spelling
- Ensure region is correct (for AWS S3)
- Verify `S3_ENDPOINT` URL format
- Ensure endpoint is accessible from server
- Check for HTTPS/HTTP mismatch
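One way to narrow these failures down is to test the same credentials and endpoint with the AWS CLI, outside of LobeHub (bucket, endpoint, and keys below are placeholders):

```shell
# Check that the credentials can reach the bucket at this endpoint
AWS_ACCESS_KEY_ID=your-access-key \
AWS_SECRET_ACCESS_KEY=your-secret-key \
aws s3api head-bucket --bucket your-bucket-name \
  --endpoint-url https://s3.us-west-2.amazonaws.com
# Exit code 0 means the bucket exists and is reachable with these credentials
```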
Access Issues
Files not accessible after upload

When `S3_SET_ACL=1`:
- Verify bucket has public read policy or ACLs enabled
- Check `S3_PUBLIC_DOMAIN` is correct

When `S3_SET_ACL=0`:
- Pre-signed URLs should work automatically
- Check `S3_PREVIEW_URL_EXPIRE_IN` is not too short
Performance Issues
- Use CDN (CloudFront, Cloudflare) in front of S3
- Enable browser caching with appropriate headers
- Consider geographic distribution of buckets
- Use S3 transfer acceleration (AWS S3)
Security Best Practices
Next Steps
- Database Configuration - Set up PostgreSQL for file metadata
- Authentication - Configure user access control
- Observability - Monitor storage usage and performance