File Storage Configuration

LibreChat supports multiple file storage strategies for avatars, images, and documents. You can use a single strategy for all files or configure granular strategies for different file types.

Storage Strategies

LibreChat supports four storage backends:

Local Storage

Store files on the server’s local filesystem. Simple but not suitable for distributed deployments.

AWS S3

Store files in Amazon S3 or S3-compatible services (MinIO, Hetzner, Backblaze B2, DigitalOcean Spaces).

Firebase Storage

Use Google Firebase Cloud Storage with automatic optimization.

Azure Blob Storage

Store files in Microsoft Azure Blob Storage.

Configuration Methods

Use one storage strategy for all file types:
librechat.yaml
fileStrategy: "s3"
Options: "local", "s3", "firebase", "azure_blob"
Why use granular strategies? Mix and match based on your needs:
  • S3 for avatars: fast global CDN access
  • Firebase for images: automatic optimization and resizing
  • Local storage for documents: privacy/compliance requirements

Local Storage

Stores files on the server’s filesystem.

Configuration

librechat.yaml
fileStrategy: "local"
# Or granular:
fileStrategy:
  avatar: "local"
  image: "local"
  document: "local"

File Locations

Files are stored in the following directories:
/path/to/librechat/
  client/
    public/
      images/           # Generated images
  api/
    data/
      uploads/          # Document uploads
      avatars/          # User/agent avatars

Pros and Cons

Advantages

  • No external dependencies
  • Simple setup
  • Free
  • Fast for single-server deployments
  • Full data control

Disadvantages

  • Not suitable for distributed/scaled deployments
  • No automatic backups
  • Storage limited by server disk space
  • Files lost if container/server is destroyed
Production Considerations: For Docker deployments, mount volumes to persist files:
docker-compose.yml
volumes:
  - ./uploads:/app/api/data/uploads
  - ./avatars:/app/api/data/avatars
  - ./images:/app/client/public/images

AWS S3

Store files in Amazon S3 or S3-compatible services.

Environment Variables

AWS_ACCESS_KEY_ID
string
required
AWS access key ID
AWS_ACCESS_KEY_ID=AKIAXXXXX
AWS_SECRET_ACCESS_KEY
string
required
AWS secret access key
AWS_SECRET_ACCESS_KEY=xxxxx
AWS_REGION
string
required
AWS region for the S3 bucket
AWS_REGION=us-east-1
AWS_BUCKET_NAME
string
required
S3 bucket name
AWS_BUCKET_NAME=librechat-files
AWS_ENDPOINT_URL
string
Custom S3 endpoint URL (for S3-compatible services)
# MinIO:
AWS_ENDPOINT_URL=https://minio.example.com

# Hetzner:
AWS_ENDPOINT_URL=https://fsn1.your-objectstorage.com

# Backblaze B2:
AWS_ENDPOINT_URL=https://s3.us-west-002.backblazeb2.com

# DigitalOcean Spaces:
AWS_ENDPOINT_URL=https://nyc3.digitaloceanspaces.com
AWS_FORCE_PATH_STYLE
boolean
default:"false"
Use path-style URLs instead of virtual-hosted-style URLs
Required for S3-compatible providers (MinIO, Hetzner, Backblaze B2, etc.) that don’t support virtual-hosted-style URLs. Not needed for AWS S3.
# For MinIO, Hetzner, Backblaze, etc.:
AWS_FORCE_PATH_STYLE=true

# AWS S3:
AWS_FORCE_PATH_STYLE=false
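To make the difference concrete, here is an illustrative sketch (not LibreChat's actual client code) of how the two addressing styles change the object URL; the `object_url` helper and the example bucket names are hypothetical:

```python
def object_url(bucket: str, key: str,
               endpoint: str = "https://s3.us-east-1.amazonaws.com",
               force_path_style: bool = False) -> str:
    """Build the object URL an S3 client would use for either addressing style."""
    scheme, host = endpoint.split("://", 1)
    if force_path_style:
        # Path-style: the bucket appears in the URL path
        # (MinIO, Hetzner, Backblaze B2, and most S3-compatible services)
        return f"{scheme}://{host}/{bucket}/{key}"
    # Virtual-hosted-style: the bucket becomes a hostname prefix (AWS S3 default)
    return f"{scheme}://{bucket}.{host}/{key}"

print(object_url("librechat-files", "avatars/user1.png"))
# https://librechat-files.s3.us-east-1.amazonaws.com/avatars/user1.png
print(object_url("librechat", "avatars/user1.png",
                 endpoint="https://minio.example.com", force_path_style=True))
# https://minio.example.com/librechat/avatars/user1.png
```

Virtual-hosted-style requires DNS support for per-bucket subdomains, which is why many self-hosted services only accept path-style requests.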

YAML Configuration

librechat.yaml
fileStrategy: "s3"
# Or granular:
fileStrategy:
  avatar: "s3"
  image: "s3"
  document: "s3"

S3 Bucket Setup

Step 1: Create S3 Bucket

Create a new S3 bucket in your AWS account or S3-compatible service
Step 2: Configure CORS

Add CORS policy to allow uploads from your domain:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["https://yourdomain.com"],
    "ExposeHeaders": ["ETag"]
  }
]
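As a hypothetical sketch of what this policy implies (real browser preflights also negotiate headers, credentials, and caching; `preflight_allowed` is illustrative only):

```python
# Mirrors the AllowedOrigins/AllowedMethods values in the CORS policy above.
ALLOWED_ORIGINS = ["https://yourdomain.com"]
ALLOWED_METHODS = ["GET", "PUT", "POST", "DELETE"]

def preflight_allowed(origin: str, method: str) -> bool:
    """Return True if a request from this origin/method would pass the policy."""
    return origin in ALLOWED_ORIGINS and method in ALLOWED_METHODS

print(preflight_allowed("https://yourdomain.com", "PUT"))   # True
print(preflight_allowed("https://other.example", "PUT"))    # False
```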
Step 3: Set Bucket Policy

Configure bucket policy for public read access (if needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::librechat-files/*"
    }
  ]
}
Step 4: Create IAM User (AWS)

For AWS S3, create an IAM user with S3 permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::librechat-files",
        "arn:aws:s3:::librechat-files/*"
      ]
    }
  ]
}

S3-Compatible Services

MinIO (.env)
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_REGION=us-east-1
AWS_BUCKET_NAME=librechat
AWS_ENDPOINT_URL=http://minio:9000
AWS_FORCE_PATH_STYLE=true
Hetzner (.env)
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=fsn1
AWS_BUCKET_NAME=librechat-files
AWS_ENDPOINT_URL=https://fsn1.your-objectstorage.com
AWS_FORCE_PATH_STYLE=true
Backblaze B2 (.env)
AWS_ACCESS_KEY_ID=your-key-id
AWS_SECRET_ACCESS_KEY=your-application-key
AWS_REGION=us-west-002
AWS_BUCKET_NAME=librechat-files
AWS_ENDPOINT_URL=https://s3.us-west-002.backblazeb2.com
AWS_FORCE_PATH_STYLE=true
DigitalOcean Spaces (.env)
AWS_ACCESS_KEY_ID=your-spaces-key
AWS_SECRET_ACCESS_KEY=your-spaces-secret
AWS_REGION=nyc3
AWS_BUCKET_NAME=librechat-files
AWS_ENDPOINT_URL=https://nyc3.digitaloceanspaces.com
AWS_FORCE_PATH_STYLE=false

Firebase Storage

Use Google Firebase Cloud Storage with automatic optimization.

Environment Variables

FIREBASE_API_KEY
string
required
Firebase API key
FIREBASE_API_KEY=AIzaSyXXX
FIREBASE_AUTH_DOMAIN
string
required
Firebase auth domain
FIREBASE_AUTH_DOMAIN=your-project.firebaseapp.com
FIREBASE_PROJECT_ID
string
required
Firebase project ID
FIREBASE_PROJECT_ID=your-project-id
FIREBASE_STORAGE_BUCKET
string
required
Firebase storage bucket name
FIREBASE_STORAGE_BUCKET=your-project.appspot.com
FIREBASE_MESSAGING_SENDER_ID
string
required
Firebase messaging sender ID
FIREBASE_MESSAGING_SENDER_ID=123456789012
FIREBASE_APP_ID
string
required
Firebase app ID
FIREBASE_APP_ID=1:123456789012:web:xxxxx

YAML Configuration

librechat.yaml
fileStrategy: "firebase"
# Or granular:
fileStrategy:
  avatar: "firebase"
  image: "firebase"
  document: "firebase"

Firebase Setup

Step 1: Create Firebase Project

  1. Go to Firebase Console
  2. Create a new project
  3. Enable Firebase Storage
Step 2: Get Configuration

  1. Go to Project Settings
  2. Scroll to “Your apps” section
  3. Click “Web app” and copy the configuration values
Step 3: Configure Storage Rules

Set up Firebase Storage security rules:
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read: if true;
      allow write: if request.auth != null;
    }
  }
}

Features

  • Automatic image optimization and resizing
  • CDN delivery for fast global access
  • Built-in security rules
  • Generous free tier (5GB storage, 1GB/day downloads)
  • Automatic backups

Azure Blob Storage

Store files in Microsoft Azure Blob Storage.

Environment Variables

AZURE_STORAGE_CONNECTION_STRING
string
required
Azure storage account connection string
AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=xxxxx;EndpointSuffix=core.windows.net"
AZURE_CONTAINER_NAME
string
default:"files"
Azure blob container name
AZURE_CONTAINER_NAME=files
AZURE_STORAGE_PUBLIC_ACCESS
boolean
default:"false"
Enable public access to blobs
AZURE_STORAGE_PUBLIC_ACCESS=false

YAML Configuration

librechat.yaml
fileStrategy: "azure_blob"
# Or granular:
fileStrategy:
  avatar: "azure_blob"
  image: "azure_blob"
  document: "azure_blob"

Azure Setup

Step 1: Create Storage Account

  1. Go to Azure Portal
  2. Create a new Storage Account
  3. Choose performance and replication options
Step 2: Create Container

  1. In your storage account, go to Containers
  2. Create a new container named files
  3. Set access level (Private or Blob)
Step 3: Get Connection String

  1. Go to Access keys in your storage account
  2. Copy the connection string from key1 or key2
Step 4: Configure CORS

Add CORS rules for your domain:
<CorsRule>
  <AllowedOrigins>https://yourdomain.com</AllowedOrigins>
  <AllowedMethods>GET,PUT,POST,DELETE</AllowedMethods>
  <AllowedHeaders>*</AllowedHeaders>
  <ExposedHeaders>*</ExposedHeaders>
  <MaxAgeInSeconds>3600</MaxAgeInSeconds>
</CorsRule>

File Upload Configuration

Configure file upload limits per endpoint in librechat.yaml:
librechat.yaml
fileConfig:
  endpoints:
    assistants:
      fileLimit: 5                    # Max number of files
      fileSizeLimit: 10               # Max size per file (MB)
      totalSizeLimit: 50              # Max total size (MB)
      supportedMimeTypes:
        - "image/.*"
        - "application/pdf"
    
    openAI:
      disabled: true                  # Disable file uploads
    
    default:
      totalSizeLimit: 20
    
    YourCustomEndpointName:
      fileLimit: 2
      fileSizeLimit: 5
  
  # Global settings
  serverFileSizeLimit: 100            # Global server limit (MB)
  avatarSizeLimit: 2                  # Avatar limit (MB)
  
  # Image generation sizing
  imageGeneration:
    percentage: 100                   # Percentage-based sizing
    # OR
    # px: 1024                        # Pixel-based sizing
  
  # Client-side image resizing
  clientImageResize:
    enabled: false                    # Enable client-side resizing
    maxWidth: 1900                    # Max width (px)
    maxHeight: 1900                   # Max height (px)
    quality: 0.92                     # JPEG quality (0.0-1.0)
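To illustrate how limits like these interact, here is a hypothetical validator (not LibreChat's implementation) using the numbers from the assistants endpoint above; `validate_upload` and its input shape are assumptions for the sketch:

```python
import re

# Mirrors the assistants endpoint config above; size limits are in MB.
LIMITS = {
    "fileLimit": 5,
    "fileSizeLimit": 10,
    "totalSizeLimit": 50,
    "supportedMimeTypes": [r"image/.*", r"application/pdf"],
}

def validate_upload(files: list[tuple[str, int]], limits: dict = LIMITS) -> list[str]:
    """Return limit violations for a batch of (mime_type, size_in_bytes) pairs."""
    mb = 1024 * 1024
    errors = []
    if len(files) > limits["fileLimit"]:
        errors.append(f"too many files: {len(files)} > {limits['fileLimit']}")
    total = 0
    for mime, size in files:
        total += size
        if size > limits["fileSizeLimit"] * mb:
            errors.append(f"{mime}: file exceeds {limits['fileSizeLimit']} MB")
        if not any(re.fullmatch(p, mime) for p in limits["supportedMimeTypes"]):
            errors.append(f"{mime}: unsupported type")
    if total > limits["totalSizeLimit"] * mb:
        errors.append(f"total upload exceeds {limits['totalSizeLimit']} MB")
    return errors

print(validate_upload([("image/png", 2 * 1024 * 1024)]))  # [] -- within all limits
print(validate_upload([("text/plain", 1024)]))            # unsupported type
```

Note that `supportedMimeTypes` entries are patterns, so `image/.*` admits any image subtype while `application/pdf` matches only PDFs.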

Migrating Between Storage Strategies

When changing storage strategies, existing files won’t be automatically migrated.
Migration Considerations:
  • Files uploaded with one strategy remain in that storage location
  • Users may see broken images/avatars after switching strategies
  • Manual migration is required to move existing files

Migration Steps

Step 1: Set New Strategy

Update fileStrategy in librechat.yaml to your new storage backend
Step 2: Export Existing Files

Download all files from your current storage:
# For local storage:
tar -czf files-backup.tar.gz \
  ./api/data/uploads \
  ./api/data/avatars \
  ./client/public/images

# For S3:
aws s3 sync s3://your-bucket ./files-backup
Step 3: Upload to New Storage

Upload files to your new storage backend using its API or CLI
Step 4: Update Database References

Update file references in MongoDB if paths have changed
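For step 4, a hypothetical path-rewriting helper might look like the sketch below. The local path prefix, URL shape, and `migrate_filepath` name are all assumptions; inspect your actual MongoDB records before rewriting anything:

```python
def migrate_filepath(filepath: str, bucket: str, region: str) -> str:
    """Rewrite a stored local file path into its new S3 URL (illustrative only)."""
    prefix = "/app/api/data/"  # assumed local upload root inside the container
    if filepath.startswith(prefix):
        key = filepath[len(prefix):]  # e.g. "uploads/abc.pdf"
        return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    # Leave external URLs and already-migrated references untouched
    return filepath

print(migrate_filepath("/app/api/data/uploads/abc.pdf", "librechat-files", "us-east-1"))
# https://librechat-files.s3.us-east-1.amazonaws.com/uploads/abc.pdf
```

Run any such rewrite against a database backup first, and spot-check a few records in the UI before migrating the rest.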

Performance Optimization

CDN Integration

For production deployments, consider using a CDN:
  1. Create CloudFront distribution pointing to S3 bucket
  2. Configure origin access identity
  3. Update S3 bucket policy
  4. Use CloudFront URL for file access

Caching Headers

Configure static file caching:
.env
# Cache static files for 2 days
STATIC_CACHE_MAX_AGE=172800

# CDN cache for 1 day
STATIC_CACHE_S_MAX_AGE=86400
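Both values are plain seconds; as a quick sanity check of the arithmetic:

```python
# The cache lifetimes above, expressed in seconds.
DAY = 24 * 60 * 60  # 86400 seconds

STATIC_CACHE_MAX_AGE = 2 * DAY    # browser cache: 2 days
STATIC_CACHE_S_MAX_AGE = 1 * DAY  # shared/CDN cache: 1 day

print(STATIC_CACHE_MAX_AGE, STATIC_CACHE_S_MAX_AGE)  # 172800 86400
```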

Image Optimization

Enable client-side image resizing to reduce upload sizes:
librechat.yaml
fileConfig:
  clientImageResize:
    enabled: true
    maxWidth: 1900
    maxHeight: 1900
    quality: 0.92
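The resize fits images inside the configured bounds while preserving aspect ratio; a minimal sketch of that calculation (illustrative, not LibreChat's client code) with the defaults above:

```python
def resize_dimensions(width: int, height: int,
                      max_w: int = 1900, max_h: int = 1900) -> tuple[int, int]:
    """Scale (width, height) to fit within max_w x max_h, keeping aspect ratio."""
    # Take the tighter of the two constraints; cap at 1.0 so we never upscale.
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

print(resize_dimensions(3800, 1900))  # (1900, 950) -- width was the binding limit
print(resize_dimensions(800, 600))    # (800, 600)  -- already within bounds
```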

Comparison

Feature                   Local        S3            Firebase         Azure Blob
Setup Complexity          Easy         Medium        Medium           Medium
Cost                      Free         Pay-per-use   Free tier + pay  Pay-per-use
Scalability               Limited      Excellent     Excellent        Excellent
CDN                       No           Optional      Built-in         Optional
Automatic Optimization    No           No            Yes              No
Best For                  Development  Production    Production       Enterprise
Distributed Deployments   No           Yes           Yes              Yes

Recommendations

Development

Use local storage for simplicity:
fileStrategy: "local"

Small Production

Use Firebase for automatic optimization:
fileStrategy: "firebase"

Enterprise

Use S3 or Azure based on your cloud provider:
fileStrategy: "s3"

Hybrid

Use granular strategies for optimization:
fileStrategy:
  avatar: "s3"
  image: "firebase"
  document: "local"

Troubleshooting

Uploads failing:
  • Verify storage credentials are correct
  • Check bucket/container permissions
  • Ensure CORS is configured correctly
  • Review file size limits
  • Check network connectivity to the storage service

Files not displaying:
  • Verify files were uploaded successfully
  • Check bucket/container public access settings
  • Ensure CORS allows your domain
  • Verify file URLs are accessible
  • Check the browser console for CORS errors

S3-compatible service issues:
  • Set AWS_FORCE_PATH_STYLE=true for most S3-compatible services
  • Verify the endpoint URL is correct
  • Check whether the service supports virtual-hosted-style URLs
  • Test credentials with S3 CLI tools

Slow file performance:
  • Enable a CDN for faster delivery
  • Configure caching headers
  • Enable client-side image resizing
  • Consider using granular strategies
  • Review file size limits

Next Steps

  • AI Endpoints: Configure AI providers
  • Authentication: Set up user authentication
  • Environment Variables: Complete environment reference
  • YAML Configuration: Advanced configuration options