
Overview

QFieldCloud requires two main data storage systems:
  1. PostgreSQL/PostGIS Database - Stores application data, user accounts, project metadata
  2. S3-Compatible Object Storage - Stores project files, QGIS projects, geodata, and attachments
Production Recommendation: Use externally managed services for both database and object storage. The standalone docker-compose configuration is for development/testing only.

PostgreSQL Database

Database Requirements

QFieldCloud requires:
  • PostgreSQL 12 or later
  • PostGIS extension 3.0 or later
  • User with permissions to:
    • Create and modify databases
    • Install extensions (PostGIS)
    • Create tables and indexes
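A quick way to check an existing server against these requirements (hostname and user are the placeholders used in the configuration below; the PostGIS extensions only need to be available, not yet installed):
# Check the server version and that PostGIS packages are available to install
psql -h postgres.yourcompany.com -U qfieldcloud_db_admin -d postgres \
  -c "SELECT version();" \
  -c "SELECT name, default_version FROM pg_available_extensions WHERE name LIKE 'postgis%';"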

Production Configuration (External Database)

For production, use a managed PostgreSQL service or dedicated database server.

Environment Configuration

# Database connection settings
POSTGRES_USER=qfieldcloud_db_admin
POSTGRES_PASSWORD=<strong-password>  # Generate with: pwgen -sn 16
POSTGRES_DB=qfieldcloud_db
POSTGRES_HOST=postgres.yourcompany.com  # Your database host
POSTGRES_PORT=5432
POSTGRES_SSLMODE=require  # Use 'require' or 'verify-full' for production
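To verify these settings actually reach the server and negotiate SSL, psql's \conninfo reports the session details, including encryption status:
# \conninfo prints host, user, database, and SSL status for the session
psql "host=postgres.yourcompany.com port=5432 dbname=qfieldcloud_db user=qfieldcloud_db_admin sslmode=require" -c '\conninfo'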

Database Setup

  1. Create Database:
CREATE DATABASE qfieldcloud_db;
CREATE USER qfieldcloud_db_admin WITH PASSWORD 'your-strong-password';
GRANT ALL PRIVILEGES ON DATABASE qfieldcloud_db TO qfieldcloud_db_admin;
  2. Enable PostGIS Extension:
\c qfieldcloud_db
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_topology;
  3. Run Migrations:
docker compose exec app python manage.py migrate
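To confirm the schema was created, Django's showmigrations lists every app's migrations with applied ones marked [X]:
docker compose exec app python manage.py showmigrations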

Standalone Configuration (Development)

For local development, use the bundled PostgreSQL container.

Docker Compose Configuration

In docker-compose.override.standalone.yml:
services:
  db:
    image: postgis/postgis:${POSTGIS_IMAGE_VERSION}
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - ${HOST_POSTGRES_PORT}:5432
    command: ["postgres", "-c", "log_statement=all", "-c", "log_destination=stderr"]

volumes:
  postgres_data:

Environment Configuration

# In .env
POSTGIS_IMAGE_VERSION=17-3.5-alpine
POSTGRES_HOST=db
POSTGRES_PORT=5432
HOST_POSTGRES_PORT=5433  # External port for host access

Database Access

Access from Host

Using pg_service.conf:
# Create ~/.pg_service.conf
cat > ~/.pg_service.conf << EOF
[localhost.qfield.cloud]
host=localhost
dbname=qfieldcloud_db
user=qfieldcloud_db_admin
port=5433
password=your-database-password
sslmode=disable
EOF

# Set permissions
chmod 600 ~/.pg_service.conf

# Connect
psql 'service=localhost.qfield.cloud'
Direct connection:
psql -h localhost -p 5433 -U qfieldcloud_db_admin -d qfieldcloud_db

Access from Docker

# Using docker compose
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db

# Using docker exec (Compose v2 names the container qfieldcloud-db-1; Compose v1 used underscores)
docker exec -it qfieldcloud-db-1 psql -U qfieldcloud_db_admin -d qfieldcloud_db

Database Maintenance

Backup Database

# Backup from host
pg_dump 'service=localhost.qfield.cloud' > qfc_backup_$(date +%Y%m%d).sql

# Backup from container
docker compose exec db pg_dump -U qfieldcloud_db_admin qfieldcloud_db > qfc_backup_$(date +%Y%m%d).sql

# Compressed backup
docker compose exec db pg_dump -U qfieldcloud_db_admin -Fc qfieldcloud_db > qfc_backup_$(date +%Y%m%d).dump
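Before relying on a backup, verify it is readable. For custom-format dumps, pg_restore --list prints the archive's table of contents without restoring anything (filename as in the restore examples below):
# Confirm a custom-format dump is readable without restoring it
pg_restore --list qfc_backup_20240304.dump | head -n 20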

Restore Database

# Restore SQL dump
psql 'service=localhost.qfield.cloud' < qfc_backup_20240304.sql

# Restore compressed dump
pg_restore -d 'service=localhost.qfield.cloud' qfc_backup_20240304.dump

# Restore in container
docker compose exec -T db psql -U qfieldcloud_db_admin qfieldcloud_db < qfc_backup_20240304.sql

Vacuum and Analyze

Regular maintenance for optimal performance:
# Full vacuum (requires downtime)
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db -c "VACUUM FULL ANALYZE;"

# Regular vacuum (no downtime)
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db -c "VACUUM ANALYZE;"

Database Migration

When upgrading PostgreSQL major versions:
Risky Operation: Always back up your data before attempting a major version upgrade!
# 1. Backup current database
pg_dump 'service=localhost.qfield.cloud' > qfc_backup_pre_upgrade.sql

# 2. Update POSTGIS_IMAGE_VERSION in .env
POSTGIS_IMAGE_VERSION=17-3.5-alpine  # New version

# 3. Stop and remove old container (this deletes the data volume; the backup from step 1 is your only copy)
docker compose down
docker volume rm qfieldcloud_postgres_data

# 4. Start new container
docker compose up -d db

# 5. Restore data
psql 'service=localhost.qfield.cloud' < qfc_backup_pre_upgrade.sql

# 6. Run migrations
docker compose exec app python manage.py migrate
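
# 7. Verify the new server and extension versions (optional sanity check)
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db \
  -c "SELECT version();" -c "SELECT PostGIS_Version();"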
For detailed PostgreSQL upgrade instructions, see: PostgreSQL Migration Guide

Object Storage

QFieldCloud supports multiple object storage backends:
  1. AWS S3 - Production-ready, fully managed
  2. MinIO - Self-hosted, S3-compatible (for standalone)
  3. WebDAV - Alternative storage backend (experimental)

Storage Configuration

Storage is configured via the STORAGES environment variable in .env.

Configuration

STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {
            "access_key": "YOUR_AWS_ACCESS_KEY_ID",
            "secret_key": "YOUR_AWS_SECRET_ACCESS_KEY",
            "bucket_name": "your-qfieldcloud-bucket",
            "region_name": "eu-central-1",
            "endpoint_url": ""
        },
        "QFC_IS_LEGACY": false
    }
}'
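A malformed STORAGES value is a common source of startup failures, since the whole value must parse as JSON. One way to validate it exactly as the application sees it (assumes the app service is running):
# Parse STORAGES from inside the app container; fails loudly on invalid JSON
docker compose exec app python -c 'import json, os; json.loads(os.environ["STORAGES"]); print("STORAGES is valid JSON")'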

AWS S3 Setup

  1. Create S3 Bucket:
aws s3 mb s3://your-qfieldcloud-bucket --region eu-central-1
  2. Enable Versioning:
aws s3api put-bucket-versioning \
  --bucket your-qfieldcloud-bucket \
  --versioning-configuration Status=Enabled
  3. Configure Lifecycle Policy (Optional):
{
  "Rules": [
    {
      "Id": "DeleteOldVersions",
      "Status": "Enabled",
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 90
      }
    }
  ]
}
  4. Create IAM User with this least-privilege policy (a CLI sketch follows the policy below):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:DeleteObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::your-qfieldcloud-bucket",
        "arn:aws:s3:::your-qfieldcloud-bucket/*"
      ]
    }
  ]
}
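A sketch of wiring this up with the AWS CLI, assuming the policy above is saved as qfc-policy.json (user and policy names are illustrative); the returned access key pair goes into access_key and secret_key in STORAGES:
# Create a dedicated IAM user, attach the inline policy, and issue access keys
aws iam create-user --user-name qfieldcloud-storage
aws iam put-user-policy \
  --user-name qfieldcloud-storage \
  --policy-name qfieldcloud-s3-access \
  --policy-document file://qfc-policy.json
aws iam create-access-key --user-name qfieldcloud-storage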

MinIO (Standalone)

For development or self-hosted deployments without external S3.

Docker Compose Configuration

In docker-compose.override.standalone.yml:
services:
  minio:
    image: minio/minio:RELEASE.2025-02-18T16-25-55Z
    restart: unless-stopped
    volumes:
      - minio_data1:/data1
      - minio_data2:/data2
      - minio_data3:/data3
      - minio_data4:/data4
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
      MINIO_BROWSER_REDIRECT_URL: http://${QFIELDCLOUD_HOST}:${MINIO_BROWSER_PORT}
    command: server /data{1...4} --console-address :9001
    healthcheck:  # required for the service_healthy condition used by createbuckets below
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - ${MINIO_BROWSER_PORT}:9001
      - ${MINIO_API_PORT}:9000

  createbuckets:
    build:
      context: ./docker-createbuckets
    depends_on:
      minio:
        condition: service_healthy
    environment:
      STORAGES: ${STORAGES}

volumes:
  minio_data1:
  minio_data2:
  minio_data3:
  minio_data4:

Environment Configuration

# MinIO credentials (change these!)
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
MINIO_API_PORT=8009
MINIO_BROWSER_PORT=8010

# Storage configuration
STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {
            "access_key": "minioadmin",
            "secret_key": "minioadmin",
            "bucket_name": "qfieldcloud-local",
            "region_name": "",
            "endpoint_url": "http://172.17.0.1:8009"
        },
        "QFC_IS_LEGACY": false
    }
}'
Docker Network: The endpoint URL http://172.17.0.1:8009 uses the Docker bridge network IP. On Windows/Mac, use http://host.docker.internal:8009 instead.

Access MinIO Console

http://your-domain:8010
Username: minioadmin
Password: minioadmin
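MinIO also exposes an unauthenticated liveness endpoint on the API port, useful for quick checks from the host or for external monitoring probes:
# Expect HTTP 200 when the MinIO API is up (API port from .env above)
curl -i http://localhost:8009/minio/health/live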

MinIO Client (mc)

Manage MinIO from command line:
# Configure alias
mc alias set local http://localhost:8009 minioadmin minioadmin

# List buckets
mc ls local

# Copy files
mc cp file.txt local/qfieldcloud-local/

# Enable versioning
mc version enable local/qfieldcloud-local

WebDAV Storage (Experimental)

Alternative storage backend using WebDAV protocol.

Basic WebDAV Configuration

STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcWebDavStorage",
        "OPTIONS": {
            "webdav_url": "http://qfc_webdav_user:qfc_webdav_pwd@webdav",
            "public_url": "http://webdav",
            "basic_auth": "qfc_webdav_user:qfc_webdav_pwd"
        },
        "QFC_IS_LEGACY": false
    }
}'
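To confirm a WebDAV endpoint answers with these credentials before pointing QFieldCloud at it, a depth-1 PROPFIND against the root should return a 207 Multi-Status (the webdav hostname resolves only inside the compose network, so run this from a container on that network or substitute the public URL):
# List the WebDAV root collection; expect a 207 Multi-Status response
curl -u qfc_webdav_user:qfc_webdav_pwd -X PROPFIND -H "Depth: 1" http://webdav/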

NextCloud WebDAV Configuration

STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcWebDavStorage",
        "OPTIONS": {
            "webdav_url": "https://USERNAME:[email protected]/remote.php/dav/files/USERNAME",
            "public_url": "https://my.nextcloud.server/public.php/webdav",
            "basic_auth": "NEXTCLOUD_SHARE_TOKEN:"
        },
        "QFC_IS_LEGACY": false
    }
}'

Multiple Storage Backends

QFieldCloud supports multiple storage backends simultaneously:
STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {
            "access_key": "aws_key",
            "secret_key": "aws_secret",
            "bucket_name": "qfc-projects",
            "region_name": "eu-central-1",
            "endpoint_url": ""
        },
        "QFC_IS_LEGACY": false
    },
    "attachments": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {
            "access_key": "aws_key",
            "secret_key": "aws_secret",
            "bucket_name": "qfc-attachments",
            "region_name": "eu-central-1",
            "endpoint_url": ""
        },
        "QFC_IS_LEGACY": false
    }
}'

# Set default storage for new projects
STORAGES_PROJECT_DEFAULT_STORAGE=default
STORAGES_PROJECT_DEFAULT_ATTACHMENTS_STORAGE=attachments

Storage Versioning

Enable Versioning (S3/MinIO)

AWS S3:
aws s3api put-bucket-versioning \
  --bucket your-qfieldcloud-bucket \
  --versioning-configuration Status=Enabled
MinIO:
mc version enable local/qfieldcloud-local

Configure Attachments Versioning

# Enable versioning for new projects (default)
STORAGE_PROJECT_DEFAULT_ATTACHMENTS_VERSIONED=1

# Disable versioning (use only with WebDAV backends)
STORAGE_PROJECT_DEFAULT_ATTACHMENTS_VERSIONED=0

Backup Strategies

Database Backup

Automated Backup Script

#!/bin/bash
# save as: /usr/local/bin/qfc-backup-db.sh

BACKUP_DIR="/backups/qfieldcloud/db"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/qfc_db_$DATE.sql.gz"

mkdir -p "$BACKUP_DIR"

# Create backup
docker compose -f /path/to/qfieldcloud/docker-compose.yml \
  exec -T db pg_dump -U qfieldcloud_db_admin qfieldcloud_db | gzip > "$BACKUP_FILE"

# Keep only last 30 days
find "$BACKUP_DIR" -name "qfc_db_*.sql.gz" -mtime +30 -delete

echo "Backup completed: $BACKUP_FILE"

Cron Job Setup

# Make script executable
chmod +x /usr/local/bin/qfc-backup-db.sh

# Add to crontab (daily at 2 AM)
crontab -e
Add:
0 2 * * * /usr/local/bin/qfc-backup-db.sh >> /var/log/qfc-backup.log 2>&1
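Restoring one of these compressed backups reverses the stream (the filename is illustrative; substitute a real backup):
# Stream a gzipped SQL backup back into the database
gunzip -c /backups/qfieldcloud/db/qfc_db_20240304_020000.sql.gz | \
  docker compose -f /path/to/qfieldcloud/docker-compose.yml exec -T db \
  psql -U qfieldcloud_db_admin qfieldcloud_db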

Storage Backup

S3 Bucket Replication

Configure cross-region replication for disaster recovery:
# Create destination bucket
aws s3 mb s3://qfc-backup-bucket --region us-west-2

# Enable versioning on both buckets
aws s3api put-bucket-versioning \
  --bucket qfc-backup-bucket \
  --versioning-configuration Status=Enabled

# Configure replication
aws s3api put-bucket-replication \
  --bucket your-qfieldcloud-bucket \
  --replication-configuration file://replication-config.json
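The command above references replication-config.json, which is not shown; a minimal sketch follows (the account ID and role name are placeholders, and the IAM role must already exist with S3 replication permissions):
# Write a minimal V2 replication configuration (replace the placeholders)
cat > replication-config.json << 'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/qfc-replication-role",
  "Rules": [
    {
      "ID": "ReplicateAll",
      "Priority": 1,
      "Status": "Enabled",
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::qfc-backup-bucket" }
    }
  ]
}
EOF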

MinIO Backup

#!/bin/bash
# Backup MinIO data volumes

BACKUP_DIR="/backups/qfieldcloud/minio"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"

# Stop MinIO
cd /path/to/qfieldcloud
docker compose stop minio

# Backup data volumes
for vol in minio_data1 minio_data2 minio_data3 minio_data4; do
  docker run --rm \
    -v qfieldcloud_$vol:/data \
    -v $BACKUP_DIR:/backup \
    alpine tar czf /backup/${vol}_${DATE}.tar.gz -C /data .
done

# Start MinIO
docker compose start minio

# Clean old backups (30 days)
find "$BACKUP_DIR" -name "minio_data*.tar.gz" -mtime +30 -delete

Complete Backup Script

#!/bin/bash
# Complete QFieldCloud backup

set -e

BACKUP_ROOT="/backups/qfieldcloud"
DATE=$(date +%Y%m%d_%H%M%S)
COMPOSE_DIR="/path/to/qfieldcloud"

cd "$COMPOSE_DIR"

echo "Starting QFieldCloud backup: $DATE"

# 1. Backup database
echo "Backing up database..."
mkdir -p "$BACKUP_ROOT/db"
docker compose exec -T db pg_dump -U qfieldcloud_db_admin -Fc qfieldcloud_db \
  > "$BACKUP_ROOT/db/qfc_db_$DATE.dump"

# 2. Backup storage (MinIO)
echo "Backing up storage..."
mkdir -p "$BACKUP_ROOT/storage"
mc mirror local/qfieldcloud-local "$BACKUP_ROOT/storage/qfieldcloud-local_$DATE"

# 3. Backup configuration
echo "Backing up configuration..."
mkdir -p "$BACKUP_ROOT/config"
cp .env "$BACKUP_ROOT/config/.env_$DATE"
cp docker-compose*.yml "$BACKUP_ROOT/config/"

# 4. Create archive
echo "Creating backup archive..."
cd "$BACKUP_ROOT"
tar czf "qfc_complete_backup_$DATE.tar.gz" db/qfc_db_$DATE.dump storage/qfieldcloud-local_$DATE config/

# 5. Upload to remote (optional)
# aws s3 cp "qfc_complete_backup_$DATE.tar.gz" s3://your-backup-bucket/

# 6. Cleanup old backups
find "$BACKUP_ROOT" -name "qfc_complete_backup_*.tar.gz" -mtime +30 -delete

echo "Backup completed: qfc_complete_backup_$DATE.tar.gz"

Monitoring

Database Monitoring

# Check database size
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db -c "
SELECT pg_size_pretty(pg_database_size('qfieldcloud_db'));"

# Check table sizes
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db -c "
SELECT schemaname, tablename, pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
LIMIT 20;"

# Check active connections
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db -c "
SELECT count(*) FROM pg_stat_activity;"

Storage Monitoring

# Check MinIO storage usage
mc du local/qfieldcloud-local

# Check S3 bucket size
aws s3 ls s3://your-qfieldcloud-bucket --recursive --summarize | grep "Total Size"

# Monitor API calls (S3)
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name NumberOfObjects \
  --dimensions Name=BucketName,Value=your-qfieldcloud-bucket Name=StorageType,Value=AllStorageTypes \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 3600 \
  --statistics Average

Troubleshooting

Database Connection Issues

Check database is running:
docker compose ps db
docker compose logs db
Test connection:
psql -h localhost -p 5433 -U qfieldcloud_db_admin -d qfieldcloud_db
Verify credentials:
docker compose exec app env | grep POSTGRES

Storage Issues

Check storage health:
curl https://your-domain/api/v1/status/
Test MinIO connection:
mc admin info local
Check bucket permissions:
mc ls local/qfieldcloud-local
mc stat local/qfieldcloud-local

Performance Issues

Database slow queries:
-- Enable slow query logging
ALTER SYSTEM SET log_min_duration_statement = 1000; -- Log queries > 1s
SELECT pg_reload_conf();

-- View slow queries (requires the pg_stat_statements extension, enabled below;
-- column names shown are for PostgreSQL 13+)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
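The pg_stat_statements view is only populated if the extension is preloaded and created; on the bundled container this can be done as follows (requires a restart):
# Preload the extension, restart PostgreSQL, then create it in the database
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db \
  -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';"
docker compose restart db
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db \
  -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"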
Storage performance:
# Check MinIO performance
mc admin speedtest local

# Check S3 transfer acceleration (if enabled)
aws s3api get-bucket-accelerate-configuration --bucket your-qfieldcloud-bucket

Next Steps

After configuring database and storage:
  1. Test backup and restore procedures
  2. Set up monitoring and alerting
  3. Configure log aggregation
  4. Review Environment Configuration
  5. Implement disaster recovery plan
  6. Document your backup procedures
  7. Schedule regular maintenance windows
