# Installation Guide
This guide covers both using the hosted QFieldCloud service and deploying your own self-hosted instance.
## Hosted Service (Recommended)

The easiest way to use QFieldCloud is through the official hosted service at qfield.cloud.
### Benefits

- **No infrastructure management**: OPENGIS.ch handles all server maintenance
- **Automatic updates**: always running the latest stable release
- **Built-in backups**: your data is professionally backed up
- **Default integration**: pre-configured in the QField mobile app
- **Professional support**: access to a dedicated support platform
### Getting Started

1. **Choose a plan.** Select a subscription plan based on your needs:
    - Free tier for testing
    - Professional plans for teams
    - Enterprise options available
2. **Start using it.** No additional configuration is needed; the hosted service works out of the box with QGIS and QField.
## Self-Hosted Deployment
For organizations requiring data sovereignty, custom configurations, or air-gapped environments, QFieldCloud can be self-hosted.
**Production considerations:** QFieldCloud is designed to work with externally managed services for production deployments:

- PostgreSQL/PostGIS database (managed service recommended)
- Object storage (S3-compatible service)
- SMTP server for email notifications

The standalone docker-compose configuration is provided for development and testing only. The maintainers do not guarantee compatibility between versions and may close issues about standalone deployments without explanation.
### System Requirements

**Minimum:**

- 2 CPU cores
- 4 GB RAM
- 50 GB storage
- Docker Engine 20.10+
- Docker Compose 2.0+

**Recommended for production:**

- 4+ CPU cores
- 8+ GB RAM
- SSD storage
- Load balancer for high availability
- Managed PostgreSQL service
- S3-compatible object storage
### Installation Steps

#### Clone Repository

Clone the QFieldCloud repository with all submodules:

```shell
git clone --recurse-submodules git@github.com:opengisch/QFieldCloud.git
cd QFieldCloud
```

To fetch upstream development updates later:

```shell
git pull --recurse-submodules && git submodule update --recursive
```
#### Configure Environment

Copy the example environment file and edit it:

```shell
cp .env.example .env
vi .env
```
#### Essential Configuration Variables

General settings:

```ini
# Set to 0 for production!
DEBUG=0
# Your domain name (no http:// or trailing slash)
QFIELDCLOUD_HOST=qfieldcloud.yourcompany.com
# Environment type
ENVIRONMENT=production
```

Security settings:

```ini
# Generate with: pwgen -sn 128
SECRET_KEY=your-secret-key-here
SALT_KEY=your-salt-key-here
```
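If `pwgen` is not available, the keys can be generated with Python's standard library instead; a minimal sketch (the 128-character length matches the `pwgen -sn 128` hint above):

```python
import secrets

# Generate 128-character random keys for SECRET_KEY and SALT_KEY.
# secrets.token_hex(n) returns 2*n hex characters from a CSPRNG.
print("SECRET_KEY=" + secrets.token_hex(64))
print("SALT_KEY=" + secrets.token_hex(64))
```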
Database configuration:

```ini
POSTGRES_USER=qfieldcloud_db_admin
POSTGRES_PASSWORD=your-secure-password
POSTGRES_DB=qfieldcloud_db
POSTGRES_HOST=db  # or an external host
POSTGRES_PORT=5432
POSTGRES_SSLMODE=require  # use "require" or "verify-full" in production
```
Allowed hosts:

```ini
DJANGO_ALLOWED_HOSTS="qfieldcloud.yourcompany.com localhost"
```
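A quick way to catch missing settings before the first `docker compose up` is to parse the `.env` file and verify the essentials are set. A minimal sketch (the `REQUIRED` list and helper names are illustrative, not part of QFieldCloud):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring comments and blanks."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Settings this guide treats as essential (illustrative subset)
REQUIRED = ["SECRET_KEY", "SALT_KEY", "QFIELDCLOUD_HOST",
            "POSTGRES_USER", "POSTGRES_PASSWORD", "DJANGO_ALLOWED_HOSTS"]

def missing_keys(env: dict) -> list:
    """Return required keys that are absent or empty."""
    return [k for k in REQUIRED if not env.get(k)]
```

Usage: `missing_keys(parse_env(open(".env").read()))` returns an empty list when everything essential is set.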
#### Configure Docker Compose

For production, set the COMPOSE_FILE variable to exclude the development configurations:

```ini
# Development (default)
COMPOSE_FILE=docker-compose.yml:docker-compose.override.local.yml:docker-compose.override.standalone.yml

# Production (recommended)
COMPOSE_FILE=docker-compose.yml
```
#### Build and Start Services

Build the Docker images and start the containers:

```shell
docker compose up -d --build
```

This starts all services defined in `docker-compose.yml`:

- `app`: Django application (Gunicorn)
- `nginx`: reverse proxy with SSL
- `worker_wrapper`: QGIS job processor
- `qgis`: QGIS worker container
- `memcached`: cache server
- `ofelia`: cron job scheduler

For standalone development, additional services are included:

- `db`: PostgreSQL with PostGIS
- `minio`: S3-compatible object storage
- `smtp4dev`: development email server
#### Run Database Migrations

Apply the Django database migrations:

```shell
docker compose exec app python manage.py migrate
```

#### Collect Static Files

Gather CSS, JavaScript, and other static assets:

```shell
docker compose exec app python manage.py collectstatic --noinput
```

#### Create Superuser

Create an admin account to access the Django admin interface:

```shell
docker compose exec app python manage.py createsuperuser \
    --username admin \
    --email admin@yourcompany.com
```

You will be prompted to set a password.
### SSL/TLS Configuration

For production deployments with a public domain, use Let's Encrypt:

#### Configure Let's Encrypt

Edit your `.env` file:

```ini
LETSENCRYPT_EMAIL=admin@yourcompany.com
LETSENCRYPT_RSA_KEY_SIZE=4096
LETSENCRYPT_STAGING=0  # set to 1 for testing
```

#### Run Certificate Script

```shell
./scripts/init_letsencrypt.sh
```

This script will:

- Request certificates from Let's Encrypt
- Configure automatic renewal
- Set up NGINX to use the certificates

#### Enable in Configuration

Uncomment these lines in `.env`:

```ini
QFIELDCLOUD_TLS_CERT=/etc/letsencrypt/live/${QFIELDCLOUD_HOST}/fullchain.pem
QFIELDCLOUD_TLS_KEY=/etc/letsencrypt/live/${QFIELDCLOUD_HOST}/privkey.pem
```

#### Restart Services

```shell
docker compose restart nginx
```

Certificates are automatically renewed by the certbot service every 12 hours.
For local development, self-signed certificates are generated automatically:

#### Automatic Generation

The mkcert service creates certificates on first run. They are stored in `./conf/nginx/certs/`.

#### Trust Certificate

On your development machine, trust the root CA.

Linux (Debian/Ubuntu):

```shell
sudo cp ./conf/nginx/certs/rootCA.pem /usr/local/share/ca-certificates/rootCA.crt
sudo update-ca-certificates
```

macOS:

```shell
sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain \
    ./conf/nginx/certs/rootCA.pem
```

#### Verify SSL

Test the connection:

```shell
curl https://localhost/api/v1/status/
```
To use your own certificates:

#### Place Certificates

Copy your certificate files to `conf/nginx/certs/`:

```shell
cp your-cert.pem conf/nginx/certs/
cp your-key.pem conf/nginx/certs/
```

#### Generate DH Parameters

Create Diffie-Hellman parameters:

```shell
openssl dhparam -out conf/nginx/dhparams/ssl-dhparams.pem 4096
```

#### Update Configuration

In `.env`, set:

```ini
QFIELDCLOUD_TLS_CERT=/etc/nginx/certs/your-cert.pem
QFIELDCLOUD_TLS_KEY=/etc/nginx/certs/your-key.pem
QFIELDCLOUD_TLS_DHPARAMS=/etc/nginx/dhparams/ssl-dhparams.pem
```
### Storage Configuration

#### MinIO (Development)

The standalone configuration includes MinIO for local S3-compatible storage:

```ini
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
MINIO_API_PORT=8009
MINIO_BROWSER_PORT=8010

STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {
            "access_key": "minioadmin",
            "secret_key": "minioadmin",
            "bucket_name": "qfieldcloud-local",
            "region_name": "",
            "endpoint_url": "http://172.17.0.1:8009"
        },
        "QFC_IS_LEGACY": false
    }
}'
```
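Since the `STORAGES` value is JSON embedded in a shell variable, quoting mistakes are easy to make; a quick sanity check is to run the value through a JSON parser before deploying. A minimal sketch (the trimmed-down config below is illustrative):

```python
import json

# The JSON payload of the STORAGES variable, trimmed for the example
storages = '''{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {"bucket_name": "qfieldcloud-local"},
        "QFC_IS_LEGACY": false
    }
}'''

config = json.loads(storages)  # raises json.JSONDecodeError on malformed JSON
print(config["default"]["BACKEND"])
```

Note that JSON does not allow trailing commas or `#` comments, both common causes of a failing parse.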
Access the MinIO console at http://localhost:8010.

#### AWS S3 (Production)

For production, use AWS S3 or a compatible service:

```ini
STORAGES='{
    "default": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcS3Boto3Storage",
        "OPTIONS": {
            "access_key": "YOUR_AWS_ACCESS_KEY",
            "secret_key": "YOUR_AWS_SECRET_KEY",
            "bucket_name": "qfieldcloud-production",
            "region_name": "us-east-1",
            "endpoint_url": ""
        },
        "QFC_IS_LEGACY": false
    }
}'
```

Leave `endpoint_url` empty for AWS S3; set it only for S3-compatible services.
Ensure versioning is enabled on your S3 bucket for proper delta synchronization.
#### WebDAV/NextCloud

Integrate with NextCloud or other WebDAV servers:

```ini
STORAGES='{
    "webdav_nextcloud": {
        "BACKEND": "qfieldcloud.filestorage.backend.QfcWebDavStorage",
        "OPTIONS": {
            "webdav_url": "https://USERNAME:PASSWORD@nextcloud.yourcompany.com/remote.php/dav/files/USERNAME",
            "public_url": "https://nextcloud.yourcompany.com/public.php/webdav",
            "basic_auth": "NEXTCLOUD_SHARE_TOKEN:"
        },
        "QFC_IS_LEGACY": false
    }
}'
```
### Authentication Configuration

#### Password Login

Enable or disable password-based authentication:

```ini
QFIELDCLOUD_PASSWORD_LOGIN_IS_ENABLED=1  # 1=enabled, 0=disabled
```
#### OAuth2/OIDC

Configure social authentication providers:

```ini
SOCIALACCOUNT_PROVIDERS='{
    "google": {
        "OAUTH_PKCE_ENABLED": true,
        "APP": {
            "client_id": "your-google-client-id",
            "key": ""
        }
    },
    "github": {
        "APP": {
            "client_id": "your-github-client-id",
            "secret": "your-github-secret"
        }
    },
    "openid_connect": {
        "OAUTH_PKCE_ENABLED": true,
        "APP": {
            "provider_id": "keycloak",
            "name": "Keycloak",
            "client_id": "qfieldcloud",
            "settings": {
                "server_url": "https://keycloak.yourcompany.com/realms/main/.well-known/openid-configuration"
            }
        }
    }
}'
```
#### Signup Policy

Control who can create accounts:

```ini
# Open signup (anyone can register)
QFIELDCLOUD_ACCOUNT_ADAPTER=qfieldcloud.core.adapters.AccountAdapterSignUpOpen

# Closed signup (admin approval required)
QFIELDCLOUD_ACCOUNT_ADAPTER=qfieldcloud.core.adapters.AccountAdapterSignUpClosed
```
### Email Configuration

#### Development (smtp4dev)

The standalone setup includes smtp4dev for testing:

```ini
EMAIL_HOST=smtp4dev
EMAIL_PORT=25
EMAIL_USE_TLS=False
EMAIL_USE_SSL=False
EMAIL_HOST_USER=user
EMAIL_HOST_PASSWORD=password
DEFAULT_FROM_EMAIL=webmaster@localhost
SMTP4DEV_WEB_PORT=8012  # web UI at http://localhost:8012
```
#### Production SMTP

Configure your production SMTP server:

```ini
EMAIL_HOST=smtp.yourcompany.com
EMAIL_PORT=587
EMAIL_USE_TLS=True
EMAIL_USE_SSL=False
EMAIL_HOST_USER=noreply@yourcompany.com
EMAIL_HOST_PASSWORD=your-smtp-password
DEFAULT_FROM_EMAIL=noreply@yourcompany.com
```
### Worker Configuration

Configure the QGIS workers, Gunicorn, and NGINX:

```ini
# Number of parallel worker replicas
QFIELDCLOUD_WORKER_REPLICAS=1  # increase for high-load environments

# Worker API endpoint
QFIELDCLOUD_WORKER_QFIELDCLOUD_URL=http://app:8000/api/v1/

# Temporary directory for worker operations
TMP_DIRECTORY=/tmp

# Request timeout in seconds
GUNICORN_TIMEOUT_S=300

# Max requests before a worker restarts (prevents memory leaks)
GUNICORN_MAX_REQUESTS=300

# Number of worker processes
GUNICORN_WORKERS=3  # typically 2-4 × CPU cores

# Threads per worker
GUNICORN_THREADS=3

# Maximum upload size
NGINX_CLIENT_MAX_BODY_SIZE=10g

# Timeouts for proxied requests
NGINX_PROXY_CONNECT_TIMEOUT=5s
NGINX_PROXY_READ_TIMEOUT=300s
NGINX_PROXY_SEND_TIMEOUT=300s

# Logging level
NGINX_ERROR_LOG_LEVEL=error  # options: debug, info, notice, warn, error
```
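The "2-4 × CPU cores" guidance for GUNICORN_WORKERS can be computed from the machine's core count; a minimal sketch (the `factor` and `cap` defaults are illustrative choices, not QFieldCloud settings):

```python
import os

def suggested_workers(cores: int, factor: int = 3, cap: int = 16) -> int:
    """Suggest a Gunicorn worker count: factor x cores, bounded to a sane range."""
    return max(2, min(factor * cores, cap))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    print(f"GUNICORN_WORKERS={suggested_workers(cores)}")
```

Remember that each worker also runs GUNICORN_THREADS threads, so total concurrency is workers × threads.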
For high-load environments, tune your PostgreSQL connection pooling and ensure your database is properly sized. Consider using PgBouncer or similar connection pooling tools.
### Health Checks and Monitoring

Verify your installation:

```shell
# Check service health
curl https://your-host/api/v1/status/
```

Expected response:

```json
{
    "database": "ok",
    "storage": "ok"
}
```
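For automated monitoring, the status payload can be checked programmatically; a minimal standard-library sketch (the URL is a placeholder, and wiring this into your alerting is left out):

```python
import json
from urllib.request import urlopen

def is_healthy(payload: dict) -> bool:
    """True when every reported component is 'ok'."""
    return bool(payload) and all(v == "ok" for v in payload.values())

def check(url: str = "https://your-host/api/v1/status/") -> bool:
    """Fetch the status endpoint and evaluate it."""
    with urlopen(url, timeout=10) as resp:
        return is_healthy(json.load(resp))
```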
View logs:

```shell
# All services
docker compose logs

# A specific service
docker compose logs app
docker compose logs nginx

# Follow logs in real time
docker compose logs -f app

# NGINX access logs (formatted)
QFC_JQ='[.ts, .ip, (.method + " " + (.status|tostring) + " " + (.resp_time|tostring) + "s"), .uri] | @tsv'
docker compose logs nginx -f --no-log-prefix | grep '"nginx"' | jq -r "$QFC_JQ"
```

Database access:

```shell
# Via docker exec
docker compose exec db psql -U qfieldcloud_db_admin -d qfieldcloud_db

# Or configure ~/.pg_service.conf for local access
```
### Backup and Restore

#### Backup Database

```shell
docker compose exec db pg_dump \
    -U qfieldcloud_db_admin \
    -d qfieldcloud_db \
    -F c -b -v \
    -f /tmp/qfieldcloud_backup.dump

docker compose cp db:/tmp/qfieldcloud_backup.dump ./backups/
```
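For scheduled backups, timestamped filenames plus pruning of old dumps keep the backup directory from filling the disk; a minimal sketch (the filename pattern and retention count are illustrative, not part of QFieldCloud):

```python
from datetime import datetime
from pathlib import Path

def backup_name(now: datetime) -> str:
    """Timestamped dump filename, e.g. qfieldcloud_20240101T120000.dump."""
    return f"qfieldcloud_{now:%Y%m%dT%H%M%S}.dump"

def prune_backups(directory: Path, keep: int = 7) -> list:
    """Delete all but the newest `keep` dumps; returns the removed paths."""
    dumps = sorted(directory.glob("qfieldcloud_*.dump"))
    stale = dumps[:-keep] if keep else dumps
    for path in stale:
        path.unlink()
    return stale
```

Sorting works because the timestamp format orders lexicographically; a cron entry (or the bundled ofelia scheduler) can run the dump and then the prune step.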
#### Backup Object Storage

For MinIO:

```shell
docker compose exec minio mc mirror \
    local/qfieldcloud-local \
    /backup/qfieldcloud-data
```

For S3, use the AWS CLI or your cloud provider's backup tools.

#### Restore Database

```shell
docker compose cp ./backups/qfieldcloud_backup.dump db:/tmp/
docker compose exec db pg_restore \
    -U qfieldcloud_db_admin \
    -d qfieldcloud_db \
    -v /tmp/qfieldcloud_backup.dump
```
### Upgrading

#### Backup Current Installation

Always back up before upgrading (see the Backup and Restore section above).

#### Pull Latest Code

```shell
git pull --recurse-submodules
git submodule update --recursive
```

#### Rebuild Containers

```shell
docker compose up -d --build
```

#### Run Migrations

```shell
docker compose exec app python manage.py migrate
docker compose exec app python manage.py collectstatic --noinput
```

Always test upgrades in a staging environment first, and review the changelog for breaking changes.
### Troubleshooting

Check the logs:

```shell
docker compose logs app nginx
```

Common issues:

- Port conflicts: ensure the ports configured in `.env` are available
- Permission issues: check volume mount permissions
- Environment variables: validate the `.env` syntax
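To diagnose a suspected port conflict, a quick standard-library check tells you whether something is already listening on a given TCP port; a minimal sketch:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True when something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 on a successful connection
        return sock.connect_ex((host, port)) == 0

# Example: check the default HTTPS port before starting nginx
# if port_in_use(443): print("Port 443 is taken")
```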
**Database connection errors.** Verify PostgreSQL connectivity:

```shell
docker compose exec app python manage.py dbshell
```

Check the credentials:

- Ensure the POSTGRES_* variables match your database
- Verify POSTGRES_SSLMODE is appropriate
- Test network connectivity to the database host

Test the storage backend:

```shell
# For MinIO
docker compose exec minio mc ls local/

# Check the storage configuration in the Django shell
docker compose exec app python manage.py shell
```

```python
>>> from django.core.files.storage import default_storage
>>> default_storage.exists('test')
```

**Let's Encrypt rate limits:**

- Use LETSENCRYPT_STAGING=1 for testing
- Let's Encrypt limits issuance to 50 certificates per registered domain per week

Test certificate renewal:

```shell
docker compose exec certbot certbot renew --dry-run
```
### Production Checklist

Before going live, ensure:

- [ ] DEBUG=0 and ENVIRONMENT=production are set
- [ ] SECRET_KEY and SALT_KEY are strong, unique values
- [ ] TLS certificates are configured and valid
- [ ] POSTGRES_SSLMODE is set to `require` or stricter
- [ ] Versioning is enabled on the S3 bucket
- [ ] Production SMTP is configured and test emails arrive
- [ ] Backups are scheduled and a restore has been tested
## Getting Help

- **Self-hosted issues**: report bugs and issues for self-hosted deployments
- **Hosted support**: get support for the qfield.cloud hosted service
- **Feature requests**: submit and vote on feature requests
- **Documentation**: full QField and QFieldCloud documentation