Introduction
DBOS Transact is configured using the DBOSConfig class or YAML configuration files. This guide covers all configuration options and best practices.
Basic Configuration
Configure DBOS programmatically:
from dbos import DBOS, DBOSConfig

config = DBOSConfig(
    name="my-application",
    database_url="postgresql://user:password@localhost:5432/mydb"
)
DBOS(config=config)
The application name is required. It’s used for identification and logging.
Database Configuration
System and Application Databases
DBOS uses two databases:
System Database: stores workflow state, queues, and schedules
Application Database: stores your application's transactional data
config = DBOSConfig(
    name="my-app",
    # Separate databases (recommended for production)
    system_database_url="postgresql://user:pass@localhost:5432/dbos_system",
    application_database_url="postgresql://user:pass@localhost:5432/app_data"
)
Using separate databases allows independent scaling and backup strategies.
Single Database Setup
config = DBOSConfig(
    name="my-app",
    # Single database for both system and application data
    database_url="postgresql://user:pass@localhost:5432/mydb"
)
SQLite Configuration
# SQLite system database (development only)
config = DBOSConfig(
    name="my-app",
    system_database_url="sqlite:///dbos_system.db",
    application_database_url="postgresql://user:pass@localhost:5432/app_db"
)
SQLite is for development only. Use PostgreSQL for production deployments.
Database Pool Configuration
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # System database pool size
    sys_db_pool_size=10,
    # Advanced engine configuration
    db_engine_kwargs={
        "pool_size": 20,
        "max_overflow": 10,
        "pool_timeout": 30,
        "pool_pre_ping": True,
        "echo": False,  # Set to True for SQL logging
        "pool_recycle": 3600  # Recycle connections after 1 hour
    }
)
Custom Database Engine
import sqlalchemy as sa

# Create a custom engine
engine = sa.create_engine(
    "postgresql://user:pass@localhost:5432/dbos",
    pool_size=15,
    max_overflow=5
)

config = DBOSConfig(
    name="my-app",
    system_database_engine=engine
)
Logging Configuration
Basic Logging
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Set log level
    log_level="INFO"  # DEBUG, INFO, WARNING, ERROR, CRITICAL
)
Separate Log Levels
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Overall log level
    log_level="INFO",
    # Console-specific log level (must be >= log_level)
    console_log_level="WARNING",
    # OTLP-specific log level (if OTLP enabled)
    otlp_log_level="ERROR"
)
Telemetry and Observability
Enable OTLP Export
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Enable OTLP
    enable_otlp=True,
    # Configure endpoints
    otlp_traces_endpoints=["http://localhost:4318/v1/traces"],
    otlp_logs_endpoints=["http://localhost:4318/v1/logs"],
    # Custom attributes for all traces/logs
    otlp_attributes={
        "service.environment": "production",
        "service.version": "1.2.3",
        "deployment.region": "us-east-1"
    }
)
Multiple OTLP Endpoints
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    enable_otlp=True,
    # Send to multiple observability platforms
    otlp_traces_endpoints=[
        "http://jaeger:4318/v1/traces",
        "http://tempo:4318/v1/traces"
    ],
    otlp_logs_endpoints=[
        "http://loki:4318/v1/logs"
    ]
)
Runtime Configuration
Admin Server
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Admin server settings
    admin_port=8080,
    run_admin_server=True  # Set to False to disable
)
Thread Pool Configuration
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Maximum concurrent workflow executions
    max_executor_threads=100
)
Polling Intervals
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Notification listener polling (for LISTEN/NOTIFY)
    notification_listener_polling_interval_sec=1.0,  # Min: 0.001
    # Scheduler polling for new schedules
    scheduler_polling_interval_sec=30.0
)
Lower polling intervals reduce latency but increase database load. Adjust based on your needs.
Advanced Configuration
Application Version
import os

config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Manually set application version
    application_version=os.getenv("APP_VERSION", "1.0.0")
)
If not set, DBOS automatically generates a version hash from workflow source code.
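The exact hashing scheme is internal to DBOS, but the idea can be illustrated with a short sketch (not DBOS's actual algorithm): derive a version string from the functions' compiled code, so the version changes whenever the code changes.

```python
import hashlib

def compute_version(*functions) -> str:
    """Illustrative only (not DBOS's actual algorithm): derive a version
    hash from function code, so the version changes when the code does."""
    h = hashlib.sha256()
    for fn in functions:
        # Hash the compiled bytecode and constants; inspect.getsource(fn)
        # would work too when the source file is available
        h.update(fn.__code__.co_code)
        h.update(repr(fn.__code__.co_consts).encode("utf-8"))
    return h.hexdigest()[:16]

def my_workflow():
    return "step one"

# Deterministic: the same code always yields the same version string
version = compute_version(my_workflow)
```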
Executor ID
import os
import socket

config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Unique executor ID for distributed deployments
    executor_id=f"{socket.gethostname()}-{os.getpid()}"
)
System Schema Name
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Custom schema name for DBOS system tables
    dbos_system_schema="dbos_custom"
)
LISTEN/NOTIFY Settings
config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Use LISTEN/NOTIFY for real-time notifications (PostgreSQL only)
    use_listen_notify=True  # False for a polling-based approach
)
use_listen_notify affects database migrations. It cannot be changed after the system database is created.
Custom Serialization
from dbos import Serializer
import pickle

class CustomSerializer(Serializer):
    def serialize(self, obj):
        """Custom serialization logic."""
        return pickle.dumps(obj)

    def deserialize(self, data):
        """Custom deserialization logic."""
        return pickle.loads(data)

config = DBOSConfig(
    name="my-app",
    database_url="postgresql://user:pass@localhost:5432/mydb",
    # Use custom serializer
    serializer=CustomSerializer()
)
Environment-Based Configuration
import os
from dbos import DBOS, DBOSConfig

def get_config() -> DBOSConfig:
    """Get configuration based on environment."""
    env = os.getenv("ENVIRONMENT", "development")
    if env == "production":
        return DBOSConfig(
            name="my-app",
            system_database_url=os.getenv("SYSTEM_DB_URL"),
            application_database_url=os.getenv("APP_DB_URL"),
            log_level="INFO",
            enable_otlp=True,
            otlp_traces_endpoints=[os.getenv("OTLP_ENDPOINT")],
            admin_port=8080,
            max_executor_threads=100,
            sys_db_pool_size=20
        )
    elif env == "staging":
        return DBOSConfig(
            name="my-app-staging",
            database_url=os.getenv("DATABASE_URL"),
            log_level="DEBUG",
            enable_otlp=True,
            otlp_traces_endpoints=["http://localhost:4318/v1/traces"],
            admin_port=8080
        )
    else:  # development
        return DBOSConfig(
            name="my-app-dev",
            database_url="sqlite:///dev.db",
            log_level="DEBUG",
            enable_otlp=False,
            admin_port=3000
        )

# Use environment-specific config
DBOS(config=get_config())
YAML Configuration
Create dbos-config.yaml:
name: my-application
database:
  sys_db_pool_size: 10
  db_engine_kwargs:
    pool_size: 20
    max_overflow: 10
    pool_timeout: 30
    pool_pre_ping: true
database_url: postgresql://user:password@localhost:5432/app_db
system_database_url: postgresql://user:password@localhost:5432/dbos_system
runtimeConfig:
  admin_port: 8080
  run_admin_server: true
  max_executor_threads: 100
  notification_listener_polling_interval_sec: 1.0
  scheduler_polling_interval_sec: 30.0
telemetry:
  logs:
    logLevel: INFO
    consoleLogLevel: WARNING
    otlpLogLevel: ERROR
  OTLPExporter:
    tracesEndpoint:
      - http://localhost:4318/v1/traces
    logsEndpoint:
      - http://localhost:4318/v1/logs
  otlp_attributes:
    service.environment: production
    service.version: "1.2.3"
  disable_otlp: false
Complete Configuration Example
from dbos import DBOS, DBOSConfig
import os

# Comprehensive production configuration
config = DBOSConfig(
    # Application identity
    name="payment-processor",
    application_version=os.getenv("APP_VERSION", "1.0.0"),
    executor_id=os.getenv("EXECUTOR_ID", "default-executor"),
    # Database configuration
    system_database_url=os.getenv(
        "SYSTEM_DB_URL",
        "postgresql://dbos:password@localhost:5432/dbos_system"
    ),
    application_database_url=os.getenv(
        "APP_DB_URL",
        "postgresql://app:password@localhost:5432/app_data"
    ),
    dbos_system_schema="dbos",
    use_listen_notify=True,
    # Connection pooling
    sys_db_pool_size=20,
    db_engine_kwargs={
        "pool_size": 30,
        "max_overflow": 20,
        "pool_timeout": 30,
        "pool_pre_ping": True,
        "pool_recycle": 3600,
        "echo": False
    },
    # Logging
    log_level="INFO",
    console_log_level="WARNING",
    otlp_log_level="ERROR",
    # Telemetry
    enable_otlp=True,
    otlp_traces_endpoints=[os.getenv("OTLP_TRACES_ENDPOINT")],
    otlp_logs_endpoints=[os.getenv("OTLP_LOGS_ENDPOINT")],
    otlp_attributes={
        "service.name": "payment-processor",
        "service.environment": os.getenv("ENVIRONMENT", "production"),
        "service.version": os.getenv("APP_VERSION", "1.0.0"),
        "deployment.region": os.getenv("AWS_REGION", "us-east-1")
    },
    # Runtime settings
    admin_port=int(os.getenv("ADMIN_PORT", "8080")),
    run_admin_server=True,
    max_executor_threads=100,
    notification_listener_polling_interval_sec=0.5,
    scheduler_polling_interval_sec=30.0
)

# Initialize DBOS
dbos = DBOS(config=config)
Docker Configuration
# docker-compose.yml friendly configuration
import os

config = DBOSConfig(
    name=os.getenv("APP_NAME", "my-app"),
    # Docker service names
    system_database_url=(
        f"postgresql://{os.getenv('POSTGRES_USER', 'postgres')}:"
        f"{os.getenv('POSTGRES_PASSWORD', 'password')}@"
        f"{os.getenv('POSTGRES_HOST', 'postgres')}:"
        f"{os.getenv('POSTGRES_PORT', '5432')}/"
        f"{os.getenv('SYSTEM_DB_NAME', 'dbos_system')}"
    ),
    # OTLP collector in Docker
    enable_otlp=True,
    otlp_traces_endpoints=[f"http://{os.getenv('OTLP_HOST', 'otel-collector')}:4318/v1/traces"],
    log_level=os.getenv("LOG_LEVEL", "INFO")
)
Configuration Validation
from dbos import DBOSConfig
from dbos._error import DBOSInitializationError

def validate_config(config: DBOSConfig) -> None:
    """Validate configuration before initialization."""
    # Check required fields
    if not config.get("name"):
        raise DBOSInitializationError("Application name is required")
    # Validate database URLs
    if not config.get("database_url") and not config.get("system_database_url"):
        raise DBOSInitializationError("Database URL is required")
    # Validate log levels
    valid_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
    log_level = config.get("log_level", "INFO")
    if log_level not in valid_levels:
        raise DBOSInitializationError(f"Invalid log_level: {log_level}")
    # Validate polling intervals
    notif_interval = config.get("notification_listener_polling_interval_sec", 1.0)
    if notif_interval < 0.001:
        raise DBOSInitializationError(
            "notification_listener_polling_interval_sec must be >= 0.001"
        )
    print("Configuration validated successfully")

# Use validation
config = get_config()
validate_config(config)
DBOS(config=config)
Best Practices
Use separate system and application databases in production
Configure appropriate pool sizes based on workload
Enable pool_pre_ping for connection health checks
Use connection recycling to prevent stale connections
Use environment variables for sensitive data
Create environment-specific configurations
Never commit credentials to version control
Use Docker secrets or cloud secret managers
Set appropriate log levels per environment
Use OTLP for centralized observability
Add custom OTLP attributes for filtering
Monitor admin server metrics
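The credential-handling points above share a common pitfall: a password containing URL-special characters (@, :, /) breaks a naively concatenated connection string. A small sketch using the standard library's quote_plus keeps the URL valid (the DB_* variable names and defaults are illustrative):

```python
import os
from urllib.parse import quote_plus

def build_database_url() -> str:
    """Build a PostgreSQL URL from environment variables, URL-encoding
    the password so characters like '@' or ':' don't break the URL."""
    user = os.getenv("DB_USER", "app")
    password = quote_plus(os.getenv("DB_PASSWORD", "p@ss:word"))
    host = os.getenv("DB_HOST", "localhost")
    port = os.getenv("DB_PORT", "5432")
    name = os.getenv("DB_NAME", "app_data")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```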
Troubleshooting
Problem: Cannot connect to database
Solutions:
Verify database URL format
Check database is running and accessible
Verify credentials
Check firewall/network settings
Enable pool_pre_ping for connection testing
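A quick standard-library sketch can rule out the network layer before DBOS starts: it checks only that the database host accepts TCP connections, not that the credentials are valid.

```python
import socket
from urllib.parse import urlsplit

def can_reach(database_url: str, timeout: float = 3.0) -> bool:
    """Return True if the host:port in the URL accepts TCP connections.
    Verifies network reachability only, not credentials or the database."""
    parts = urlsplit(database_url)
    host = parts.hostname or "localhost"
    port = parts.port or 5432  # PostgreSQL default
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```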
Problem: "QueuePool limit exceeded"
Solutions:
Increase pool_size and max_overflow
Check for connection leaks
Review pool_timeout settings
Monitor long-running transactions
Problem: Traces/logs not appearing
Solutions:
Verify OTLP endpoints are correct
Check collector is running
Verify network connectivity
Check OTLP collector logs
Verify enable_otlp=True
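The first check above can be automated with an illustrative helper that catches the most common mistake, a malformed URL or the wrong path (the /v1/traces and /v1/logs paths follow the OTLP/HTTP convention):

```python
from urllib.parse import urlsplit

def check_otlp_endpoint(url: str, kind: str) -> bool:
    """Sanity-check an OTLP/HTTP endpoint URL: http(s) scheme, a host,
    and the conventional /v1/traces or /v1/logs path."""
    parts = urlsplit(url)
    return (
        parts.scheme in ("http", "https")
        and bool(parts.hostname)
        and parts.path == f"/v1/{kind}"
    )
```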
Next Steps
Workflow Tutorial: start building workflows with your configuration
Error Handling: configure retry and recovery strategies
Queue Tutorial: configure queues with custom settings
Workflow Management: use DBOSClient with your configuration