## YAML config file
LoadConfig starts from DefaultConfig() and unmarshals the YAML on top, so fields not present in the file retain their defaults. Full example with all available fields:
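A sketch of such a file, reconstructed from the field reference below (key names for the scheduling, retrieval, endpoint, and rate-limit fields are assumed to follow the same snake_case convention as the keys documented elsewhere in this page):

```yaml
backend: "sqlite"              # "sqlite" or "postgres"
db_path: "membrane.db"         # ignored when backend is "postgres"
postgres_dsn: ""               # required when backend is "postgres"

listen_addr: ":9090"

decay_interval: "1h"
consolidation_interval: "6h"

default_sensitivity: "low"     # public | low | medium | high | hyper
selection_confidence_threshold: 0.7

embedding_endpoint: ""         # e.g. https://api.openai.com/v1/embeddings
embedding_model: ""            # e.g. text-embedding-3-small
embedding_dimensions: 1536
embedding_api_key: ""          # prefer MEMBRANE_EMBEDDING_API_KEY

llm_endpoint: ""               # e.g. https://api.openai.com/v1/chat/completions
llm_model: ""                  # e.g. gpt-5-mini
llm_api_key: ""                # prefer MEMBRANE_LLM_API_KEY

encryption_key: ""             # prefer MEMBRANE_ENCRYPTION_KEY; empty = unencrypted
tls_cert_file: ""              # TLS disabled when empty
tls_key_file: ""
api_key: ""                    # prefer MEMBRANE_API_KEY; empty = auth disabled
rate_limit_per_second: 100     # 0 disables rate limiting
```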
## DefaultConfig() defaults
Calling `membrane.DefaultConfig()` produces:
| Field | Default value |
|---|---|
| Backend | "sqlite" |
| DBPath | "membrane.db" |
| ListenAddr | ":9090" |
| DecayInterval | 1h |
| ConsolidationInterval | 6h |
| DefaultSensitivity | "low" |
| SelectionConfidenceThreshold | 0.7 |
| EncryptionKey | "" (unencrypted) |
| EmbeddingDimensions | 1536 |
| RateLimitPerSecond | 100 |
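The defaults-then-override behavior described above can be sketched in Go. This is a minimal illustration of the mechanism, not membrane's actual loader: it covers only three fields and uses `encoding/json` as a stand-in for the YAML decoder so the sketch stays dependency-free.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config mirrors a few of the documented fields.
type Config struct {
	Backend    string `json:"backend"`
	DBPath     string `json:"db_path"`
	ListenAddr string `json:"listen_addr"`
}

// defaultConfig plays the role of membrane.DefaultConfig().
func defaultConfig() Config {
	return Config{Backend: "sqlite", DBPath: "membrane.db", ListenAddr: ":9090"}
}

// loadConfig decodes the file contents on top of the defaults, so any
// field absent from the input keeps its default value.
func loadConfig(data []byte) (Config, error) {
	cfg := defaultConfig()
	if err := json.Unmarshal(data, &cfg); err != nil {
		return Config{}, err
	}
	return cfg, nil
}

func main() {
	// Only listen_addr is overridden; backend and db_path keep defaults.
	cfg, err := loadConfig([]byte(`{"listen_addr": ":7070"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Backend, cfg.DBPath, cfg.ListenAddr) // prints: sqlite membrane.db :7070
}
```

Because the decoder writes only the keys it actually finds, a one-line config file changes one field and leaves every other default intact.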
## Config struct fields
### Storage

- `backend`: Storage backend. Options: `"sqlite"` (default) or `"postgres"`.
- `db_path`: SQLite database file path. Default: `"membrane.db"`. Ignored when `backend` is `"postgres"`.
- `postgres_dsn`: PostgreSQL connection string. Required when `backend` is `"postgres"`. Falls back to the `MEMBRANE_POSTGRES_DSN` environment variable if not set in the config. Example: `postgres://membrane:membrane@localhost:5432/membrane?sslmode=disable`

### Network
- `listen_addr`: gRPC server listen address. Default: `":9090"`.

### Scheduling
- `decay_interval`: How often the decay scheduler runs. Default: `"1h"`. Accepts Go duration strings (`"30m"`, `"2h"`, etc.).
- `consolidation_interval`: How often the consolidation scheduler runs. Default: `"6h"`. Consolidation extracts semantic facts and competence records from episodic traces.

### Ingestion
- `default_sensitivity`: Default sensitivity level assigned to records at ingestion when not explicitly overridden. Default: `"low"`. Valid values: `public`, `low`, `medium`, `high`, `hyper`.

### Retrieval
- `selection_confidence_threshold`: Minimum confidence for competence and plan_graph candidates to pass the selector. Default: `0.7`.

### Embedding
- `embedding_endpoint`: HTTP endpoint for generating record and query embeddings. Requires `backend: postgres` and `embedding_model`. Example: `https://api.openai.com/v1/embeddings`.
- `embedding_model`: Embedding model name sent to the embedding endpoint. Example: `text-embedding-3-small`.
- `embedding_dimensions`: Output dimension of the embedding model. Default: `1536`. Must match the model's actual output dimensions.
- `embedding_api_key`: API key for the embedding endpoint. Prefer setting `MEMBRANE_EMBEDDING_API_KEY` instead.

### LLM
- `llm_endpoint`: HTTP endpoint for the LLM used in semantic extraction during consolidation. Requires `backend: postgres`. Example: `https://api.openai.com/v1/chat/completions`.
- `llm_model`: Chat model name sent to the LLM endpoint. Example: `gpt-5-mini`.
- `llm_api_key`: API key for the LLM endpoint. Prefer setting `MEMBRANE_LLM_API_KEY` instead.

### Security
- `encryption_key`: SQLCipher `PRAGMA key` value for encrypting the SQLite database at rest. Falls back to `MEMBRANE_ENCRYPTION_KEY`. When empty, the database is unencrypted.
- `tls_cert_file`: Path to the TLS certificate PEM file. TLS is disabled when empty.
- `tls_key_file`: Path to the TLS private key PEM file. Required when `tls_cert_file` is set.
- `api_key`: Shared secret for gRPC client authentication via bearer token. Falls back to `MEMBRANE_API_KEY`. Authentication is disabled when empty.
- `rate_limit_per_second`: Maximum gRPC requests per second per client. `0` disables rate limiting. Default: `100`.

## Environment variables
| Variable | Config field | Description |
|---|---|---|
| `MEMBRANE_ENCRYPTION_KEY` | `encryption_key` | SQLCipher encryption key for the SQLite database |
| `MEMBRANE_POSTGRES_DSN` | `postgres_dsn` | PostgreSQL DSN, read when `backend: postgres` |
| `MEMBRANE_EMBEDDING_API_KEY` | `embedding_api_key` | API key for the embedding endpoint |
| `MEMBRANE_LLM_API_KEY` | `llm_api_key` | API key for the LLM semantic extraction endpoint |
| `MEMBRANE_API_KEY` | `api_key` | Bearer token for gRPC client authentication |
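For example, secrets can be supplied through the environment rather than written into the config file. The values below are placeholders, not working credentials:

```shell
# Placeholder values; substitute real credentials in deployment.
export MEMBRANE_POSTGRES_DSN='postgres://membrane:membrane@localhost:5432/membrane?sslmode=disable'
export MEMBRANE_EMBEDDING_API_KEY='example-embedding-key'
export MEMBRANE_LLM_API_KEY='example-llm-key'
export MEMBRANE_API_KEY='example-bearer-token'
```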
## Command-line flags
Flags override values set in the config file.

| Flag | Overrides | Description |
|---|---|---|
| `--config <path>` | — | Load YAML config file from `<path>` |
| `--db <path>` | `db_path` | SQLite database path |
| `--postgres-dsn <dsn>` | `postgres_dsn` + `backend` | PostgreSQL DSN; also sets `backend = "postgres"` |
| `--addr <addr>` | `listen_addr` | gRPC listen address |
| `--version` | — | Print version and exit |