Overview
Sentry-options uses Kubernetes ConfigMaps to deliver option values to pods. The deployment process involves:
- CI validation - Values are validated against schemas
- ConfigMap generation - CLI merges YAML files into JSON ConfigMaps
- Deployment - CD pipeline applies ConfigMaps to clusters
- Mounting - Pods automatically mount ConfigMaps via annotations
Architecture
sentry-options-automator repo
└── option-values/
    └── {namespace}/
        ├── default/            # Base values (all targets inherit)
        │   └── values.yaml
        └── {region}/           # Region-specific overrides
            └── values.yaml
↓
CI validates
↓
CLI generates ConfigMap
↓
CD deploys to K8s cluster
↓
Injector mounts to pods
↓
Client library reads files
sentry-options-automator structure
The automator repo manages option values and schema references:
sentry-options-automator/
├── repos.json                  # Schema sources (URL, path, SHA)
├── option-values/              # Values for sentry-options
│   └── {namespace}/
│       ├── default/
│       │   └── values.yaml     # Base values (required)
│       ├── us/
│       │   └── values.yaml     # US region overrides
│       └── de/
│           └── values.yaml     # DE region overrides
└── .github/workflows/
    └── sentry-options-validate.yml
repos.json
Tracks which service repos contain schemas:
{
  "repos": {
    "seer": {
      "url": "https://github.com/getsentry/seer",
      "path": "sentry-options/",
      "sha": "abc123def456..."
    },
    "relay": {
      "url": "https://github.com/getsentry/relay",
      "path": "sentry-options/",
      "sha": "def456abc789..."
    }
  }
}
Important: When you update your schema in the service repo, you must update the SHA in repos.json to point to the commit containing the new schema.
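To make the pinning explicit, here is a minimal sketch of how a tool could read repos.json and resolve each service's pinned schema location. The function name `schema_sources` is illustrative, not part of the actual CLI:

```python
import json

def schema_sources(repos_json_text):
    """Map each service name to its pinned (url, path, sha) schema location."""
    config = json.loads(repos_json_text)
    return {
        name: (entry["url"], entry["path"], entry["sha"])
        for name, entry in config["repos"].items()
    }

example = """
{
  "repos": {
    "seer": {
      "url": "https://github.com/getsentry/seer",
      "path": "sentry-options/",
      "sha": "abc123def456"
    }
  }
}
"""
sources = schema_sources(example)
```

Because the SHA is part of the tuple, forgetting to bump it in repos.json means validation keeps running against the old schema commit.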
Values directory structure
Each namespace requires a default/ target with at least one values.yaml file:
option-values/seer/
├── default/
│   ├── core.yaml         # Core options
│   └── features.yaml     # Feature flags
└── us/
    └── overrides.yaml    # US-specific values
Multiple files per target: You can split options across multiple YAML files within a target directory. Files are merged together (duplicate keys cause validation errors).
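The merge-with-duplicate-detection behavior can be sketched in a few lines. This is an illustrative model of the rule, not the CLI's actual implementation:

```python
def merge_target_files(files):
    """Merge the `options` mappings from several YAML files in one target
    directory. A key appearing in more than one file is a validation error."""
    merged = {}
    for filename, options in files.items():
        for key, value in options.items():
            if key in merged:
                raise ValueError(f"duplicate option {key!r} in {filename}")
            merged[key] = value
    return merged

# core.yaml and features.yaml contribute disjoint keys, so this merges cleanly.
merged = merge_target_files({
    "core.yaml": {"db.pool_size": 10},
    "features.yaml": {"feature.enabled": True},
})
```

Note that the duplicate rule applies only to files within a single target directory; a target file repeating a key from default/ is the intended override mechanism, not an error.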
Targets and regions
What is a target?
A target represents a deployment environment or region (e.g., us, de, s4s). Each target:
- Inherits all values from default/
- Can override specific values
- Gets its own ConfigMap
Target override behavior
# option-values/seer/default/values.yaml
options:
  feature.enabled: false
  feature.rate_limit: 100
  feature.timeout_ms: 5000

# option-values/seer/us/values.yaml
options:
  feature.enabled: true      # Overrides default
  feature.rate_limit: 200    # Overrides default
  # feature.timeout_ms inherits from default (5000)
Result: The us target gets a ConfigMap with:
{
  "options": {
    "feature.enabled": true,
    "feature.rate_limit": 200,
    "feature.timeout_ms": 5000
  }
}
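The inheritance rule is a shallow overlay: target keys win, everything else falls through to default/. A minimal sketch of that resolution (illustrative, not the CLI's code):

```python
def resolve_target(default_options, target_options):
    """Overlay target values on the defaults: every key from default/ is
    present, and keys repeated in the target directory take precedence."""
    return {**default_options, **target_options}

default_opts = {
    "feature.enabled": False,
    "feature.rate_limit": 100,
    "feature.timeout_ms": 5000,
}
us_opts = {"feature.enabled": True, "feature.rate_limit": 200}

# feature.timeout_ms is absent from the us target, so it inherits 5000.
resolved = resolve_target(default_opts, us_opts)
```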
ConfigMap naming
Each namespace/target combination produces a ConfigMap named sentry-options-{namespace}:
option-values/seer/us/ → sentry-options-seer (deployed to US cluster)
option-values/seer/de/ → sentry-options-seer (deployed to DE cluster)
option-values/relay/us/ → sentry-options-relay (deployed to US cluster)
Note: The target name is not in the ConfigMap name. Instead, the CD pipeline maps targets to clusters (e.g., us target → US cluster, de target → DE cluster).
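The two rules above (namespace-only naming, target-to-cluster routing) can be sketched as follows. The cluster names and `deployment_for` helper are hypothetical; the real mapping lives in the CD pipeline configuration:

```python
def configmap_name(namespace):
    """The ConfigMap name encodes only the namespace, never the target."""
    return f"sentry-options-{namespace}"

# Hypothetical target-to-cluster routing table for illustration.
TARGET_CLUSTERS = {"us": "us-cluster", "de": "de-cluster"}

def deployment_for(namespace, target):
    """Where a given namespace/target combination lands: same ConfigMap
    name for every target, but applied to a different cluster."""
    return configmap_name(namespace), TARGET_CLUSTERS[target]
```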
ConfigMap generation
CLI usage
The write command generates ConfigMaps:
sentry-options-cli write \
  --schemas schemas/ \
  --root option-values/ \
  --output-format configmap \
  --namespace seer \
  --target us \
  --commit-sha "$COMMIT_SHA" \
  --commit-timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
Outputs YAML to stdout:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sentry-options-seer
  annotations:
    generated_at: "2026-03-04T12:00:00Z"
    commit_sha: "abc123..."
    commit_timestamp: "2026-03-04T11:55:00Z"
data:
  values.json: |
    {
      "options": {
        "feature.enabled": true,
        "feature.rate_limit": 200
      },
      "generated_at": "2026-03-04T12:00:00Z"
    }
Annotations
ConfigMaps include metadata for observability:
generated_at - When the CLI generated the ConfigMap (ISO 8601)
commit_sha - Git commit that triggered the generation
commit_timestamp - When the commit was made (for SLO tracking)
These timestamps are also embedded in the values.json data and used by the client library to calculate propagation delay.
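The propagation-delay calculation reduces to subtracting the embedded timestamps. A minimal sketch, assuming the ISO 8601 `Z`-suffixed format shown in the annotations above:

```python
from datetime import datetime, timezone

def propagation_delay_secs(generated_at, applied_at):
    """Seconds between ConfigMap generation and the application loading
    the values, from ISO 8601 timestamps like "2026-03-04T12:00:00Z"."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    gen = datetime.strptime(generated_at, fmt).replace(tzinfo=timezone.utc)
    app = datetime.strptime(applied_at, fmt).replace(tzinfo=timezone.utc)
    return (app - gen).total_seconds()

delay = propagation_delay_secs("2026-03-04T12:00:00Z", "2026-03-04T12:01:45Z")
```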
CD pipeline
The CD pipeline (configured in gocd/pipelines/new-sentry-options.yaml) runs on every merge to main:
- Fetch schemas from repos.json
- Validate all values against schemas
- Generate ConfigMaps for each namespace/target combination
- Apply ConfigMaps to appropriate Kubernetes clusters
Example pipeline logic:
# For each namespace in option-values/
for namespace in option-values/*/; do
  # For each target except default/
  for target in "$namespace"*/; do
    if [[ $(basename "$target") != "default" ]]; then
      # Generate the ConfigMap and apply it to the target's cluster
      sentry-options-cli write \
        --namespace "$(basename "$namespace")" \
        --target "$(basename "$target")" \
        --output-format configmap \
        | kubectl apply -f - --context="$(get_cluster_for_target "$(basename "$target")")"
    fi
  done
done
Pod configuration
Adding pod annotations
To mount sentry-options ConfigMaps, add these annotations to your deployment:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seer
spec:
  template:
    metadata:
      annotations:
        options.sentry.io/inject: 'true'
        options.sentry.io/namespace: seer
    spec:
      containers:
        - name: seer
          image: seer:latest
          env:
            - name: SENTRY_OPTIONS_DIR
              value: /etc/sentry-options
Multiple namespaces: Use a comma-separated list:
options.sentry.io/namespace: seer-autofix,seer-grouping,seer
How the injector works
The sentry-options injector (a Kubernetes mutating webhook or controller) watches for pods with the options.sentry.io/inject annotation and automatically:
- Adds volumes for each namespace ConfigMap:
volumes:
  - name: sentry-options-seer
    configMap:
      name: sentry-options-seer
- Adds volume mounts to containers:
volumeMounts:
  - name: sentry-options-seer
    mountPath: /etc/sentry-options/values/seer
    readOnly: true
You don’t need to manually configure volumes or mounts - the injector handles this based on annotations.
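The injector's annotation-to-patch logic can be modeled as a pure function. This is an illustrative sketch of the behavior described above, not the webhook's actual code; the dict shapes mirror the Kubernetes pod-spec fields:

```python
def injector_patch(annotations):
    """Given pod annotations, return the volumes and volumeMounts the
    injector would add: one pair per namespace in the (possibly
    comma-separated) options.sentry.io/namespace annotation."""
    if annotations.get("options.sentry.io/inject") != "true":
        return [], []
    namespaces = annotations["options.sentry.io/namespace"].split(",")
    volumes, mounts = [], []
    for ns in (n.strip() for n in namespaces):
        name = f"sentry-options-{ns}"
        volumes.append({"name": name, "configMap": {"name": name}})
        mounts.append({
            "name": name,
            "mountPath": f"/etc/sentry-options/values/{ns}",
            "readOnly": True,
        })
    return volumes, mounts
```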
Directory structure in pods
After injection, pods have this file layout:
/etc/sentry-options/
├── schemas/           # From Docker image
│   └── seer/
│       └── schema.json
└── values/            # From ConfigMap
    └── seer/
        └── values.json
The client library reads both schemas (for validation and defaults) and values (for runtime config).
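The schemas-plus-values split can be sketched as a two-layer load: schema defaults first, mounted ConfigMap values on top. The schema and values shapes here are simplified for illustration and are not the client library's real API:

```python
import json

def load_options(schema_json, values_json):
    """Resolve runtime options: start from the schema defaults, then
    overlay values from the mounted ConfigMap (if one is present)."""
    schema = json.loads(schema_json)
    defaults = {k: v.get("default") for k, v in schema["properties"].items()}
    values = json.loads(values_json)["options"] if values_json else {}
    return {**defaults, **values}

schema = '''{"properties": {
    "feature.enabled": {"default": false},
    "feature.rate_limit": {"default": 100}
}}'''
values = '{"options": {"feature.enabled": true}}'
opts = load_options(schema, values)
```

Passing `None` for the values file models a pod that started before its ConfigMap was deployed: it simply runs on schema defaults, which is the graceful-degradation behavior described in the setup workflow below.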
Hot-reload behavior
When you update values in sentry-options-automator:
- CI validates new values
- CD deploys updated ConfigMap (~30 seconds)
- Kubelet syncs ConfigMap to pod filesystem (~60-90 seconds)
- Client library detects file change (~5 seconds)
Total latency: ~2 minutes from merge to application reload
No pod restart required - values update automatically while the pod runs.
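One way a client could implement step 4 (detecting the file change) is by polling a content fingerprint of the mounted values.json; hashing the contents rather than checking mtime is robust to the symlink swaps kubelet uses when syncing ConfigMaps. This is a sketch of the idea, not the client library's actual mechanism:

```python
import hashlib

def file_fingerprint(path):
    """Hash the file contents; a changed hash means kubelet synced a new
    ConfigMap version and the client should reload."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def maybe_reload(path, last_fingerprint, reload_fn):
    """One poll step: call reload_fn only when values.json actually changed,
    and return the fingerprint to carry into the next poll."""
    current = file_fingerprint(path)
    if current != last_fingerprint:
        reload_fn()
    return current
```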
Deployment workflow
Initial setup
- Service repo: Create and merge schema
- Automator repo: Add entry to repos.json with schema SHA
- Automator repo: Create values files for default and any region targets
- Wait for CD: ConfigMaps deploy to all clusters
- ops repo: Add pod annotations to deployment
- Deploy: Roll out pods with annotations
Pods start normally without ConfigMaps (using schema defaults). You can add pod annotations anytime - the injector will handle missing ConfigMaps gracefully.
Updating values
- Edit YAML files in option-values/{namespace}/{target}/
- Create PR, wait for CI validation
- Merge to main
- CD automatically deploys updated ConfigMaps
- Pods reload values within ~2 minutes
Updating schemas
- Service repo: Update schema, merge PR
- Get commit SHA from merge
- Automator repo: Update SHA in repos.json
- Automator repo: Update values if needed (same PR)
- Merge automator PR
- CD validates new values against new schema and deploys
Observability
The client library emits Sentry transactions on every reload with metrics:
reload_duration_ms - How long the reload took
propagation_delay_secs - Time from ConfigMap generation to application
generated_at - When ConfigMap was created
applied_at - When application loaded values
Use these metrics to track deployment latency and catch configuration issues.
Troubleshooting
See Troubleshooting for common deployment issues and solutions.