After setting up your service repository with schemas and client library integration, register your service in sentry-options-automator to deploy configuration values to Kubernetes clusters.
Prerequisites
Your service schema must exist in your service repository and be merged to the default branch before proceeding. The CI fetches schemas for validation.
Before starting:

- Schema exists at `{your-repo}/sentry-options/schemas/{namespace}/schema.json`
- Schema is merged to the default branch
- You have the merge commit SHA
Setup steps
Register service in repos.json
Add an entry to `repos.json` in the sentry-options-automator repository:

```json
{
  "repos": {
    "seer": {
      "url": "https://github.com/getsentry/seer",
      "path": "sentry-options/",
      "sha": "abc123def456789..."
    }
  }
}
```
Field descriptions:

| Field | Required | Description | Example |
|-------|----------|-------------|---------|
| `url` | Yes | GitHub repository URL | `https://github.com/getsentry/seer` |
| `path` | Yes | Path to schemas directory (contains `{namespace}/schema.json`) | `sentry-options/` |
| `sha` | Yes | Commit SHA containing the schema (pinned for reproducibility) | `abc123def456...` |
The SHA pins the schema version used for validation. Update this SHA whenever you change your schema.
Create default values file
Create a base values file at `option-values/{namespace}/default/values.yaml`:

```yaml
# option-values/seer/default/values.yaml
options:
  feature.enabled: true
  feature.rate_limit: 200
  feature.enabled_slugs:
    - getsentry
    - test-org
```
The default/ directory contains base values inherited by all targets (regions). Only include options you want to override. Omitted options use schema defaults.
Create region-specific overrides (optional)
Create target-specific values for different deployment environments:

```yaml
# option-values/seer/us/values.yaml
options:
  feature.rate_limit: 500  # Higher limit for US region
```

```yaml
# option-values/seer/de/values.yaml
options:
  feature.rate_limit: 300  # Different limit for DE region
```
Target values are merged with defaults:
- US gets: `feature.enabled: true`, `rate_limit: 500`, `enabled_slugs: [...]`
- DE gets: `feature.enabled: true`, `rate_limit: 300`, `enabled_slugs: [...]`
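The inheritance described above amounts to a per-option override of the defaults. A minimal sketch, assuming a shallow key-level merge (which matches the examples; the actual CLI may merge differently):

```python
# Base values from option-values/seer/default/values.yaml
default_values = {
    "feature.enabled": True,
    "feature.rate_limit": 200,
    "feature.enabled_slugs": ["getsentry", "test-org"],
}
# Target-specific overrides from option-values/seer/us/values.yaml
us_overrides = {"feature.rate_limit": 500}

# Shallow merge: target keys win, everything else is inherited.
merged_us = {**default_values, **us_overrides}
print(merged_us["feature.rate_limit"])  # 500
print(merged_us["feature.enabled"])     # True
```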
Submit pull request
Create a PR with your changes:

```shell
git checkout -b add-seer-options
git add repos.json option-values/seer/
git commit -m "Add sentry-options config for seer"
git push origin add-seer-options
```
The CI will:

1. Fetch your schema using the SHA from `repos.json`
2. Validate all values against your schema
3. Check for type mismatches, unknown options, and structural errors
Repository structure
The sentry-options-automator repository structure:
```
sentry-options-automator/
├── repos.json                          # Schema source registry
├── option-values/                      # Option values (new system)
│   └── {namespace}/
│       ├── default/
│       │   └── values.yaml             # Base values (required)
│       ├── us/
│       │   └── values.yaml             # US region overrides
│       └── de/
│           └── values.yaml             # DE region overrides
├── .github/workflows/
│   └── sentry-options-validate.yml     # Validation CI
└── gocd/pipelines/
    └── new-sentry-options.yaml         # Deployment CD pipeline
```
Understanding targets
What is a target?
A target represents a deployment environment or region (e.g., us, de, s4s). Each target gets its own ConfigMap with values merged from default/ plus target-specific overrides.
Target inheritance:
```
default/values.yaml (base)
        ↓
  ├→ us/values.yaml   (default + us merged)
  ├→ de/values.yaml   (default + de merged)
  └→ s4s/values.yaml  (default + s4s merged)
```
ConfigMap generation:
Each namespace/target combination produces one ConfigMap:
```
option-values/
├── seer/
│   ├── default/values.yaml  → Not deployed directly (inherited by others)
│   ├── us/values.yaml       → ConfigMap: sentry-options-seer (in US cluster)
│   └── de/values.yaml       → ConfigMap: sentry-options-seer (in DE cluster)
└── relay/
    ├── default/values.yaml  → Not deployed directly
    └── us/values.yaml       → ConfigMap: sentry-options-relay (in US cluster)
```
YAML files must have a single top-level `options` key:

```yaml
options:
  option-name: value
  another-option: 42
  array-option:
    - item1
    - item2
```
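A quick structural check for this invariant can be sketched in Python (the YAML loading step is assumed; here the file is represented as an already-parsed dict):

```python
def check_values_structure(doc: dict) -> None:
    """Reject files that do not have exactly one top-level 'options' mapping."""
    if set(doc) != {"options"}:
        raise ValueError(f"expected a single top-level 'options' key, got {sorted(doc)}")
    if not isinstance(doc["options"], dict):
        raise ValueError("'options' must be a mapping of option names to values")

# A well-formed values.yaml, once parsed, has exactly this shape:
check_values_structure({"options": {"option-name": "value", "another-option": 42}})
```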
Type examples

Option values support strings, numbers, booleans, and arrays, which can be mixed in one file:

```yaml
options:
  # String values
  system.url-prefix: "https://sentry.io"
  log.level: "info"
  # Numeric values
  feature.rate_limit: 200
  float-option: 0.99
  # Boolean values
  feature.enabled: true
  # Array values
  feature.enabled_slugs:
    - getsentry
    - test-org
```
Real example from test suite
```yaml
# examples/configs/sentry-options-testing/default/values.yaml
options:
  example-option: "UPDATED-VALUE"
  float-option: 0.99
  bool-option: false
```
Updating schemas workflow
When you update your schema in the service repo, you must update the SHA in repos.json:
```
Service Repo                    sentry-options-automator
────────────                    ─────────────────────────
PR: Update schema.json     →    PR: Update repos.json SHA
        ↓                               ↓
      Merge                     Update values if needed
        ↓                               ↓
 (commit abc123)                      Merge
                                        ↓
                        CI validates values against new schema
                                        ↓
                              CD deploys new ConfigMaps
```
Step-by-step process
Merge schema change
Merge your schema update PR in your service repository (e.g., getsentry/seer).
Get merge commit SHA
```shell
# Get the SHA of the merge commit
git log -1 --format="%H"
# Output: abc123def456789...
```
Update repos.json
Update the SHA in `sentry-options-automator/repos.json` to the new merge commit:

```json
{
  "repos": {
    "seer": {
      "url": "https://github.com/getsentry/seer",
      "path": "sentry-options/",
      "sha": "abc123def456789..."
    }
  }
}
```
Update values if needed
If you added new options, update `option-values/{namespace}/default/values.yaml`:

```yaml
options:
  feature.enabled: true
  feature.rate_limit: 200
  new.option: "value"  # New option from schema update
```
Submit PR
Create and merge the PR in sentry-options-automator. CI will validate values against the new schema.
CI validation
The CI pipeline validates your changes:
Schema fetching
```shell
sentry-options-cli fetch-schemas \
  --config repos.json \
  --out schemas/
```
Fetches all schemas from registered repositories using the SHA pins.
Values validation
```shell
sentry-options-cli validate-values \
  --schemas schemas/ \
  --root option-values/
```
Validates all YAML files against their schemas:

- Type checking (string vs integer, etc.)
- Unknown option detection
- Array element type validation
- Required field validation
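To illustrate the first two checks, here is a toy validator against a hand-written schema dict. This is a sketch of the idea only; the real schema format and error messages of `sentry-options-cli` differ:

```python
# Hypothetical schema: option name -> expected Python type.
SCHEMA = {
    "feature.enabled": bool,
    "feature.rate_limit": int,
    "feature.enabled_slugs": list,
}

def validate(options: dict) -> list[str]:
    """Collect type-mismatch and unknown-option errors for one values file."""
    errors = []
    for name, value in options.items():
        if name not in SCHEMA:                    # unknown option detection
            errors.append(f"unknown option: {name}")
        elif not isinstance(value, SCHEMA[name]):  # type checking
            errors.append(
                f"{name}: expected {SCHEMA[name].__name__}, got {type(value).__name__}"
            )
    return errors

print(validate({"feature.rate_limit": "100"}))
# ['feature.rate_limit: expected int, got str']
```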
Common validation errors

Type mismatch (❌):

```yaml
options:
  feature.rate_limit: "100"  # Schema expects integer, got string
```

Unknown option (❌):

```yaml
options:
  feature.nonexistent_option: true  # Option not defined in the schema
```

Invalid array elements (❌):

```yaml
options:
  feature.enabled_slugs:
    - getsentry
    - 42  # Schema expects string elements, got integer
```
CD deployment
On merge to main, the CD pipeline:
Fetches schemas from all registered repos
Generates ConfigMaps for each namespace/target combination
Applies ConfigMaps to corresponding Kubernetes clusters
ConfigMap generation command
```shell
sentry-options-cli write \
  --schemas schemas/ \
  --root option-values/ \
  --output-format configmap \
  --namespace seer \
  --target us \
  --commit-sha "$COMMIT_SHA" \
  --commit-timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```
This runs once per namespace/target combination (excluding default).
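The per-combination iteration can be sketched by walking the `option-values/` tree and skipping each namespace's `default/` directory. This is illustrative only (the layout is rebuilt in a temp directory; the real pipeline's traversal may differ):

```python
from pathlib import Path
import tempfile

# Recreate the example option-values/ layout from this page.
root = Path(tempfile.mkdtemp())
for p in ["seer/default", "seer/us", "seer/de", "relay/default", "relay/us"]:
    (root / p).mkdir(parents=True)
    (root / p / "values.yaml").touch()

# One (namespace, target) pair per deployed ConfigMap; default/ is
# inherited by the targets rather than deployed itself.
pairs = sorted(
    (ns.name, target.name)
    for ns in root.iterdir()
    for target in ns.iterdir()
    if target.name != "default"
)
print(pairs)  # [('relay', 'us'), ('seer', 'de'), ('seer', 'us')]
```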
The generated ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sentry-options-seer
data:
  values.json: |
    {
      "options": {
        "feature.enabled": true,
        "feature.rate_limit": 500
      },
      "generated_at": "2026-03-04T12:00:00Z"
    }
```
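On the consuming side, a pod could read the mounted `values.json` roughly like this. The mount path is an assumption (simulated with a temp file here); the real path depends on your pod spec and client library:

```python
import json
import tempfile
from pathlib import Path

def load_options(path: Path) -> dict:
    """Parse the ConfigMap's values.json payload and return the options map."""
    payload = json.loads(path.read_text())
    return payload["options"]

# Simulate the file a mounted ConfigMap would provide:
sample = {
    "options": {"feature.enabled": True, "feature.rate_limit": 500},
    "generated_at": "2026-03-04T12:00:00Z",
}
mounted = Path(tempfile.mkdtemp()) / "values.json"
mounted.write_text(json.dumps(sample))

print(load_options(mounted))  # {'feature.enabled': True, 'feature.rate_limit': 500}
```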
Multiple namespaces
A single service can have multiple namespaces:
```
option-values/
├── seer/
│   ├── default/values.yaml
│   └── us/values.yaml
└── seer-autofix/
    ├── default/values.yaml
    └── us/values.yaml
```

Each namespace gets its own ConfigMap:

- `sentry-options-seer`
- `sentry-options-seer-autofix`
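The naming pattern above can be captured in a one-line helper (illustrative; the convention is inferred from the examples on this page):

```python
def configmap_name(namespace: str) -> str:
    # One ConfigMap per namespace, prefixed with "sentry-options-".
    return f"sentry-options-{namespace}"

print([configmap_name(ns) for ns in ("seer", "seer-autofix")])
# ['sentry-options-seer', 'sentry-options-seer-autofix']
```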
Testing values locally
Before submitting your PR, test values locally:
```shell
# Clone sentry-options-automator
git clone git@github.com:getsentry/sentry-options-automator.git
cd sentry-options-automator

# Download CLI
curl -sSL -o sentry-options-cli \
  https://github.com/getsentry/sentry-options/releases/download/0.0.14/sentry-options-cli-x86_64-unknown-linux-musl
chmod +x sentry-options-cli

# Fetch schemas
./sentry-options-cli fetch-schemas --config repos.json --out schemas/

# Validate your values
./sentry-options-cli validate-values \
  --schemas schemas/ \
  --root option-values/

# Generate ConfigMap for testing
./sentry-options-cli write \
  --schemas schemas/ \
  --root option-values/ \
  --output-format configmap \
  --namespace seer \
  --target us
```
Next steps
- Local testing: Test option values during development
- Schema evolution: Learn about schema update rules
What NOT to put in option values
sentry-options is for feature flags and tunable parameters, not secrets.

Keep these as environment variables or Kubernetes secrets:

- Database URLs, API keys, credentials
- Infrastructure config (PORT, worker counts)
- Sentry DSN
- Authentication tokens
ConfigMaps are not encrypted and are visible to anyone with cluster access.