Configuration

Datum Cloud can be configured through environment variables, command-line flags, and Kubernetes manifests. This guide covers all available configuration options.

Configuration Methods

Environment Variables

Set via deployment manifests

Command-Line Flags

Pass to controller manager

Config Files

YAML configuration files

Controller Manager Configuration

The Datum controller manager accepts various configuration options. Default configuration from config/manager/manager.yaml:51:

Command-Line Arguments

args:
  - controller-manager
  - --metrics-bind-address=$(METRICS_BIND_ADDRESS)
  - --health-probe-bind-address=$(HEALTH_PROBE_BIND_ADDRESS)
  - --leader-elect=$(LEADER_ELECT)
  - --leader-election-id=$(LEADER_ELECTION_ID)
  - --leader-election-namespace=$(LEADER_ELECTION_NAMESPACE)
  - --leader-election-lease-duration=$(LEADER_ELECTION_LEASE_DURATION)
  - --leader-election-renew-deadline=$(LEADER_ELECTION_RENEW_DEADLINE)
  - --leader-election-retry-period=$(LEADER_ELECTION_RETRY_PERIOD)
  - --leader-election-release-on-cancel=$(LEADER_ELECTION_RELEASE_ON_CANCEL)
  - --metrics-secure=$(METRICS_SECURE)
  - --enable-http2=$(ENABLE_HTTP2)
  - --config=$(CONFIG_FILE)

Environment Variables

From config/manager/manager.yaml:71:
METRICS_BIND_ADDRESS
  Address for the Prometheus metrics endpoint.
  • Default: "0" (disabled, use METRICS_SECURE instead)
  • Format: <host>:<port> or "0" to disable
  • Example: ":8080"
METRICS_SECURE
  Enable the secure metrics endpoint with TLS.
  • Default: "true"
  • Values: "true" or "false"
METRICS_CERT_PATH
  Path to TLS certificates for metrics.
  • Default: "" (uses default paths)
  • Example: "/certs/metrics"
METRICS_CERT_NAME
  Certificate filename.
  • Default: "tls.crt"
METRICS_CERT_KEY
  Private key filename.
  • Default: "tls.key"
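As an aside, the <host>:<port> convention above follows the usual Kubernetes bind-address semantics, where "0" (or an empty string) disables the endpoint and an empty host means all interfaces. A minimal illustrative sketch of that interpretation (not Datum source code):

```python
# Illustrative sketch: how a <host>:<port> bind-address string such as
# METRICS_BIND_ADDRESS is conventionally interpreted; "0" disables it.
def parse_bind_address(value: str):
    """Return (host, port), or None when the endpoint is disabled."""
    if value in ("0", ""):
        return None  # metrics endpoint disabled
    host, sep, port = value.rpartition(":")
    if not sep:
        raise ValueError(f"expected <host>:<port> or '0', got {value!r}")
    return host, int(port)  # empty host means listen on all interfaces

print(parse_bind_address(":8080"))  # ('', 8080)
print(parse_bind_address("0"))      # None
```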

Customizing Configuration

Via kubectl

Edit the deployment directly:
kubectl edit deployment datum-controller-manager -n datum-system
Find the env section and modify values:
env:
  - name: METRICS_BIND_ADDRESS
    value: ":8080"  # Change from "0"
  - name: LEADER_ELECTION_LEASE_DURATION
    value: "30s"    # Change from "15s"
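When tuning these durations, keep them mutually consistent. Assuming Datum uses the standard Kubernetes client-go leader election (as controller-runtime does), the lease duration must exceed the renew deadline, which in turn must exceed the retry period. A small sanity-check sketch of that constraint:

```python
# Sanity check for leader-election timing, assuming the standard client-go
# rule: lease-duration > renew-deadline > retry-period.
def validate_leader_election(lease_duration_s: int,
                             renew_deadline_s: int,
                             retry_period_s: int) -> None:
    if not lease_duration_s > renew_deadline_s:
        raise ValueError("lease duration must exceed renew deadline")
    if not renew_deadline_s > retry_period_s:
        raise ValueError("renew deadline must exceed retry period")

# A 30s lease with a 10s renew deadline and 2s retry period is valid.
validate_leader_election(30, 10, 2)
```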

Via Kustomize

Create a kustomization overlay:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - github.com/datum-cloud/datum/config/default?ref=main

patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: controller-manager
        namespace: datum-system
      spec:
        template:
          spec:
            containers:
              - name: datum-controller-manager
                env:
                  - name: LEADER_ELECTION_LEASE_DURATION
                    value: "30s"
                  - name: LEADER_ELECTION_RENEW_DEADLINE
                    value: "20s"
    target:
      kind: Deployment
      name: controller-manager
Apply:
kubectl apply -k .

Via Helm Values (Coming Soon)

values.yaml
controllerManager:
  env:
    LEADER_ELECTION_LEASE_DURATION: "30s"
    METRICS_BIND_ADDRESS: ":8080"
  
  resources:
    limits:
      cpu: 1000m
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi

Resource Limits

Default resource limits from config/manager/manager.yaml:130:
resources:
  limits:
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 10m
    memory: 64Mi

Adjusting for Scale

For deployments managing fewer than 100 resources, the defaults are sufficient:
resources:
  limits:
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 10m
    memory: 64Mi

Quota Policy Configuration

Quota policies are defined in config/services/resourcemanager.miloapis.com/quota/.

Customizing Project Quotas

Personal Organization Quota (grant-policies/personal-org-grant-policy.yaml:32):
spec:
  allowances:
    - resourceType: resourcemanager.miloapis.com/projects
      buckets:
        - amount: 2  # Change to desired limit
Standard Organization Quota (grant-policies/standard-org-grant-policy.yaml:32):
spec:
  allowances:
    - resourceType: resourcemanager.miloapis.com/projects
      buckets:
        - amount: 10  # Change to desired limit
Apply changes:
kubectl apply -k config/services/resourcemanager.miloapis.com/quota

Adding New Quota Types

Create a ResourceRegistration:
apiVersion: quota.miloapis.com/v1alpha1
kind: ResourceRegistration
metadata:
  name: workloads-per-project
spec:
  consumerType:
    apiGroup: resourcemanager.miloapis.com
    kind: Project
  
  type: Entity
  resourceType: compute.datumapis.com/workloads
  
  description: "Maximum number of workloads per project"
  
  baseUnit: workload
  displayUnit: workloads
  unitConversionFactor: 1
  
  claimingResources:
    - apiGroup: compute.datumapis.com
      kind: Workload
Create a GrantCreationPolicy:
apiVersion: quota.miloapis.com/v1alpha1
kind: GrantCreationPolicy
metadata:
  name: project-workload-quota-policy
spec:
  trigger:
    resource:
      apiVersion: resourcemanager.miloapis.com/v1alpha1
      kind: Project
  target:
    resourceGrantTemplate:
      metadata:
        name: "workload-quota"
        namespace: "project-{{ trigger.metadata.name }}"
      spec:
        consumerRef:
          apiGroup: resourcemanager.miloapis.com
          kind: Project
          name: "{{ trigger.metadata.name }}"
        allowances:
          - resourceType: compute.datumapis.com/workloads
            buckets:
              - amount: 50
Create a ClaimCreationPolicy:
apiVersion: quota.miloapis.com/v1alpha1
kind: ClaimCreationPolicy
metadata:
  name: workload-quota-enforcement
spec:
  trigger:
    resource:
      apiVersion: compute.datumapis.com/v1alpha1
      kind: Workload
  target:
    resourceClaimTemplate:
      metadata:
        name: "workload-{{ trigger.metadata.name }}"
        namespace: "project-{{ trigger.metadata.namespace }}"
      spec:
        consumerRef:
          apiGroup: resourcemanager.miloapis.com
          kind: Project
          name: "{{ trigger.metadata.namespace }}"
        requests:
          - resourceType: compute.datumapis.com/workloads
            amount: 1
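The three objects above work together: the ResourceRegistration defines the workload resource type, the GrantCreationPolicy allots each project a bucket of 50, and each Workload triggers a claim that consumes 1 from that bucket. A toy Python model of the resulting accounting (illustrative only, not Datum's implementation):

```python
# Toy model of the grant/claim accounting defined above: a grant gives a
# project an allowance, and each admitted claim draws from it.
class QuotaBucket:
    def __init__(self, resource_type: str, amount: int):
        self.resource_type = resource_type
        self.granted = amount  # from the ResourceGrant's allowance bucket
        self.claimed = 0       # sum of admitted ResourceClaims

    def claim(self, amount: int) -> bool:
        """Admit the claim only if it fits within the granted allowance."""
        if self.claimed + amount > self.granted:
            return False  # quota exceeded; the triggering Workload is denied
        self.claimed += amount
        return True

bucket = QuotaBucket("compute.datumapis.com/workloads", 50)
assert all(bucket.claim(1) for _ in range(50))  # 50 workloads admitted
assert bucket.claim(1) is False                 # the 51st is denied
```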

Admission Policies

Validation policies are in config/services/resourcemanager.miloapis.com/validation/.

Customizing Project Name Validation

From validation/project-name-validation-policy.yaml:1:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "validate-project-name"
spec:
  validations:
    # Change minimum length
    - expression: "size(object.metadata.name) >= 6"  # Change 6 to desired min
      message: "Project name is too short..."
    
    # Change maximum length
    - expression: "size(object.metadata.name) <= 30"  # Change 30 to desired max
      message: "Project name is too long..."
    
    # Add/remove reserved words
    - expression: "!object.metadata.name.contains('datum')"
      message: "Project name contains reserved word..."
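Before changing the CEL expressions, you can prototype the rules locally. A hypothetical Python equivalent of the three checks, useful for testing candidate project names without a cluster round-trip:

```python
# Hypothetical local re-implementation of the CEL rules above, for quickly
# checking candidate project names. Not the admission policy itself.
def validate_project_name(name: str) -> list:
    errors = []
    if len(name) < 6:        # size(object.metadata.name) >= 6
        errors.append("Project name is too short")
    if len(name) > 30:       # size(object.metadata.name) <= 30
        errors.append("Project name is too long")
    if "datum" in name:      # !object.metadata.name.contains('datum')
        errors.append("Project name contains reserved word")
    return errors

print(validate_project_name("my-app-prod"))  # []
print(validate_project_name("abc"))          # ['Project name is too short']
```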

Monitoring Configuration

Enable Prometheus ServiceMonitor

From config/prometheus/kustomization.yaml:1:
resources:
  - monitor.yaml
Apply:
kubectl apply -k config/prometheus

Metrics Configuration

Metrics are exposed on port 8443 (HTTPS) by default:
ports:
  - name: https
    containerPort: 8443
    protocol: TCP
Access metrics:
# Port-forward to metrics endpoint
kubectl port-forward -n datum-system deployment/datum-controller-manager 8443:8443

# Query metrics (requires TLS)
curl -k https://localhost:8443/metrics

Security Configuration

Pod Security Context

From config/manager/manager.yaml:43:
securityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault

containers:
  - securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - "ALL"

Service Account

Datum runs with a dedicated service account:
kubectl get serviceaccount controller-manager -n datum-system
RBAC permissions are defined in config/rbac/.

High Availability Configuration

Multiple Replicas

# Scale to 3 replicas
kubectl scale deployment datum-controller-manager --replicas=3 -n datum-system
Leader election ensures only one active controller:
# Check leader
kubectl get lease -n datum-system

Resource Requests for HA

resources:
  requests:
    cpu: 100m      # Guaranteed CPU
    memory: 128Mi  # Guaranteed memory
  limits:
    cpu: 1000m     # Max CPU
    memory: 512Mi  # Max memory

Namespace Configuration

Datum uses the datum-system namespace by default (from config/default/kustomization.yaml:2):
namespace: datum-system
Change namespace:
kustomization.yaml
namespace: my-custom-namespace

resources:
  - github.com/datum-cloud/datum/config/default?ref=main

Image Configuration

Default image from config/manager/manager.yaml:108:
image: ghcr.io/datum-cloud/datum:latest
Use specific version:
cd config/manager
kustomize edit set image controller=ghcr.io/datum-cloud/datum:v0.1.0
Use private registry:
kustomize edit set image controller=my-registry.com/datum:custom

Logging Configuration

Datum uses structured logging. Configure log level:
env:
  - name: LOG_LEVEL
    value: "info"  # debug, info, warn, error
View logs:
kubectl logs -n datum-system -l control-plane=controller-manager --tail=100 -f

Best Practices

Use version tags

Pin to specific image versions in production, not latest.

Set resource limits

Always set requests and limits to prevent resource starvation.

Enable metrics

Configure Prometheus monitoring for observability.

Review RBAC

Audit service account permissions regularly.

Use HA

Run multiple replicas with leader election in production.

Backup etcd

Regular backups of Kubernetes etcd for disaster recovery.

Troubleshooting

Configuration not applied

# Verify deployment
kubectl get deployment datum-controller-manager -n datum-system -o yaml

# Check environment variables
kubectl get deployment datum-controller-manager -n datum-system -o jsonpath='{.spec.template.spec.containers[0].env}'

# Restart deployment
kubectl rollout restart deployment datum-controller-manager -n datum-system

Leader election issues

# Check lease
kubectl get lease -n datum-system

# Check logs for leader election messages
kubectl logs -n datum-system -l control-plane=controller-manager | grep leader

Quota policies not working

# Verify quota CRDs
kubectl get crds | grep quota

# Check policies
kubectl get grantcreationpolicies
kubectl get claimcreationpolicies
kubectl get resourceregistrations

# Reapply policies
kubectl apply -k config/services/resourcemanager.miloapis.com/quota

Next Steps

Operations

Learn operational procedures

Monitoring

Set up observability

Security

Security best practices

Quota Management

Advanced quota configuration
