
Overview

VCVerifier can be deployed on Kubernetes using the official Helm chart provided by the i4Trust project. The chart provides a production-ready deployment with configurable resources, scaling, and integration options.

Prerequisites

  • Kubernetes cluster (v1.19+)
  • Helm 3.x installed
  • kubectl configured to access your cluster

Installation with Helm

1. Add the Helm repository

Add the i4Trust Helm repository:
helm repo add i4trust https://i4trust.github.io/helm-charts/
helm repo update
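To confirm the repository was added correctly, you can search it for the chart. This assumes the chart is published under the name vcverifier:

```shell
# List the available chart versions from the i4trust repository.
helm search repo i4trust/vcverifier --versions
```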
2. Create a values file

Create a values.yaml file with your configuration:
values.yaml
vcverifier:
  # Image configuration
  image:
    repository: quay.io/fiware/vcverifier
    tag: latest  # pin a specific version for production
    pullPolicy: IfNotPresent

  # Service configuration
  service:
    type: ClusterIP
    port: 8080

  # Ingress configuration
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: verifier.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: verifier-tls
        hosts:
          - verifier.example.com

  # Resource limits
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 128Mi

  # VCVerifier configuration
  config:
    server:
      port: 8080
      templateDir: "views/"
      staticDir: "views/static/"
    
    logging:
      level: "INFO"
      jsonLogging: true
      logRequests: true
    
    verifier:
      did: did:key:z6MkigCEnopwujz8Ten2dzq91nvMjqbKQYcifuZhqBsEkH7g
      sessionExpiry: 30
      generateKey: true
      supportedModes:
        - urlEncoded
        - byReference
        - byValue
    
    configRepo:
      configEndpoint: http://config-service:8080

3. Install the chart

Deploy VCVerifier to your cluster:
helm install vcverifier i4trust/vcverifier \
  -f values.yaml \
  --namespace vcverifier \
  --create-namespace
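Before installing, you can render the manifests locally or perform a dry run to catch values errors early. Both commands are standard Helm features:

```shell
# Render the chart templates locally without touching the cluster.
helm template vcverifier i4trust/vcverifier -f values.yaml --namespace vcverifier

# Or do a server-side dry run, which also validates against the cluster's API.
helm install vcverifier i4trust/vcverifier \
  -f values.yaml \
  --namespace vcverifier \
  --create-namespace \
  --dry-run --debug
```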

4. Verify the deployment

Check the deployment status:
kubectl get pods -n vcverifier
kubectl get svc -n vcverifier
kubectl get ingress -n vcverifier
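Beyond listing resources, you can wait for the rollout to complete and probe the health endpoint from your workstation. The Deployment name is assumed to follow the release name; adjust if your chart names it differently:

```shell
# Block until the Deployment's pods are ready (or the timeout expires).
kubectl rollout status deployment/vcverifier -n vcverifier --timeout=120s

# Port-forward the service and probe the health endpoint locally.
kubectl port-forward svc/vcverifier 8080:8080 -n vcverifier &
PF_PID=$!
sleep 2
curl -fsS http://localhost:8080/health
kill "$PF_PID"
```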

Configuration options

Basic configuration

vcverifier:
  config:
    verifier:
      did: did:key:myverifier
      tirAddress: https://tir-pdc.ebsi.fiware.dev
      generateKey: true

Using ConfigMaps

Store configuration in a ConfigMap:
1. Create the ConfigMap

kubectl create configmap vcverifier-config \
  --from-file=server.yaml \
  -n vcverifier
2. Reference it in values.yaml

values.yaml
vcverifier:
  configMap:
    enabled: true
    name: vcverifier-config
    mountPath: /config
  
  env:
    - name: CONFIG_FILE
      value: /config/server.yaml

Using Secrets for private keys

Store sensitive data in Kubernetes Secrets:
1. Create the Secret

kubectl create secret generic vcverifier-keys \
  --from-file=private-key.pem \
  --from-file=certificate.pem \
  -n vcverifier
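To confirm the Secret contains the expected keys without printing their contents, describe it:

```shell
# Shows the key names and byte sizes, but not the secret values themselves.
kubectl describe secret vcverifier-keys -n vcverifier
```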
2. Mount the Secret in the deployment

values.yaml
vcverifier:
  secrets:
    - name: vcverifier-keys
      mountPath: /keys
      items:
        - key: private-key.pem
          path: private-key.pem
        - key: certificate.pem
          path: certificate.pem
  
  config:
    verifier:
      generateKey: false
      keyPath: /keys/private-key.pem
      clientIdentification:
        keyPath: /keys/private-key.pem
        certificatePath: /keys/certificate.pem

Ingress configuration

NGINX Ingress

values.yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  hosts:
    - host: verifier.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: verifier-tls-cert
      hosts:
        - verifier.example.com

Traefik Ingress

values.yaml
ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  hosts:
    - host: verifier.example.com
      paths:
        - path: /
          pathType: Prefix
Ensure your DNS records point to your Kubernetes cluster’s ingress controller before configuring TLS certificates.
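A quick way to check this is to compare the hostname's DNS answer with the ingress controller's external address. The controller's service name and namespace below are assumptions for a typical ingress-nginx installation; adjust them for your cluster:

```shell
# External IP of the ingress controller (assumed ingress-nginx install).
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# The hostname should resolve to the same address.
dig +short verifier.example.com
```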

Scaling and high availability

Horizontal Pod Autoscaling

values.yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
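Note that CPU- and memory-based autoscaling requires metrics-server (or an equivalent metrics pipeline) in the cluster. Once enabled, you can inspect the autoscaler's current state:

```shell
# Current vs. target utilization and the active replica count.
kubectl get hpa -n vcverifier

# Detailed events, e.g. why the HPA cannot read metrics.
kubectl describe hpa vcverifier -n vcverifier
```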

Pod Disruption Budget

values.yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1

Anti-affinity rules

Distribute pods across nodes:
values.yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - vcverifier
          topologyKey: kubernetes.io/hostname

Monitoring and observability

Health checks

Configure liveness and readiness probes:
values.yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

Prometheus monitoring

Enable Prometheus metrics:
values.yaml
serviceMonitor:
  enabled: true
  interval: 30s
  path: /metrics
  labels:
    prometheus: kube-prometheus
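Before checking Prometheus targets, it can be useful to confirm the metrics endpoint itself responds. A sketch, assuming the service exposes /metrics on port 8080 as configured above:

```shell
# Port-forward the service and sample the first lines of the metrics output.
kubectl port-forward svc/vcverifier 8080:8080 -n vcverifier &
PF_PID=$!
sleep 2
curl -fsS http://localhost:8080/metrics | head -n 20
kill "$PF_PID"
```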

Logging

Configure structured JSON logging for log aggregation:
values.yaml
config:
  logging:
    level: "INFO"
    jsonLogging: true
    logRequests: true
    pathsToSkip:
      - /health
      - /metrics

Integration with other services

Connect to Config Service

Deploy with the Credentials Config Service:
values.yaml
vcverifier:
  config:
    configRepo:
      configEndpoint: http://credentials-config-service.vcverifier.svc.cluster.local:8080

# Also deploy config service
credentials-config-service:
  enabled: true
  image:
    repository: quay.io/fiware/credentials-config-service
    tag: "2.0.0"

Service mesh integration

For Istio integration:
values.yaml
podAnnotations:
  sidecar.istio.io/inject: "true"
  traffic.sidecar.istio.io/includeInboundPorts: "8080"

Upgrading

Upgrade an existing deployment:
helm upgrade vcverifier i4trust/vcverifier \
  -f values.yaml \
  -n vcverifier
Always test upgrades in a staging environment first. Review the changelog for breaking changes.
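For safer upgrades, you can preview the change and have Helm roll back automatically on failure. The diff preview requires the helm-diff plugin, which is a separate install; --atomic and helm rollback are built in:

```shell
# Preview what would change (requires the helm-diff plugin).
helm diff upgrade vcverifier i4trust/vcverifier -f values.yaml -n vcverifier

# Upgrade atomically: Helm rolls back if the release fails to become ready.
helm upgrade vcverifier i4trust/vcverifier \
  -f values.yaml \
  -n vcverifier \
  --atomic --timeout 5m

# Manual rollback to the previous revision, if needed.
helm rollback vcverifier -n vcverifier
```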

Uninstalling

Remove the deployment:
helm uninstall vcverifier -n vcverifier
To also remove the namespace:
kubectl delete namespace vcverifier

Complete production example

production-values.yaml
vcverifier:
  # Image
  image:
    repository: quay.io/fiware/vcverifier
    tag: "1.0.0"
    pullPolicy: IfNotPresent

  # Replicas
  replicaCount: 3

  # Resources
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 200m
      memory: 256Mi

  # Autoscaling
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
    targetCPUUtilizationPercentage: 75

  # Health checks
  livenessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10

  readinessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5

  # Ingress
  ingress:
    enabled: true
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/rate-limit: "100"
    hosts:
      - host: verifier.prod.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: verifier-prod-tls
        hosts:
          - verifier.prod.example.com

  # Configuration
  config:
    server:
      port: 8080
    
    logging:
      level: "INFO"
      jsonLogging: true
      logRequests: true
      pathsToSkip:
        - /health
    
    verifier:
      did: did:key:z6MkigCEnopwujz8Ten2dzq91nvMjqbKQYcifuZhqBsEkH7g
      sessionExpiry: 30
      generateKey: false
      keyPath: /keys/private-key.pem
      keyAlgorithm: ES256
      supportedModes:
        - byReference
      clientIdentification:
        id: did:key:z6MkigCEnopwujz8Ten2dzq91nvMjqbKQYcifuZhqBsEkH7g
        keyPath: /keys/request-key.pem
        requestKeyAlgorithm: ES256
    
    configRepo:
      configEndpoint: http://credentials-config-service:8080

  # Secrets
  secrets:
    - name: vcverifier-keys
      mountPath: /keys

  # Monitoring
  serviceMonitor:
    enabled: true
    interval: 30s

  # Security
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000

  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
Deploy with:
helm install vcverifier i4trust/vcverifier \
  -f production-values.yaml \
  -n vcverifier-prod \
  --create-namespace
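Because the production values mount the vcverifier-keys Secret at /keys (with generateKey: false), that Secret must exist in the target namespace before the pods can start. A sketch of the pre-flight steps, using the key file names referenced in the values above:

```shell
# Create the namespace and the key material the release expects.
kubectl create namespace vcverifier-prod
kubectl create secret generic vcverifier-keys \
  --from-file=private-key.pem \
  --from-file=request-key.pem \
  -n vcverifier-prod
```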

Troubleshooting

Check pod status

kubectl get pods -n vcverifier
kubectl describe pod <pod-name> -n vcverifier
kubectl logs <pod-name> -n vcverifier

Debug configuration

View the effective configuration:
kubectl exec -it <pod-name> -n vcverifier -- cat /app/server.yaml

Network issues

Test connectivity from within the pod:
kubectl exec -it <pod-name> -n vcverifier -- wget -O- http://localhost:8080/health
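To check cluster-internal connectivity to the Credentials Config Service, you can run a throwaway curl pod. The service name matches the configRepo endpoint used earlier; whether the config service exposes a health path is an assumption, so a plain request to its port is enough to verify reachability:

```shell
# One-off pod that resolves the in-cluster service name and connects to it.
kubectl run net-test --rm -it --restart=Never \
  --image=curlimages/curl -n vcverifier -- \
  curl -sS -o /dev/null -w '%{http_code}\n' http://credentials-config-service:8080/
```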
