Overview
VCVerifier can be deployed on Kubernetes using the official Helm chart provided by the i4Trust project. This provides a production-ready deployment with configurable resources, scaling, and integration options.
Prerequisites
Kubernetes cluster (v1.19+)
Helm 3.x installed
kubectl configured to access your cluster
Installation with Helm
Add the Helm repository
Add the i4Trust Helm repository and refresh the local index:

```bash
helm repo add i4trust https://i4trust.github.io/helm-charts/
helm repo update
```
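To confirm the chart is reachable after adding the repository, you can search the local index (the `--versions` flag lists all published chart versions):

```shell
# List all available versions of the vcverifier chart from the i4trust repo.
helm search repo i4trust/vcverifier --versions
```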
Create a values file
Create a `values.yaml` file with your configuration:

```yaml
vcverifier:
  # Image configuration
  image:
    repository: quay.io/fiware/vcverifier
    tag: latest
    pullPolicy: IfNotPresent

  # Service configuration
  service:
    type: ClusterIP
    port: 8080

  # Ingress configuration
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: verifier.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: verifier-tls
        hosts:
          - verifier.example.com

  # Resource limits
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 128Mi

  # VCVerifier configuration
  config:
    server:
      port: 8080
      templateDir: "views/"
      staticDir: "views/static/"
    logging:
      level: "INFO"
      jsonLogging: true
      logRequests: true
    verifier:
      did: did:key:z6MkigCEnopwujz8Ten2dzq91nvMjqbKQYcifuZhqBsEkH7g
      sessionExpiry: 30
      generateKey: true
      supportedModes:
        - urlEncoded
        - byReference
        - byValue
    configRepo:
      configEndpoint: http://config-service:8080
```
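Before installing, it can help to render the chart locally; `helm template` expands the manifests without touching the cluster, so indentation or schema mistakes in the values file surface here instead of at install time:

```shell
# Render all manifests locally; inspect the output or pipe it to a validator.
helm template vcverifier i4trust/vcverifier -f values.yaml
```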
Install the chart
Deploy VCVerifier to your cluster:

```bash
helm install vcverifier i4trust/vcverifier \
  -f values.yaml \
  --namespace vcverifier \
  --create-namespace
```
Verify the deployment
Check the deployment status:

```bash
kubectl get pods -n vcverifier
kubectl get svc -n vcverifier
kubectl get ingress -n vcverifier
```
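To block until the rollout has actually completed (useful in CI pipelines), `kubectl rollout status` waits for all replicas to become ready; the deployment name assumes the default naming from the release name above:

```shell
# Wait up to two minutes for the deployment to become fully ready.
kubectl rollout status deployment/vcverifier -n vcverifier --timeout=120s
```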
Configuration options
Basic configuration
A minimal setup can use static configuration instead of an external config service, pointing the verifier directly at a trusted issuers registry:

```yaml
vcverifier:
  config:
    verifier:
      did: did:key:myverifier
      tirAddress: https://tir-pdc.ebsi.fiware.dev
      generateKey: true
```
Using ConfigMaps
Store configuration in a ConfigMap:
Create ConfigMap
```bash
kubectl create configmap vcverifier-config \
  --from-file=server.yaml \
  -n vcverifier
```
Reference in values.yaml
```yaml
vcverifier:
  configMap:
    enabled: true
    name: vcverifier-config
    mountPath: /config
  env:
    - name: CONFIG_FILE
      value: /config/server.yaml
```
Using Secrets for private keys
Store sensitive data in Kubernetes Secrets:
Create Secret
```bash
kubectl create secret generic vcverifier-keys \
  --from-file=private-key.pem \
  --from-file=certificate.pem \
  -n vcverifier
```
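The `private-key.pem` and `certificate.pem` files must exist locally before creating the Secret. A sketch for generating an ES256 (P-256) key pair with OpenSSL, matching the `keyAlgorithm: ES256` used elsewhere in this guide; the subject CN is a placeholder, so substitute your verifier's hostname:

```shell
# Generate a P-256 private key (suitable for ES256 signatures).
openssl ecparam -name prime256v1 -genkey -noout -out private-key.pem

# Issue a self-signed certificate for it (CN is a placeholder).
openssl req -new -x509 -key private-key.pem -out certificate.pem \
  -days 365 -subj "/CN=verifier.example.com"
```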
Mount Secret in deployment
```yaml
vcverifier:
  secrets:
    - name: vcverifier-keys
      mountPath: /keys
      items:
        - key: private-key.pem
          path: private-key.pem
        - key: certificate.pem
          path: certificate.pem
  config:
    verifier:
      generateKey: false
      keyPath: /keys/private-key.pem
    clientIdentification:
      keyPath: /keys/private-key.pem
      certificatePath: /keys/certificate.pem
```
Ingress configuration
NGINX Ingress
```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  hosts:
    - host: verifier.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: verifier-tls-cert
      hosts:
        - verifier.example.com
```
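The `cert-manager.io/cluster-issuer` annotation assumes a `ClusterIssuer` named `letsencrypt-prod` already exists in the cluster. A minimal sketch of such an issuer, assuming cert-manager is installed and using the HTTP-01 solver (the email address is a placeholder):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com   # placeholder: use a monitored address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```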
Traefik Ingress
```yaml
ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  hosts:
    - host: verifier.example.com
      paths:
        - path: /
          pathType: Prefix
```
Ensure your DNS records point to your Kubernetes cluster’s ingress controller before configuring TLS certificates.
Scaling and high availability
Horizontal Pod Autoscaling
```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
```
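CPU- and memory-based autoscaling only works if the cluster's resource metrics pipeline (typically metrics-server) is running. A quick way to check is to query pod metrics; if this returns per-pod usage rather than an error, the HPA has the data it needs:

```shell
# Succeeds only when the metrics API (metrics-server) is available.
kubectl top pods -n vcverifier
```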
Pod Disruption Budget
```yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1
```
Anti-affinity rules
Distribute pods across nodes:
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - vcverifier
          topologyKey: kubernetes.io/hostname
```
Monitoring and observability
Health checks
Configure liveness and readiness probes:
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3
```
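To confirm the `/health` endpoint actually responds before relying on the probes, you can port-forward the service and query it directly (assuming the service is named `vcverifier`, per the release name used in this guide):

```shell
# Forward the service port locally, hit the health endpoint, then clean up.
kubectl port-forward svc/vcverifier 8080:8080 -n vcverifier &
PF_PID=$!
sleep 2
curl -fsS http://localhost:8080/health
kill "$PF_PID"
```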
Prometheus monitoring
Enable Prometheus metrics:
```yaml
serviceMonitor:
  enabled: true
  interval: 30s
  path: /metrics
  labels:
    prometheus: kube-prometheus
```
Logging
Configure structured JSON logging for log aggregation:
```yaml
config:
  logging:
    level: "INFO"
    jsonLogging: true
    logRequests: true
    pathsToSkip:
      - /health
      - /metrics
```
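With JSON logging enabled, each log line is a structured object that downstream tooling can filter. A sketch using `jq` on the pod logs; the deployment name and the `level` field are assumptions that should be verified against your actual log output:

```shell
# Show only non-INFO entries from the verifier's JSON log stream.
kubectl logs deploy/vcverifier -n vcverifier | jq -c 'select(.level != "INFO")'
```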
Integration with other services
Connect to Config Service
Deploy with the Credentials Config Service:
```yaml
vcverifier:
  config:
    configRepo:
      configEndpoint: http://credentials-config-service.vcverifier.svc.cluster.local:8080

# Also deploy the config service
credentials-config-service:
  enabled: true
  image:
    repository: quay.io/fiware/credentials-config-service
    tag: "2.0.0"
```
Service mesh integration
For Istio integration:
```yaml
podAnnotations:
  sidecar.istio.io/inject: "true"
  traffic.sidecar.istio.io/includeInboundPorts: "8080"
```
Upgrading
Upgrade an existing deployment:
```bash
helm upgrade vcverifier i4trust/vcverifier \
  -f values.yaml \
  -n vcverifier
```
Always test upgrades in a staging environment first. Review the changelog for breaking changes.
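If an upgrade misbehaves, Helm keeps previous revisions and can roll back to the last known-good one (`helm rollback` without an explicit revision number targets the immediately preceding release):

```shell
# Inspect the release history, then roll back to the previous revision.
helm history vcverifier -n vcverifier
helm rollback vcverifier -n vcverifier
```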
Uninstalling
Remove the deployment:

```bash
helm uninstall vcverifier -n vcverifier
```

To also remove the namespace:

```bash
kubectl delete namespace vcverifier
```
Complete production example
```yaml
vcverifier:
  # Image
  image:
    repository: quay.io/fiware/vcverifier
    tag: "1.0.0"
    pullPolicy: IfNotPresent

  # Replicas
  replicaCount: 3

  # Resources
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 200m
      memory: 256Mi

  # Autoscaling
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
    targetCPUUtilizationPercentage: 75

  # Health checks
  livenessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5

  # Ingress
  ingress:
    enabled: true
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/rate-limit: "100"
    hosts:
      - host: verifier.prod.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: verifier-prod-tls
        hosts:
          - verifier.prod.example.com

  # Configuration
  config:
    server:
      port: 8080
    logging:
      level: "INFO"
      jsonLogging: true
      logRequests: true
      pathsToSkip:
        - /health
    verifier:
      did: did:key:z6MkigCEnopwujz8Ten2dzq91nvMjqbKQYcifuZhqBsEkH7g
      sessionExpiry: 30
      generateKey: false
      keyPath: /keys/private-key.pem
      keyAlgorithm: ES256
      supportedModes:
        - byReference
    clientIdentification:
      id: did:key:z6MkigCEnopwujz8Ten2dzq91nvMjqbKQYcifuZhqBsEkH7g
      keyPath: /keys/request-key.pem
      requestKeyAlgorithm: ES256
    configRepo:
      configEndpoint: http://credentials-config-service:8080

  # Secrets
  secrets:
    - name: vcverifier-keys
      mountPath: /keys

  # Monitoring
  serviceMonitor:
    enabled: true
    interval: 30s

  # Security
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
```
Deploy with:
```bash
helm install vcverifier i4trust/vcverifier \
  -f production-values.yaml \
  -n vcverifier-prod \
  --create-namespace
```
Troubleshooting
Check pod status
```bash
kubectl get pods -n vcverifier
kubectl describe pod <pod-name> -n vcverifier
kubectl logs <pod-name> -n vcverifier
```
Debug configuration
View the effective configuration:
```bash
kubectl exec -it <pod-name> -n vcverifier -- cat /app/server.yaml
```
Network issues
Test connectivity from within the pod:
```bash
kubectl exec -it <pod-name> -n vcverifier -- wget -O- http://localhost:8080/health
```
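If the container image ships without `wget` or `curl` (common with minimal images, especially when `readOnlyRootFilesystem` is enabled), an ephemeral debug pod can test connectivity to the service instead; the service name `vcverifier` and the `curlimages/curl` image are assumptions:

```shell
# One-shot pod that queries the verifier's health endpoint, then removes itself.
kubectl run debug-curl --rm -it --restart=Never \
  --image=curlimages/curl -n vcverifier -- \
  curl -fsS http://vcverifier:8080/health
```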
Next steps