## Prerequisites
- Kubernetes cluster 1.24 or later (K3s, EKS, GKE, AKS, or self-managed)
- `kubectl` configured to access your cluster
- PostgreSQL database (can be deployed in-cluster or external)
- (Optional) Redis for distributed caching
- (Optional) Helm 3 for simplified deployment
## Architecture Overview

When using Kubernetes as the container provider:

- GZCTF runs as a Deployment in your cluster
- Challenge containers are deployed as Pods in a dedicated namespace (default: `gzctf-challenges`)
- Network policies control challenge container access
- Kubernetes ServiceAccounts provide RBAC for container management
## Kubernetes Manifests

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gzctf
---
apiVersion: v1
kind: Namespace
metadata:
  name: gzctf-challenges
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gzctf
  namespace: gzctf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gzctf-container-manager
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/status"]
    verbs: ["get", "list", "watch", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gzctf-container-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gzctf-container-manager
subjects:
  - kind: ServiceAccount
    name: gzctf
    namespace: gzctf
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gzctf-config
  namespace: gzctf
data:
  GZCTF_ContainerProvider__Type: "Kubernetes"
  GZCTF_ContainerProvider__PortMappingType: "Default"
  GZCTF_ContainerProvider__KubernetesConfig__Namespace: "gzctf-challenges"
  GZCTF_ContainerProvider__KubernetesConfig__KubeConfig: "incluster"
  # Add DNS servers if needed
  # GZCTF_ContainerProvider__KubernetesConfig__Dns__0: "8.8.8.8"
  # GZCTF_ContainerProvider__KubernetesConfig__Dns__1: "8.8.4.4"
```
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gzctf-secret
  namespace: gzctf
type: Opaque
stringData:
  database-connection: "Host=postgres;Port=5432;Database=gzctf;Username=gzctf;Password=<your-password>"
  # Optional: Redis connection
  redis-connection: "redis:6379"
  # Optional: Storage connection (S3)
  storage-connection: "aws.s3://bucket=gzctf&region=us-east-1&accessKey=<key>&secretKey=<secret>"
```
Replace `<your-password>`, `<key>`, and `<secret>` with actual values. Never commit secrets to version control!

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: gzctf
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: gzctf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          env:
            - name: POSTGRES_DB
              value: gzctf
            - name: POSTGRES_USER
              value: gzctf
            - name: POSTGRES_PASSWORD
              value: <your-password>
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: gzctf
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gzctf
  namespace: gzctf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gzctf
  template:
    metadata:
      labels:
        app: gzctf
    spec:
      serviceAccountName: gzctf
      containers:
        - name: gzctf
          image: gztime/gzctf:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
            - name: metrics
              containerPort: 3000
          env:
            - name: GZCTF_ConnectionStrings__Database
              valueFrom:
                secretKeyRef:
                  name: gzctf-secret
                  key: database-connection
            - name: GZCTF_ConnectionStrings__RedisCache
              valueFrom:
                secretKeyRef:
                  name: gzctf-secret
                  key: redis-connection
                  optional: true
            - name: GZCTF_ConnectionStrings__Storage
              valueFrom:
                secretKeyRef:
                  name: gzctf-secret
                  key: storage-connection
                  optional: true
          envFrom:
            - configMapRef:
                name: gzctf-config
          volumeMounts:
            - name: files
              mountPath: /app/files
            - name: logs
              mountPath: /app/logs
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
      volumes:
        - name: files
          persistentVolumeClaim:
            claimName: gzctf-files-pvc
        - name: logs
          emptyDir: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gzctf-files-pvc
  namespace: gzctf
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: gzctf
  namespace: gzctf
spec:
  selector:
    app: gzctf
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: metrics
      port: 3000
      targetPort: 3000
  type: ClusterIP
```
## Kubernetes Configuration

The `KubernetesClient` package is used for container management (see GZCTF.csproj:39). Configuration options:
### In-Cluster Configuration

When running inside Kubernetes (recommended), set `GZCTF_ContainerProvider__KubernetesConfig__KubeConfig` to `"incluster"`, as in the ConfigMap above. GZCTF then authenticates using the ServiceAccount token automatically mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
### External kubeconfig

For external cluster management (not recommended for production):

1. Create a ConfigMap with your kubeconfig.
2. Mount it in the deployment.
3. Set the kubeconfig path in the GZCTF configuration.
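A sketch of these steps (the ConfigMap name `gzctf-kubeconfig`, the mount path, and the use of a file path as the `KubeConfig` value are illustrative assumptions, not prescribed by GZCTF):

```yaml
# 1. Create a ConfigMap from an existing kubeconfig file:
#    kubectl create configmap gzctf-kubeconfig -n gzctf \
#      --from-file=kubeconfig=$HOME/.kube/config

# 2. + 3. Fragment of the GZCTF pod spec: mount the ConfigMap
#    and point GZCTF at the mounted file instead of "incluster".
spec:
  containers:
    - name: gzctf
      volumeMounts:
        - name: kubeconfig
          mountPath: /app/kubeconfig
          readOnly: true
      env:
        - name: GZCTF_ContainerProvider__KubernetesConfig__KubeConfig
          value: /app/kubeconfig/kubeconfig
  volumes:
    - name: kubeconfig
      configMap:
        name: gzctf-kubeconfig
```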
## Network Configuration
### CIDR Restrictions
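For example, challenge egress can be restricted by CIDR with a NetworkPolicy in the challenge namespace (a sketch; the policy and CIDR ranges are illustrative and must be adapted to your cluster's CNI and address ranges):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-challenge-egress
  namespace: gzctf-challenges
spec:
  podSelector: {}          # apply to all challenge pods
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution via kube-system
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow external traffic, but block cluster-internal ranges
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
```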
Restrict challenge container network access with NetworkPolicies in the challenge namespace.

### Custom DNS
Override DNS servers for challenge containers via the `GZCTF_ContainerProvider__KubernetesConfig__Dns__0`, `__Dns__1`, … keys shown in the ConfigMap above.

## Storage Options
### Using S3-Compatible Storage
For Kubernetes deployments, S3-compatible storage (like MinIO) is recommended; it is configured through the `storage-connection` entry of the Secret above.

### Using Persistent Volumes
For local file storage, ensure the PVC supports the `ReadWriteMany` access mode (e.g., backed by NFS or CephFS) if running multiple replicas.

## Monitoring and Observability
### Prometheus Metrics
GZCTF exposes Prometheus metrics on port 3000. With the Prometheus Operator, scraping can be configured via a `servicemonitor.yaml`:
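A possible ServiceMonitor (a sketch: it assumes the Prometheus Operator CRDs are installed and that the `gzctf` Service carries an `app: gzctf` label your Prometheus instance selects on):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gzctf
  namespace: gzctf
spec:
  selector:
    matchLabels:
      app: gzctf
  endpoints:
    - port: metrics   # named port on the gzctf Service
      interval: 30s
```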
### Health Checks
The `/healthz` endpoint on port 3000 provides health status:
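It can be queried manually through a port-forward (a quick check, not part of the manifests):

```shell
# Forward the metrics/health port locally, then query the endpoint
kubectl port-forward -n gzctf svc/gzctf 3000:3000 &
sleep 2
curl -fsS http://localhost:3000/healthz
```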
## Scaling Considerations
### Horizontal Pod Autoscaling
An HPA can scale the GZCTF Deployment with load (`hpa.yaml`):
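A sketch using the `autoscaling/v2` API (the replica bounds and CPU target are illustrative; scaling beyond one replica requires Redis and `ReadWriteMany` storage, as noted above):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gzctf
  namespace: gzctf
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gzctf
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```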
## Troubleshooting
### Check GZCTF Logs
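For example:

```shell
# Tail the most recent application logs from the gzctf Deployment
kubectl logs -n gzctf deployment/gzctf --tail=100 -f
```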
### Verify RBAC Permissions
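`kubectl auth can-i` can impersonate the ServiceAccount to confirm the ClusterRoleBinding is effective:

```shell
# Both should print "yes" if the ClusterRole is bound correctly
kubectl auth can-i create pods \
  --as=system:serviceaccount:gzctf:gzctf -n gzctf-challenges
kubectl auth can-i create services \
  --as=system:serviceaccount:gzctf:gzctf -n gzctf-challenges
```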
### Debug Challenge Containers
List challenge pods with `kubectl get pods -n gzctf-challenges`.

### Common Issues
**Challenge containers won't start:**

- Verify RBAC permissions are correctly configured
- Check that the `gzctf-challenges` namespace exists
- Ensure the ServiceAccount is bound correctly
- Verify NetworkPolicies aren't blocking required traffic
- Check DNS configuration
- Ensure the ingress controller is properly configured
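The checks above can be run quickly from a shell (commands assume the default names used throughout this guide):

```shell
# Namespace and RBAC binding exist?
kubectl get namespace gzctf-challenges
kubectl get clusterrolebinding gzctf-container-manager -o wide

# Inspect challenge pods and recent events for scheduling/network errors
kubectl describe pods -n gzctf-challenges
kubectl get events -n gzctf-challenges --sort-by=.lastTimestamp
```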
## Next Steps
- Configuration Reference - Detailed configuration options
- Security Considerations - Security best practices
- Docker Deployment - Alternative deployment method