The GovTech platform implements Zero Trust Network Architecture at the pod level using Kubernetes NetworkPolicies:
Default Kubernetes Behavior: By default, ALL pods in a Kubernetes cluster can communicate with each other without restriction. This is a significant security risk: if one pod is compromised, an attacker can pivot to any other pod in the cluster.

Our Approach: Deny all traffic by default, then explicitly allow only the necessary communication paths.
The most important policy: deny all traffic by default.
network-policies.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: govtech
  labels:
    app: govtech
    security: network-policy
spec:
  podSelector: {}   # Applies to ALL pods in the namespace
  policyTypes:
    - Ingress       # Block incoming traffic to all pods
    - Egress        # Block outgoing traffic from all pods
```
This policy blocks all traffic to and from all pods in the govtech namespace. Subsequent policies create exceptions by explicitly allowing specific traffic.
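As an illustration of layering exceptions on top of the deny-all baseline, a policy like the following would allow only frontend pods to reach backend pods on port 3000. This is a sketch: the policy name and the `app: frontend` / `app: backend` labels are assumptions based on the services used later in this guide.

```yaml
# Hypothetical exception policy layered on top of default-deny-all.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: govtech
spec:
  podSelector:
    matchLabels:
      app: backend            # Rule applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # Only frontend pods may connect
      ports:
        - protocol: TCP
          port: 3000          # Backend service port
```

Because NetworkPolicies are additive, this ingress exception combines with the deny-all policy: traffic not matched by any allow rule remains blocked.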
Allowed:

- HTTP requests from the Ingress Controller on port 3000 (for /api/* direct routing)

Denied:

- Direct access from the database
- Direct access from the internet
- Access from other namespaces

Allowed:

- PostgreSQL connections to database pods on port 5432
- HTTPS (port 443) to AWS services (public IPs only)
- DNS queries

Denied:

- HTTP (port 80) to the internet (only HTTPS allowed)
- Connections to private IP ranges (except DNS)
- Direct communication with the frontend
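The egress rules listed above could be expressed in a single policy along these lines. This is a sketch: the policy name matches the `backend-network-policy` referenced in the troubleshooting section below, but the pod labels and the kube-dns selectors are assumptions and should be verified against your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-network-policy
  namespace: govtech
spec:
  podSelector:
    matchLabels:
      app: backend    # Assumed backend pod label
  policyTypes:
    - Egress
  egress:
    # PostgreSQL connections to database pods on port 5432
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    # HTTPS (port 443) to AWS services; excluding the private
    # ranges restricts this to public IPs only
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
    # DNS queries to the cluster DNS service
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Note that HTTP on port 80 and connections to private IP ranges are denied simply by not appearing in any egress rule.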
AWS API Access: The backend needs to access AWS Secrets Manager and potentially S3. This requires HTTPS (port 443) egress to public AWS endpoints; traffic routes through the NAT Gateway in the VPC. Use IRSA (IAM Roles for Service Accounts) so pods don't need hardcoded AWS credentials.
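With IRSA, the IAM role is attached via an annotation on the pod's ServiceAccount rather than via credentials baked into the pod. A minimal sketch, assuming an EKS cluster with the IAM OIDC provider configured; the account ID, role name, and ServiceAccount name are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend
  namespace: govtech
  annotations:
    # Placeholder ARN - replace with the role granted access
    # to Secrets Manager / S3 for the backend
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/govtech-backend
```

Pods that reference this ServiceAccount receive short-lived AWS credentials via STS, so no access keys appear in the pod spec or environment.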
1. Test Frontend → Backend (Should Succeed)

```bash
# Get frontend pod name
FRONTEND_POD=$(kubectl get pods -n govtech -l app=frontend -o jsonpath='{.items[0].metadata.name}')

# Test connection to backend service
kubectl exec -it $FRONTEND_POD -n govtech -- curl -s http://backend-service:3000/health

# Expected: {"status":"healthy"}
```
2. Test Frontend → Database (Should Fail)
```bash
# Frontend should NOT be able to access database directly
kubectl exec -it $FRONTEND_POD -n govtech -- curl -s --connect-timeout 5 postgres-service:5432

# Expected: Connection timeout or connection refused
```
This should fail because the NetworkPolicy blocks frontend→database communication.
3. Test Backend → Database (Should Succeed)
```bash
# Get backend pod name
BACKEND_POD=$(kubectl get pods -n govtech -l app=backend -o jsonpath='{.items[0].metadata.name}')

# Test PostgreSQL connection
kubectl exec -it $BACKEND_POD -n govtech -- psql -h postgres-service -U govtech_admin -d govtech -c "SELECT version();"

# Expected: PostgreSQL version information
```
4. Test Database → Internet (Should Fail)
```bash
# Database should NOT be able to reach internet
DB_POD=$(kubectl get pods -n govtech -l app=postgres -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $DB_POD -n govtech -- curl -s --connect-timeout 5 https://www.google.com

# Expected: Connection timeout
```
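The database behavior verified in these tests follows from a policy that admits only backend ingress on port 5432 and defines no egress rules at all. A sketch, with the pod labels assumed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-network-policy
  namespace: govtech
spec:
  podSelector:
    matchLabels:
      app: postgres   # Assumed database pod label
  policyTypes:
    - Ingress
    - Egress          # Listed with no egress rules below:
                      # ALL outbound traffic is denied
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432
```

Because `Egress` appears in `policyTypes` with no matching rules, the database pod cannot initiate any outbound connection, which is why the internet test above times out.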
Symptom: Services that worked before now show connection timeouts.

Diagnosis:
```bash
# Check if NetworkPolicies exist
kubectl get networkpolicies -n govtech

# Verify pod labels match policy selectors
kubectl get pods -n govtech --show-labels
```
Common Causes:

- Pod labels don't match the policy podSelector
- Missing DNS egress rule (pods can't resolve service names)
- Wrong port number in the policy
Solution:
```bash
# Ensure pods have correct labels
kubectl label pod <pod-name> -n govtech app=backend --overwrite

# Verify DNS egress is allowed
kubectl describe networkpolicy backend-network-policy -n govtech | grep -A5 "Egress"
```
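If DNS egress turns out to be missing, a rule like the following can be added to the affected policy's `egress` list. The kube-dns namespace and pod selectors shown are the usual defaults, but verify them in your cluster:

```yaml
egress:
  # Allow DNS lookups against the cluster DNS service
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        podSelector:
          matchLabels:
            k8s-app: kube-dns
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```

Without this rule, pods cannot resolve service names like `backend-service`, and every connection by hostname appears to hang.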
Backend Cannot Access AWS APIs
Symptom: Backend logs show connection timeouts when calling AWS Secrets Manager or S3.

Diagnosis:
```bash
# Check if HTTPS egress is allowed
kubectl exec -it <backend-pod> -n govtech -- curl -v https://secretsmanager.us-east-1.amazonaws.com
```
Solution: allow HTTPS egress to the AWS service IP range:

```yaml
egress:
  # Allow only Secrets Manager in us-east-1
  - to:
      - ipBlock:
          # IP range for secretsmanager.us-east-1.amazonaws.com
          cidr: 52.46.128.0/19
    ports:
      - protocol: TCP
        port: 443
```

Use nslookup secretsmanager.us-east-1.amazonaws.com to find the current IP range. AWS service IPs can change, so use broad CIDR blocks or allow all AWS IP ranges.

NetworkPolicies Not Enforced

Symptom: Policies exist but all pods can still communicate freely.

Cause: The CNI (Container Network Interface) plugin doesn't support NetworkPolicies.

Diagnosis: Check which CNI plugin the cluster is running. Flannel, for example, does not enforce NetworkPolicies, while Calico and Cilium do.