Overview
The Kubernetes Exchange Cluster is a production-grade, cloud-native infrastructure designed for running modern exchange platforms on Google Kubernetes Engine (GKE). This architecture provides security, scalability, and resilience for high-frequency trading platforms, crypto exchanges, and real-time financial applications.
This architecture is battle-tested for exchange applications requiring high availability, secure secret management, and automatic scaling under load.
Core Components
NGINX Ingress Controller
The NGINX Ingress Controller serves as the primary entry point for all external traffic into the cluster.
Key Features:
HTTPS termination with automatic certificate management
Path-based routing to multiple backend services
URL rewriting via the rewrite-target annotation
Integration with cert-manager for TLS certificates
Installation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.2/deploy/static/provider/cloud/deploy.yaml
Configuration Example:
The ingress configuration routes traffic to backend services with path-based routing:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - exchange.jogeshwar.xyz
      secretName: exchange-tls
  rules:
    - host: exchange.jogeshwar.xyz
      http:
        paths:
          - path: /backend(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: exchange-router-service
                port:
                  number: 80
          - path: /ws
            pathType: ImplementationSpecific
            backend:
              service:
                name: exchange-ws-stream-service
                port:
                  number: 80
The rewrite-target annotation /$2 strips the /backend prefix before forwarding requests to the backend service. The regex pattern /backend(/|$)(.*) captures the remaining path in $2.
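The effect of this rewrite can be sketched locally with sed using the same regex (the request path /backend/api/depth is a hypothetical example, not an endpoint confirmed by this document):

```shell
# Simulate NGINX's rewrite-target behavior with the same capture groups:
# /backend(/|$)(.*) matched against the path, rewritten to /$2.
echo '/backend/api/depth' | sed -E 's#^/backend(/|$)(.*)#/\2#'
# prints /api/depth
```

Group 1 absorbs the slash after the prefix, so only the remainder of the path (group 2) is forwarded to the backend.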
Load Balancers
GKE automatically provisions Cloud Load Balancers when services of type LoadBalancer are created. The NGINX Ingress Controller uses a LoadBalancer service to expose itself externally.
Benefits:
High availability across multiple zones
Automatic health checking
Seamless scaling as traffic increases
Integration with Google Cloud CDN
Sealed Secrets
Sealed Secrets by Bitnami enable GitOps-friendly secret management by encrypting secrets that can be safely stored in Git repositories.
How It Works:
Secrets are encrypted using the cluster’s public key
Encrypted secrets can be committed to Git
The sealed-secrets controller decrypts them in-cluster
Standard Kubernetes Secrets are created automatically
Installation:
# Install the Helm chart
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets -n kube-system \
--set-string fullnameOverride=sealed-secrets-controller \
sealed-secrets/sealed-secrets
# Install kubeseal CLI
KUBESEAL_VERSION='0.29.0'
curl -OL "https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-amd64.tar.gz"
tar -xvzf "kubeseal-${KUBESEAL_VERSION}-linux-amd64.tar.gz" kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
Creating Sealed Secrets:
# Create a sealed secret from a regular secret
kubeseal --format yaml < sample-secret.yml > sealed_secret.yml
# Apply the sealed secret
kubectl apply -f sealed_secret.yml
# Verify the secret was created
kubectl get secret exchange-router-secret -o yaml
Sealed secrets are namespace and name-specific. The decrypted Secret must have the same name and namespace as the SealedSecret.
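For reference, the sample-secret.yml piped into kubeseal above is an ordinary Kubernetes Secret. A minimal sketch, with placeholder values (everything under stringData here is hypothetical, not the real configuration; the postgres and redis service names are also assumptions):

```yaml
# Hypothetical input for kubeseal; values are placeholders, not real config
apiVersion: v1
kind: Secret
metadata:
  name: exchange-router-secret
  namespace: default
stringData:
  server_addr: "0.0.0.0:8080"
  database_url: "postgres://exchange:CHANGE_ME@exchange-postgres-service:5432/exchange"
  redis_url: "redis://exchange-redis-service:6379"
```

Because the sealing is scoped to name and namespace, both fields must be set before running kubeseal.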
cert-manager
cert-manager automates certificate management and renewal using Let’s Encrypt or other ACME providers.
Installation:
Install cert-manager using Helm:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
ClusterIssuer Configuration:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
Certificate Management:
Certificates are automatically issued when referenced in Ingress annotations:
annotations:
  cert-manager.io/cluster-issuer: letsencrypt-prod
Verification Commands:
# Check ClusterIssuer status
kubectl get clusterissuer
kubectl describe clusterissuer letsencrypt-prod
# Check certificate
kubectl get certificate
kubectl describe certificate exchange-cert
# Inspect the TLS secret
kubectl get secret exchange-tls -o yaml
Persistent Volume Claims (PVCs)
PVCs provide persistent storage for stateful components like databases, ensuring data survives pod restarts.
PostgreSQL PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard-rwo
  volumeMode: Filesystem
Redis PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard-rwo
  volumeMode: Filesystem
The standard-rwo storage class on GKE provides persistent disks with ReadWriteOnce access mode, suitable for single-pod stateful workloads.
Horizontal Pod Autoscaling (HPA)
HPA automatically scales pods based on observed CPU utilization or custom metrics.
Exchange Router HPA Configuration:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: exchange-router-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: exchange-router-deployment
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 95
How It Works:
Monitors CPU utilization across pods
Scales up when average CPU exceeds 95%
Scales down when CPU usage drops
Maintains between 1 and 2 replicas
For production environments, consider increasing maxReplicas and using custom metrics (e.g., request rate, queue depth) in addition to CPU.
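The scaling decision itself follows the standard HPA rule, desiredReplicas = ceil(currentReplicas x currentUtilization / targetUtilization), clamped to the min/max bounds. A quick sketch in shell arithmetic (the 190% utilization sample is illustrative):

```shell
# HPA rule: desired = ceil(current * utilization / target), clamped to [min, max]
current=1
utilization=190   # hypothetical observed average CPU utilization (%)
target=95
min=1
max=2

desired=$(( (current * utilization + target - 1) / target ))  # ceiling via integer math
if [ "$desired" -lt "$min" ]; then desired=$min; fi
if [ "$desired" -gt "$max" ]; then desired=$max; fi
echo "$desired"
# prints 2
```

At double the target utilization, a single replica scales out to two; with maxReplicas at 2, the autoscaler can go no further, which is why raising that bound matters for production.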
Monitoring with Prometheus & Grafana
The cluster includes comprehensive monitoring with the kube-prometheus-stack for observability.
Installation:
# Create monitoring namespace
kubectl create namespace monitoring
# Add Prometheus community Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install kube-prometheus-stack
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
Accessing Grafana:
# Get admin password
kubectl --namespace monitoring get secrets prometheus-grafana \
-o jsonpath="{.data.admin-password}" | base64 -d ; echo
# Port forward to access UI
export POD_NAME=$(kubectl --namespace monitoring get pod \
  -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=prometheus" -o name)
kubectl --namespace monitoring port-forward $POD_NAME 3000
Accessing Prometheus:
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090 -n monitoring
Monitoring Capabilities:
Pod & Node Monitoring (CPU, memory, network, disk)
Service Health (uptime, response times, error rates)
Custom Alerting with Prometheus Alertmanager
Pre-built Grafana dashboards for workloads and system components
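If the exchange services expose Prometheus metrics, they can be scraped through a ServiceMonitor. The sketch below is hypothetical: the source does not confirm that the router exposes a /metrics endpoint, and the port name and namespace are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: exchange-router-metrics     # hypothetical resource, not part of the deployed config
  namespace: monitoring
  labels:
    release: prometheus             # kube-prometheus-stack discovers monitors by release label
spec:
  selector:
    matchLabels:
      app: exchange-router          # assumes the Service carries the Deployment's label
  namespaceSelector:
    matchNames:
      - default                     # assumes workloads run in the default namespace
  endpoints:
    - port: http                    # assumes a named Service port serving /metrics
      path: /metrics
```

By default the chart only discovers ServiceMonitors labeled with its Helm release name, hence the release: prometheus label above.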
Application Components
The exchange application consists of multiple microservices:
Backend Router
Purpose: Main API gateway handling HTTP requests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exchange-router-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exchange-router
  template:
    metadata:
      labels:
        app: exchange-router
    spec:
      containers:
        - name: exchange-router
          image: jogeshwar01/exchange-router:ed9f044dc79ee713da9518648524e0c68a70ddf7
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "300m"
            limits:
              cpu: "2000m"
          env:
            - name: SERVER_ADDR
              valueFrom:
                secretKeyRef:
                  name: exchange-router-secret
                  key: server_addr
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: exchange-router-secret
                  key: database_url
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: exchange-router-secret
                  key: redis_url
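The Ingress routes /backend traffic to exchange-router-service, which is not listed in this section. A minimal ClusterIP Service sketch that would sit in front of this Deployment (the Service itself is inferred, though the name, the backend port 80, and containerPort 8080 all come from the manifests above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: exchange-router-service
spec:
  type: ClusterIP
  selector:
    app: exchange-router    # matches the Deployment's pod labels
  ports:
    - port: 80              # the port referenced by the Ingress backend
      targetPort: 8080      # the router's containerPort
```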
WebSocket Stream
Purpose: Real-time WebSocket connections for market data
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exchange-ws-stream-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exchange-ws-stream
  template:
    metadata:
      labels:
        app: exchange-ws-stream
    spec:
      containers:
        - name: exchange-ws-stream
          image: jogeshwar01/exchange-ws-stream:ed9f044dc79ee713da9518648524e0c68a70ddf7
          ports:
            - containerPort: 4000
Trading Engine
Purpose: Core order matching engine
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exchange-engine-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exchange-engine
  template:
    metadata:
      labels:
        app: exchange-engine
    spec:
      containers:
        - name: exchange-engine
          image: jogeshwar01/exchange-engine:ed9f044dc79ee713da9518648524e0c68a70ddf7
DB Processor
Purpose: Asynchronous database operations and trade settlement
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exchange-db-processor-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exchange-db-processor
  template:
    metadata:
      labels:
        app: exchange-db-processor
    spec:
      containers:
        - name: exchange-db-processor
          image: jogeshwar01/exchange-db-processor:ed9f044dc79ee713da9518648524e0c68a70ddf7
PostgreSQL Database
Purpose: Primary data store for orders, users, and trade history
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exchange-postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exchange-postgres
  template:
    metadata:
      labels:
        app: exchange-postgres
    spec:
      containers:
        - name: exchange-postgres
          image: postgres:12.2
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              subPath: postgres-data
              name: postgres-storage
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
Redis Cache
Purpose: In-memory data store for order books and session management
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exchange-redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exchange-redis
  template:
    metadata:
      labels:
        app: exchange-redis
    spec:
      containers:
        - name: exchange-redis
          image: redis:6.2-alpine
          ports:
            - containerPort: 6379
          volumeMounts:
            - mountPath: /data
              subPath: redis-data
              name: redis-storage
      volumes:
        - name: redis-storage
          persistentVolumeClaim:
            claimName: redis-pvc
Component Interactions
Request Flow
External Request
User sends HTTPS request to exchange.jogeshwar.xyz
Load Balancer
GKE Load Balancer receives request and forwards to NGINX Ingress
TLS Termination
NGINX Ingress terminates TLS using certificate from cert-manager
Path Routing
Based on path:
/backend/* routes to exchange-router-service
/ws routes to exchange-ws-stream-service
Service Processing
Backend service processes request, accessing:
PostgreSQL for persistent data
Redis for cached data and pub/sub
Response
Response flows back through NGINX to client
Trading Flow
Order Submission
Client submits order via Backend Router
Order Validation
Backend Router validates and publishes to Redis
Order Matching
Trading Engine consumes from Redis and matches orders
Trade Execution
Matched trades published back to Redis
Database Persistence
DB Processor consumes trades and persists to PostgreSQL
Real-time Updates
WebSocket Stream publishes updates to connected clients
Security Best Practices
Secret Management
All secrets encrypted with Sealed Secrets
Secrets never stored in plain text in Git
Sealing keys rotated automatically by the controller; secrets can be re-sealed with kubeseal
Network Security
All external traffic uses HTTPS with valid certificates
Internal service-to-service communication via ClusterIP
Network policies can be added for additional isolation
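As an example of that additional isolation, a NetworkPolicy could restrict PostgreSQL to the pods that actually need it. The pod labels below are taken from the Deployments in this document, but the policy itself is illustrative, not part of the deployed configuration, and it assumes all workloads share one namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-allow-app-only   # illustrative, not part of the deployed config
spec:
  podSelector:
    matchLabels:
      app: exchange-postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: exchange-router
        - podSelector:
            matchLabels:
              app: exchange-db-processor
      ports:
        - protocol: TCP
          port: 5432
```

Note that NetworkPolicies only take effect when the cluster runs a network plugin that enforces them (on GKE, Dataplane V2 or Calico network policy enforcement).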
Access Control
GKE IAM integration for cluster access
RBAC for fine-grained permissions
Service accounts with minimal privileges
Data Protection
Persistent volumes for stateful data
Regular backups recommended for PostgreSQL
Data encrypted at rest on GKE persistent disks
Scalability Features
Horizontal Scaling
HPA for automatic pod scaling based on metrics
Stateless services can scale independently
Load balancing across multiple replicas
Vertical Scaling
Resource requests and limits defined per container
GKE node auto-scaling adjusts cluster capacity
Node pools for different workload types
Storage Scaling
PVCs can be expanded without downtime
Multiple storage classes available (SSD, standard)
Regional persistent disks for high availability
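Expanding a PVC, when the storage class allows volume expansion as GKE's standard-rwo does, only requires raising spec.resources.requests.storage and re-applying the manifest. For example, growing postgres-pvc from 5Gi to 10Gi (the target size is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # raised from 5Gi; apply with kubectl apply -f
  storageClassName: standard-rwo
  volumeMode: Filesystem
```

Shrinking is not supported; requests can only grow.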
Exchange Application Repository
The exchange application source code is available at:
Repository: jogeshwar01/exchange
This repository contains:
Backend router (API gateway)
WebSocket stream server
Trading engine
Database processor
Frontend application
Local development setup
Next Steps
Quickstart Guide Deploy the cluster step-by-step
Component Configuration Customize individual components