Load Balancing Layers
Layer 4 - Transport
TCP and UDP load balancing for Kubernetes Services
Layer 7 - Application
HTTP/HTTPS load balancing for Ingress and Gateway API resources
Layer 4 Load Balancing
Layer 4 load balancing operates at the transport layer, handling TCP and UDP traffic based on IP addresses and port numbers.
Kubernetes Services
KubeLB automatically handles Kubernetes Services of type LoadBalancer. When you create such a service in a tenant cluster, the KubeLB CCM:
- Detects the new LoadBalancer service
- Collects node addresses and NodePort information
- Creates a LoadBalancer CRD in the management cluster
- KubeLB Manager provisions the load balancer and configures Envoy
- Returns the external IP address to the tenant cluster
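The workflow above starts from an ordinary Service in the tenant cluster. A minimal sketch (names and ports are illustrative):

```yaml
# Created in the tenant cluster; the KubeLB CCM picks it up.
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
  namespace: default
spec:
  type: LoadBalancer   # this type triggers KubeLB reconciliation
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
```

Once the KubeLB Manager has provisioned the load balancer, the external IP appears in the Service's `status.loadBalancer.ingress` field as with any cloud provider.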
Protocol Support
KubeLB supports both TCP and UDP protocols:
- TCP: Web servers, databases, SSH, and most application protocols
- UDP: DNS, VoIP, gaming servers, and streaming protocols
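As an example of a UDP workload, a DNS service could be exposed like this (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: coredns
  ports:
    - name: dns
      port: 53
      targetPort: 53
      protocol: UDP    # UDP endpoints get no Envoy health checks
```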
UDP health checks are not supported by Envoy. TCP health checks are automatically configured for TCP services.
LoadBalancer CRD
The LoadBalancer CRD represents a Layer 4 load balancer configuration in the management cluster:
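A sketch of what such an object might look like; the exact schema is defined by KubeLB, and the API version, namespace layout, and field values below are illustrative assumptions:

```yaml
apiVersion: kubelb.k8c.io/v1alpha1   # assumed API group/version
kind: LoadBalancer
metadata:
  name: default-web                  # created by the KubeLB CCM, not by hand
  namespace: tenant-abc              # assumption: one namespace per tenant
spec:
  type: LoadBalancer
  endpoints:                         # tenant node addresses + NodePorts
    - addresses:
        - ip: 10.0.1.10
        - ip: 10.0.1.11
      ports:
        - port: 31234                # NodePort on the tenant nodes
          protocol: TCP
  ports:
    - port: 80                       # externally exposed port
      protocol: TCP
```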
Traffic Flow (Layer 4)
The traffic path:
- Client connects to the LoadBalancer IP (provided by cloud provider or MetalLB)
- Traffic hits the Envoy proxy in the management cluster
- Envoy routes traffic to tenant cluster nodes via NodePort
- Node’s kube-proxy forwards traffic to backend pods
External Traffic Policy
KubeLB supports both traffic policies:
- Cluster (default): Traffic is distributed across all nodes in the cluster. This may add an extra network hop but ensures even load distribution.
- Local: Traffic is routed only to nodes that run a backend pod, which avoids the extra hop and preserves the client source IP.
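The policy is set on the tenant Service itself, using the standard Kubernetes field (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                      # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # or Cluster (the default)
  selector:
    app: web
  ports:
    - port: 80
      protocol: TCP
```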
Health Checks
KubeLB automatically configures TCP health checks for Layer 4 services. Health checks perform connect-only checks (no payload) to verify endpoint availability.
Layer 7 Load Balancing
Layer 7 load balancing operates at the application layer, making routing decisions based on HTTP headers, paths, hostnames, and other application-level data.
Supported Resources
KubeLB supports multiple Layer 7 resource types:
Ingress
Kubernetes Ingress resources for HTTP/HTTPS routing.
Gateway API - HTTPRoute
Gateway API HTTPRoute for advanced HTTP routing.
Gateway API - GRPCRoute
Gateway API GRPCRoute for gRPC services.
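For example, a tenant could create a standard Gateway API HTTPRoute like the following; names and the parent Gateway reference are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route                # illustrative name
spec:
  parentRefs:
    - name: kubelb               # assumption: the Gateway this route attaches to
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web              # backend Service in the tenant cluster
          port: 80
```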
Route CRD
Layer 7 configurations are represented by the Route CRD in the management cluster:
Traffic Flow (Layer 7)
The Layer 7 traffic path:
- Client makes HTTPS request to Ingress/Gateway LoadBalancer
- Envoy Gateway or Ingress Controller terminates TLS
- Traffic is forwarded to KubeLB Envoy proxy service
- KubeLB Envoy routes to tenant cluster NodePort
- Tenant cluster forwards to backend pods
KubeLB acts as a transparent proxy for Layer 7 traffic. TLS termination is handled by the Ingress Controller or Envoy Gateway in the management cluster.
Protocol Handling
KubeLB automatically configures the appropriate protocol based on route type:
- HTTP/Ingress: HTTP/1.1 for Ingress and HTTPRoute resources
- gRPC: HTTP/2 for GRPCRoute resources (gRPC requires HTTP/2)
For HTTP routes, the Envoy HTTP Connection Manager is configured with:
- Automatic HTTP/1.1 protocol
- Request ID generation
- X-Forwarded-For header handling
- Idle timeout (60s)
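In Envoy terms, the settings above roughly correspond to an HTTP Connection Manager configuration like this sketch (a hand-written illustration, not KubeLB's generated xDS output):

```yaml
# Envoy listener filter sketch; KubeLB generates this via xDS.
name: envoy.filters.network.http_connection_manager
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
  stat_prefix: ingress_http
  codec_type: AUTO               # negotiates HTTP/1.1, or HTTP/2 for gRPC
  generate_request_id: true      # request ID generation
  use_remote_address: true       # governs X-Forwarded-For handling
  common_http_protocol_options:
    idle_timeout: 60s            # the 60s idle timeout listed above
  route_config: {}               # routes omitted for brevity
  http_filters:
    - name: envoy.filters.http.router
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```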
Hostname-Based Load Balancing
KubeLB supports hostname-based exposure for LoadBalancer services:
- Creates a Route (Ingress or HTTPRoute) for the hostname
- Exposes the service over HTTP/HTTPS with TLS
- Can automatically create DNS records (if configured)
- Can provision TLS certificates (if cert-manager is available)
Load Balancing Algorithms
KubeLB uses Envoy’s load balancing algorithms; the default is Round Robin.
Health and Readiness
Endpoint Health
KubeLB configures health checks to ensure traffic is only sent to healthy endpoints:
- TCP Services: Connect-only TCP health checks
- UDP Services: No health checks (Envoy limitation)
- HTTP Services: HTTP health checks can be configured via Envoy Gateway/Ingress Controller
Unhealthy Threshold
Endpoints are marked unhealthy after:
- 3 consecutive failed health checks
- 5-second interval between checks
- 5-second timeout per check
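In Envoy's cluster configuration, these thresholds map onto a health check block along these lines (illustrative sketch; the healthy threshold is an assumption):

```yaml
# Envoy cluster health check sketch matching the thresholds above.
health_checks:
  - timeout: 5s              # 5-second timeout per check
    interval: 5s             # 5-second interval between checks
    unhealthy_threshold: 3   # unhealthy after 3 consecutive failures
    healthy_threshold: 1     # assumption: one success marks it healthy again
    tcp_health_check: {}     # connect-only check, no payload
```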
Panic Threshold
KubeLB sets the panic threshold to 0%, meaning Envoy will continue routing to all endpoints even if all are marked unhealthy.
Advanced Features
Proxy Protocol
KubeLB supports PROXY protocol v2 for preserving client IP addresses.
Connection Management
KubeLB configures connection limits to prevent resource exhaustion:
- Max concurrent streams: 1,000,000 for gRPC connections
- Buffer limits: 32KB per connection
- Circuit breakers: Configurable connection and request limits
- TCP keepalive: Enabled for xDS connections
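In Envoy's cluster configuration, the limits above correspond to settings along these lines; the values are taken from the list above, while the circuit breaker numbers are placeholders since the actual limits are configurable:

```yaml
# Envoy cluster sketch for the connection limits above.
per_connection_buffer_limit_bytes: 32768   # 32KB buffer limit per connection
typed_extension_protocol_options:
  envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
    "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
    explicit_http_config:
      http2_protocol_options:
        max_concurrent_streams: 1000000    # for gRPC connections
circuit_breakers:
  thresholds:
    - max_connections: 1024                # placeholder: limits are configurable
      max_requests: 1024                   # placeholder
```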
Performance Considerations
Endpoint Count
KubeLB can handle thousands of endpoints across multiple clusters. The xDS server efficiently pushes updates only when configurations change.
Connection Pooling
Envoy maintains connection pools to backend services, reducing latency and improving throughput.
HTTP/2 for xDS
The xDS control plane uses HTTP/2 for efficient, multiplexed configuration updates to Envoy proxies.
Incremental xDS
Envoy receives only the configuration changes that affect it, not the entire configuration every time.
Next Steps
Envoy Topology
Learn about Envoy proxy deployment topologies
Multi-Tenancy
Understand tenant isolation and configuration
Layer 4 Guide
Configure Layer 4 load balancing
Layer 7 Guide
Set up Ingress and Gateway API
