What is KubeLB?

KubeLB is a Kubernetes-native tool by Kubermatic that provides centralized management of Layer 4 and Layer 7 load balancing configurations for Kubernetes clusters across multi-cloud and on-premise environments.

The Problem

Kubernetes doesn’t provide a built-in implementation for load balancers. Instead, it relies on cloud provider implementations to provision and manage load balancers. This creates significant challenges:

Bare-Metal Limitations

Services of type LoadBalancer never receive an IP address without cloud provider support or additional tools

Multi-Cluster Complexity

Solutions like MetalLB and Cilium work well for single clusters but require separate configuration per cluster

IP Management Challenges

Managing IP addresses across multiple clusters becomes complex without centralized control

L7 Tooling Overhead

Application load balancing requires additional tools in each cluster (nginx-ingress, Envoy Gateway) plus separate DNS, TLS, and WAF management

The KubeLB Solution

KubeLB addresses these challenges with a centralized solution that manages the load balancing data plane for multiple Kubernetes clusters. This enables you to:
  • Manage a fleet of Kubernetes clusters from a single control point
  • Ensure security compliance and enforce policies consistently
  • Provide a unified experience for developers across all clusters
  • Simplify IP address management across your infrastructure

Key Features

Centralized Load Balancer Management

Manage both Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS) load balancing from a single management cluster.

Hub-and-Spoke Architecture

The “Management Cluster” acts as the hub, while “Tenant Clusters” act as spokes. Information flows from tenant clusters to the management cluster, which then configures and deploys load balancers.

Multi-Tenant Isolation

Tenants have no access to native Kubernetes resources in the management cluster. They interact only via KubeLB CRDs, ensuring controlled operations and security isolation.
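For illustration, a tenant's load balancer configuration surfaces in the management cluster as a KubeLB custom resource rather than a native Service. The sketch below shows roughly what such an object might look like; the field names and API version are illustrative assumptions, not an exact schema, so consult the KubeLB CRD reference for the authoritative shape:

```yaml
# Hypothetical sketch of a KubeLB LoadBalancer object in the management
# cluster; field names and values are illustrative only.
apiVersion: kubelb.k8c.io/v1alpha1
kind: LoadBalancer
metadata:
  name: my-app
  namespace: tenant-alpha      # each tenant is confined to its own namespace
spec:
  endpoints:                   # node addresses and node ports reported by the tenant cluster
    - addresses:
        - ip: 10.0.1.15        # illustrative tenant node IP
      ports:
        - port: 31234          # NodePort on the tenant cluster
          protocol: TCP
  ports:
    - port: 80                 # port exposed by the load balancer
      protocol: TCP
```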

Envoy-Powered Traffic Routing

KubeLB uses Envoy Proxy to route traffic to the appropriate endpoints (node ports on tenant cluster nodes), with flexible deployment topologies:
Topology Options:
  • Shared (default): Single Envoy proxy per tenant cluster
  • Global: Single Envoy proxy for all tenant clusters
  • Dedicated (deprecated in v1.1.0): One Envoy proxy per load balancer service
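The topology is selected when installing the KubeLB manager. A minimal Helm values sketch is shown below; the exact key path is an assumption and may differ between chart versions, so check the chart's values reference before use:

```yaml
# Hedged sketch of KubeLB manager Helm values selecting the Envoy
# deployment topology; key names are assumed, verify against the chart.
kubelb:
  envoyProxy:
    topology: shared   # one of: shared (default), global, dedicated
    replicas: 3        # illustrative: multiple Envoy replicas for availability
```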

Cloud Provider Compatibility

Works with any cloud provider’s LoadBalancer implementation or self-managed solutions like MetalLB.

Gateway API Support

Full support for Kubernetes Gateway API resources (Gateway, HTTPRoute, GRPCRoute) alongside traditional Ingress resources.
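In practice this means standard Gateway API resources created in a tenant cluster work unchanged. For example, a plain HTTPRoute (resource names here are illustrative) is picked up by the KubeLB CCM and propagated to the management cluster for Envoy to serve:

```yaml
# Standard Gateway API HTTPRoute, created in the tenant cluster as usual.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: my-gateway          # an existing Gateway in the tenant cluster
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web-service     # ordinary ClusterIP Service backing the app
          port: 8080
```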

Use Cases

Bare-Metal Kubernetes

Provide LoadBalancer services to on-premise clusters without cloud provider support

Multi-Cloud Management

Unify load balancer management across AWS, Azure, GCP, and on-premise environments

Edge Computing

Centrally manage load balancing for distributed edge Kubernetes clusters

Cost Optimization

Reduce infrastructure costs by sharing load balancing resources across multiple clusters

How It Works

1. Deploy KubeLB Manager: Install the KubeLB Manager in your management cluster; it hosts the Envoy xDS server and the load balancer configuration.
2. Deploy KubeLB CCM: Install the KubeLB CCM (Cloud Controller Manager) in each tenant cluster that needs load balancing services.
3. Create LoadBalancer Services: Create services of type LoadBalancer in your tenant clusters as you normally would.
4. Automatic Propagation: The CCM watches services, nodes, and routing resources, then propagates their configuration to the management cluster.
5. Traffic Routing: The management cluster configures Envoy Proxy to route traffic through node ports to your tenant cluster backends.

Traffic Flow

Layer 4 (LoadBalancer Services)

Client → LoadBalancer Service → KubeLB Envoy Proxy → Tenant Node:NodePort → Pod
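The flow starts from an ordinary Service of type LoadBalancer in the tenant cluster; nothing KubeLB-specific is required in the manifest (names and ports below are illustrative):

```yaml
# Plain Kubernetes Service; the KubeLB CCM acts as the load balancer
# provider in place of a cloud controller.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # picked up by the KubeLB CCM
  selector:
    app: web
  ports:
    - port: 80         # external port exposed by the load balancer
      targetPort: 8080 # container port on the backing pods
      protocol: TCP
```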

Layer 7 (Ingress/Gateway API)

Client → Envoy Gateway LB → KubeLB Envoy Proxy (xDS) → Tenant Node:NodePort → Pod

KubeLB requires network access from the management cluster to tenant cluster nodes within the NodePort range (default: 30000-32767).

Next Steps

Architecture

Understand the hub-and-spoke model and key components

Quick Start

Get started with a basic KubeLB installation

Build docs developers (and LLMs) love