A virtual cluster is a fully functional Kubernetes cluster that runs inside a namespace of a host Kubernetes cluster (or standalone on bare metal/VMs). It provides complete API isolation with its own API server, control plane, and data store, while sharing underlying infrastructure in various ways depending on the deployment architecture.
What Makes a Cluster “Virtual”?
A virtual cluster appears to be a complete, independent Kubernetes cluster from the user’s perspective:
Complete Kubernetes API server
Own set of nodes (real or virtual)
Full RBAC and authentication
Independent CRD installation
Isolated network policies and storage
However, the actual workload execution and infrastructure can be shared with a host cluster or run on dedicated infrastructure.
Core Architecture
Control Plane Components
Every vCluster includes these essential components:
┌─────────────────────────────────────────────┐
│        Virtual Cluster Control Plane        │
├─────────────────────────────────────────────┤
│                                             │
│   ┌──────────────┐      ┌───────────────┐   │
│   │  API Server  │◄────►│  Data Store   │   │
│   └──────┬───────┘      │  (etcd/SQL)   │   │
│          │              └───────────────┘   │
│          │       ┌───────────────┐          │
│          └──────►│    Syncer     │          │
│                  └───────┬───────┘          │
│                          │                  │
└──────────────────────────┼──────────────────┘
                           │
                           ▼
                 Infrastructure Layer
          (Shared/Dedicated/Private/None)
API Server
The API server is the heart of the virtual cluster:
Full Kubernetes API: Supports all Kubernetes APIs (v1.19+)
Authentication: Manages its own users, service accounts, and tokens
Authorization: Independent RBAC policies
Admission Control: Can run webhooks and policy engines
API Extensions: Supports CRDs and API aggregation
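As a sketch of this independence, RBAC objects created inside the virtual cluster live only in its own data store and never appear on the host. The role name below is illustrative:

```yaml
# Applied inside the virtual cluster; invisible to the host cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader   # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```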
Shared/Dedicated Nodes
Full Kubernetes Distro
controlPlane:
  distro:
    k8s:
      enabled: false # Uses lightweight K3s by default
Data Store
The data store persists the virtual cluster’s state. Multiple options are available:
Embedded database (SQLite, default):

controlPlane:
  backingStore:
    database:
      embedded:
        enabled: true
Pros:
Zero configuration
No external dependencies
Smallest resource footprint
Fast for single-replica setups
Cons:
Single replica only (no HA)
Not suitable for production at scale
Embedded etcd:

controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true
  statefulSet:
    highAvailability:
      replicas: 3 # For HA
Pros:
High availability support
Native Kubernetes backing store
Better performance at scale
Cons:
Higher resource usage
More complex setup
External database:

controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        dataSource: "postgres://user:pass@host:5432/vcluster"
Pros:
Managed database services (RDS, Cloud SQL)
Simplified backup and recovery
Separate lifecycle from vCluster
Cons:
External dependency
Network latency
Additional cost
Deployed etcd:

controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true
        statefulSet:
          highAvailability:
            replicas: 3
Pros:
Full control over etcd cluster
Can be shared across vClusters
Better for large-scale deployments
Cons:
Most complex setup
Requires etcd expertise
Syncer
The syncer is vCluster’s unique component that bridges the virtual and physical worlds:
Key Responsibilities:
Resource Translation: Converts virtual resources to physical ones
Virtual: default/nginx
Physical: nginx-x-default-x-my-vcluster (in namespace vcluster-my-vcluster)
Bidirectional Sync: Watches both virtual and physical resources
Virtual → Physical: Workload resources (pods, services, PVCs)
Physical → Virtual: Infrastructure resources (nodes, storage classes)
Status Updates: Syncs status from physical back to virtual resources
Resource Mapping: Maintains mappings between virtual and physical resources
The syncer implementation varies significantly between architectures. In standalone mode, it manages resources directly instead of syncing to a host cluster.
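The name-translation step can be sketched as a pure function. The `-x-` separator mirrors the examples above; this is an illustration of the scheme, not vCluster's actual implementation:

```go
package main

import "fmt"

// translateName sketches how a syncer maps a virtual resource name to a
// unique physical name in the host namespace, encoding the virtual
// namespace and the vCluster name so that resources from different
// virtual namespaces cannot collide.
func translateName(virtualName, virtualNamespace, vclusterName string) string {
	return fmt.Sprintf("%s-x-%s-x-%s", virtualName, virtualNamespace, vclusterName)
}

func main() {
	fmt.Println(translateName("nginx-abc123", "default", "my-vcluster"))
	// nginx-abc123-x-default-x-my-vcluster
}
```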
Node Representation
vCluster handles nodes differently based on the architecture:
Fake Nodes (Shared)
Synced Nodes (Dedicated)
Private Nodes
Standalone Nodes
Virtual clusters show fake nodes that don't correspond to real infrastructure:

// pkg/controllers/resources/nodes/fake_syncer.go
// Creates pseudo nodes for the virtual cluster
FakeNodesVersion = "v1.19.1"
Characteristics:
Created automatically based on pod scheduling
Show aggregated resources from host cluster
Cannot be directly manipulated by users
No kubelet endpoint (unless proxy enabled)
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# fake-node-1 Ready <none> 10m v1.19.1
# fake-node-2 Ready <none> 10m v1.19.1
Real host nodes are synced to the virtual cluster:

sync:
  fromHost:
    nodes:
      enabled: true
      selector:
        labels:
          tenant: my-team
Characteristics:
Real host nodes visible in virtual cluster
Can be labeled and tainted in virtual cluster
Scheduling decisions respect node selectors
Kubelet metrics available via proxy
External nodes join the virtual cluster directly:

privateNodes:
  enabled: true
sync:
  fromHost:
    nodes:
      enabled: false # No host nodes
Characteristics:
Real nodes with full kubelet
Own CNI and CSI drivers
Complete node lifecycle management
No host cluster involvement
Nodes join the vCluster control plane directly:

controlPlane:
  standalone:
    enabled: true
    joinNode:
      enabled: true
Characteristics:
First-class cluster nodes
Control plane can also be a node
Standard Kubernetes node management
No virtual/physical distinction
Resource Lifecycle
Understanding how resources flow through vCluster:
Resource Creation
User creates a deployment in the virtual cluster:

kubectl create deployment nginx --image=nginx
The API server validates, admits, and stores the deployment in the data store.
Controller Processing
The deployment controller (running in virtual cluster) creates a ReplicaSet, which creates a Pod.
Syncer Translation
The syncer watches the pod creation and translates it:

# Virtual Cluster
apiVersion: v1
kind: Pod
metadata:
  name: nginx-abc123
  namespace: default

# Physical Cluster (if applicable)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-abc123-x-default-x-my-vcluster
  namespace: vcluster-my-vcluster
  labels:
    vcluster.loft.sh/namespace: default
    vcluster.loft.sh/managed-by: my-vcluster
Scheduling & Execution
Depending on architecture:
Shared/Dedicated: Host scheduler places pod on host nodes
Private/Standalone: Virtual scheduler (if enabled) or default scheduler places pod on virtual nodes
Status Synchronization
The syncer watches the physical pod status and updates the virtual pod:

status:
  phase: Running
  podIP: 10.244.1.5
  conditions:
  - type: Ready
    status: "True"
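The status-sync step amounts to copying observed fields from the physical object back onto the virtual one. The `Pod` and `PodStatus` types below are simplified stand-ins for the real Kubernetes API types, used only to illustrate the direction of the copy:

```go
package main

import "fmt"

// PodStatus is a simplified stand-in for the Kubernetes pod status.
type PodStatus struct {
	Phase string
	PodIP string
}

// Pod is a simplified stand-in for the Kubernetes pod object.
type Pod struct {
	Name   string
	Status PodStatus
}

// syncStatus copies the status observed on the physical pod back onto
// the virtual pod, so users of the virtual cluster see live state.
func syncStatus(virtual *Pod, physical Pod) {
	virtual.Status = physical.Status
}

func main() {
	virtual := Pod{Name: "nginx-abc123"}
	physical := Pod{
		Name:   "nginx-abc123-x-default-x-my-vcluster",
		Status: PodStatus{Phase: "Running", PodIP: "10.244.1.5"},
	}
	syncStatus(&virtual, physical)
	fmt.Println(virtual.Status.Phase, virtual.Status.PodIP)
	// Running 10.244.1.5
}
```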
Namespace Behavior
Virtual clusters provide complete namespace isolation:
Virtual Cluster Perspective
Users see a normal Kubernetes cluster with full namespace control:
kubectl create namespace team-a
kubectl create namespace team-b
kubectl get namespaces
# NAME STATUS AGE
# default Active 10m
# kube-system Active 10m
# team-a Active 1m
# team-b Active 1m
Host Cluster Perspective (Shared/Dedicated)
Namespaces in the virtual cluster don’t create host namespaces by default. All resources go into the vCluster’s host namespace:
# On host cluster
kubectl get pods -n vcluster-my-vcluster
# NAME READY STATUS
# nginx-abc123-x-default-x-my-vcluster 1/1 Running
# app-xyz789-x-team-a-x-my-vcluster 1/1 Running
You can enable namespace syncing to create real host namespaces:

sync:
  toHost:
    namespaces:
      enabled: true
This is useful for integrating with host-level policies and tools.
DNS and Service Discovery
Each virtual cluster has its own DNS domain:
networking:
  advanced:
    clusterDomain: "cluster.local" # Default
Service DNS
Services are resolvable within the virtual cluster:
# Create service in default namespace
kubectl expose deployment nginx --port=80

# Accessible at:
# http://nginx.default.svc.cluster.local
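The fully qualified service name follows the standard Kubernetes pattern with the virtual cluster's own cluster domain substituted in. A small sketch (the function name is illustrative):

```go
package main

import "fmt"

// serviceFQDN builds the in-cluster DNS name for a Service using the
// virtual cluster's own cluster domain; "cluster.local" matches the
// clusterDomain default shown above.
func serviceFQDN(service, namespace, clusterDomain string) string {
	return fmt.Sprintf("%s.%s.svc.%s", service, namespace, clusterDomain)
}

func main() {
	fmt.Println(serviceFQDN("nginx", "default", "cluster.local"))
	// nginx.default.svc.cluster.local
}
```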
CoreDNS Configuration
vCluster deploys CoreDNS for virtual cluster DNS:
controlPlane:
  coredns:
    enabled: true
    embedded: false # Run as separate deployment
Standard CoreDNS
Embedded CoreDNS (Pro)
controlPlane:
  coredns:
    enabled: true
    deployment:
      replicas: 1
      resources:
        requests:
          cpu: 20m
          memory: 64Mi
Networking Models
Networking varies by architecture:
Shared/Dedicated Nodes
Pod IPs: From host cluster's pod CIDR
Service IPs: Virtual cluster's service CIDR (default: 10.96.0.0/12)
Ingress: Synced to host cluster or use LoadBalancer
networking:
  services:
    cidr: "10.96.0.0/12" # Virtual service CIDR
Private/Standalone Nodes
Pod IPs: Own pod CIDR with own CNI
Service IPs: Own service CIDR
Ingress: Native to virtual cluster
networking:
  podCIDR: "10.244.0.0/16" # Virtual pod CIDR
controlPlane:
  service:
    spec:
      type: NodePort # Expose control plane
Storage Integration
Shared/Dedicated Modes
Storage classes are synced from the host cluster:
sync:
  fromHost:
    storageClasses:
      enabled: auto # Enabled when virtual scheduler is off
PVCs in the virtual cluster create PVCs in the host cluster.
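For instance, a PVC created inside the virtual cluster results in a matching PVC in the vCluster's host namespace, backed by a synced StorageClass. The names below are illustrative:

```yaml
# Created inside the virtual cluster; the syncer creates a renamed
# counterpart in the host namespace.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data        # illustrative name
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```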
Private/Standalone Modes
Storage is fully independent:
deploy:
  localPathProvisioner:
    enabled: true # Provides local storage
Install any CSI driver directly in the virtual cluster.
High Availability
Run multiple control plane replicas for production:
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true
  statefulSet:
    highAvailability:
      replicas: 3
      leaseDuration: 60
      renewDeadline: 40
      retryPeriod: 15
Leader Election: Only one replica is active; the others stand by for failover.
Resource Requests
Adjust based on cluster size:
controlPlane:
  statefulSet:
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        memory: 4Gi
        ephemeral-storage: 10Gi
Persistence
Use appropriate storage for the backing store:
controlPlane:
  statefulSet:
    persistence:
      volumeClaim:
        enabled: true
        size: 5Gi
        storageClass: "fast-ssd"
Limitations
Virtual clusters have some constraints compared to physical clusters:
Cannot modify host cluster resources: Virtual cluster users cannot access or modify resources outside their virtual cluster (unless explicitly granted permissions).
Kernel-level isolation: In shared/dedicated modes, workloads share the host kernel. Use private/standalone for stronger isolation.
Node-level features: Features requiring direct node access (e.g., DaemonSets on host nodes, privileged containers in shared mode) have limitations.
Next Steps
Shared Nodes: Learn about the highest-density deployment mode.
Dedicated Nodes: Explore compute isolation with labeled node pools.
Private Nodes: Discover full CNI/CSI isolation for compliance.
Standalone Mode: Run vCluster without a host cluster.