VCluster

VCluster creates lightweight, isolated virtual Kubernetes clusters that run as Docker containers. This guide walks you through creating your first VCluster, understanding when to use it, and exploring common use cases.

VCluster provides virtual Kubernetes clusters—fully functional Kubernetes control planes that ordinarily run as pods inside a host Kubernetes cluster. KSail uses the Vind Docker driver to run VCluster directly on Docker instead, without requiring a host cluster. Each VCluster is isolated, starts in seconds, and shares Docker infrastructure—making it ideal for CI/CD and local multi-environment development.

VCluster excels at CI/CD pipelines (ephemeral per-PR clusters, parallel test execution on shared runners), local multi-environment development (isolated environments per feature branch or team member), and learning (safe, disposable Kubernetes sandboxes).
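For example, a per-PR pipeline can create a throwaway cluster, run tests against it, and tear it down. The sketch below assumes a GitHub Actions runner with Docker and ksail preinstalled; the workflow structure and the run-tests.sh script are illustrative placeholders, while the ksail commands are the ones used throughout this guide:

```yaml
# Hypothetical GitHub Actions job: one ephemeral VCluster per pull request.
name: pr-tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest # assumes Docker and ksail are available on the runner
    steps:
      - uses: actions/checkout@v4
      - name: Create ephemeral cluster
        run: ksail cluster create
      - name: Run tests
        run: ./run-tests.sh # placeholder for your test suite
      - name: Tear down
        if: always() # always clean up, even when tests fail
        run: ksail cluster delete
```

Because each job gets its own cluster, PRs can be tested in parallel on shared runners without interfering with each other.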

Consider other distributions for production workloads (Talos), cloud deployment (Talos with Hetzner Cloud or Omni), standard upstream Kubernetes testing (Vanilla), low-resource devices (K3s), or advanced CNI control (Vanilla or Talos).

Create a new VCluster configuration:

```shell
mkdir my-vcluster
cd my-vcluster
ksail cluster init \
  --name dev-cluster \
  --distribution VCluster \
  --gitops-engine Flux
```

This generates:

  • ksail.yaml — KSail cluster configuration
  • vcluster.yaml — VCluster Helm values (native configuration)
  • k8s/kustomization.yaml — Directory for Kubernetes manifests

```shell
ksail cluster create
```

KSail creates Docker containers for the control plane, bootstraps the GitOps engine, and configures kubectl automatically.

```shell
# Check cluster info
ksail cluster info

# List running containers
docker ps | grep dev-cluster

# Test kubectl access
kubectl get nodes
kubectl get pods --all-namespaces
```

```shell
# Create a simple deployment
kubectl create deployment nginx --image=nginx:1.25
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the service
kubectl get svc nginx
```

```shell
# Delete the cluster when done
ksail cluster delete

# Verify containers are removed
docker ps | grep dev-cluster
```

For general KSail YAML options, see the Configuration Reference. Customize VCluster-specific behavior in vcluster.yaml:

```yaml
# Control plane configuration
controlPlane:
  distro:
    k3s:
      enabled: true
  statefulSet:
    resources:
      limits:
        memory: 2Gi
      requests:
        memory: 256Mi

# Sync options - what gets synced from virtual to host
sync:
  persistentVolumes:
    enabled: true
  storageClasses:
    enabled: true
  nodes:
    enabled: true

# Networking
networking:
  replicateServices:
    toHost:
      - from: default
        to: vcluster-default
```

See the vCluster configuration documentation for all options.

For real-world VCluster workflow examples — CI/CD pipelines, multi-environment development, feature branch testing, and microservices isolation — see Use Cases.

VCluster runs a full Kubernetes control plane (API server, scheduler, controller manager, etcd) as pods inside a host Kubernetes cluster. In KSail, the Vind driver runs these components as Docker containers instead. Pods are scheduled onto cluster nodes (the control-plane node or optional worker nodes), each of which runs as a Docker container. When using host-cluster deployment mode, selected resources can be synced between the virtual and host cluster.

For a full comparison across all distributions and supported components, see the Support Matrix.

Symptom: ksail cluster create hangs or fails.

KSail automatically retries transient VCluster startup failures: up to 5 attempts (5-second delays) during create, and up to 3 attempts (3-minute timeout each) during the readiness check. Log messages like Retrying vCluster create (attempt 2/5)... are expected — wait for completion before investigating.

Persistent failures — common causes:

```shell
docker ps                    # Docker must be running
lsof -i :6443                # Check for port 6443 conflicts
docker info | grep -i memory # Ensure 2GB+ RAM allocated to Docker
```

If all retries fail, run ksail cluster delete && ksail cluster create.

Symptom: kubectl get nodes returns connection errors.

```shell
kubectl config current-context          # Should show your VCluster name
kubectl config use-context dev-cluster  # Switch if needed
```

If the context is missing, delete and recreate the cluster.

Symptom: Pods remain in Pending state.

```shell
kubectl describe pod <pod-name> # Check scheduling errors
kubectl top nodes               # Check resource availability
kubectl get nodes               # All nodes should show Ready
```

Increase Docker resources (4 GB RAM, 2 CPUs minimum) in Docker Desktop settings, and delete unused clusters with ksail cluster list / ksail cluster delete.

By default, workload pods are scheduled onto the control-plane node, which runs as a Docker container. For more realistic multi-node scenarios, enable worker nodes:

vcluster.yaml

```yaml
nodes:
  enabled: true
  count: 3
```

This creates separate Docker containers acting as worker nodes.

VCluster supports k3s (default), k8s, and k0s as internal distributions. Set controlPlane.distro.<name>.enabled: true in vcluster.yaml.
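For example, to switch from the default k3s to the k8s distribution, a minimal vcluster.yaml fragment would look like this (only one distribution should be enabled at a time):

```yaml
# vcluster.yaml — switch the internal distribution from k3s to k8s
controlPlane:
  distro:
    k8s:
      enabled: true
```

Recreate the cluster after changing the distribution, since the control plane components differ between distributions.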

Control resource consumption:

vcluster.yaml

```yaml
controlPlane:
  statefulSet:
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: "100m"
        memory: 256Mi
```