
Getting Started with VCluster

VCluster creates lightweight, isolated virtual Kubernetes clusters that run as Docker containers. This guide walks you through creating your first VCluster, understanding when to use it, and exploring common use cases.

VCluster provides virtual Kubernetes clusters: fully functional Kubernetes control planes that run inside Docker containers instead of requiring a host cluster. KSail uses the Vind Docker driver to run VCluster directly on Docker, making it perfect for development and CI/CD without the overhead of a full Kubernetes cluster.

  • Lightweight: Control plane runs in a single Docker container
  • Fast startup: Clusters ready in seconds
  • Isolated: Each virtual cluster has its own API server, state, and kubeconfig context
  • Cost-effective: Share Docker infrastructure across multiple clusters
  • CI/CD friendly: Perfect for ephemeral test environments
  • Multi-tenancy: Run multiple isolated clusters on one machine

VCluster excels at CI/CD pipelines (ephemeral per-PR clusters, parallel test execution on shared runners), local multi-environment development (isolated environments per feature branch or team member), and learning (safe, disposable Kubernetes sandboxes).
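
As a concrete sketch of the CI/CD use case, a per-PR ephemeral cluster might look like the following hypothetical GitHub Actions job. The workflow names, the test script, and the assumption that ksail is preinstalled on the runner and that the generated ksail.yaml/vcluster.yaml files are committed to the repository are all illustrative, not part of KSail's documented setup:

```yaml
# Hypothetical CI job: create an ephemeral VCluster per pull request,
# run tests against it, and always tear it down afterwards.
name: pr-tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create ephemeral cluster
        run: ksail cluster create   # assumes ksail is preinstalled and config is in the repo
      - name: Run tests
        run: kubectl get nodes && ./run-integration-tests.sh   # placeholder test script
      - name: Tear down
        if: always()                # delete the cluster even when tests fail
        run: ksail cluster delete
```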

Consider other distributions for production workloads (Talos), cloud deployment (Talos with Hetzner Cloud or Omni), standard upstream Kubernetes testing (Vanilla), low-resource devices (K3s), or advanced CNI control (Vanilla or Talos).

Create a new VCluster configuration:

mkdir my-vcluster
cd my-vcluster
ksail cluster init \
  --name dev-cluster \
  --distribution VCluster \
  --gitops-engine Flux

This generates:

  • ksail.yaml – KSail cluster configuration
  • vcluster.yaml – VCluster Helm values (native configuration)
  • k8s/kustomization.yaml – Directory for Kubernetes manifests
Create the cluster:

ksail cluster create

What happens:

  1. Docker containers are created for the VCluster control plane
  2. Optional worker nodes are started (if configured)
  3. GitOps engine (Flux or ArgoCD) is bootstrapped
  4. Kubeconfig is automatically configured

Expected output:

✓ Creating VCluster cluster 'dev-cluster'...
✓ Waiting for control plane to be ready...
✓ Cluster created successfully
✓ Kubeconfig updated: kubectl config use-context dev-cluster
Verify the cluster is running:

# Check cluster info
ksail cluster info
# List running containers
docker ps | grep dev-cluster
# Test kubectl access
kubectl get nodes
kubectl get pods --all-namespaces
Deploy a test workload:

# Create a simple deployment
kubectl create deployment nginx --image=nginx:1.25
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the service
kubectl get svc nginx
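
The same workload can be expressed declaratively, which fits the GitOps flow (for example, committed under the generated k8s/ directory). This manifest is the standard Kubernetes equivalent of the two imperative commands above:

```yaml
# Deployment and NodePort Service equivalent to:
#   kubectl create deployment nginx --image=nginx:1.25
#   kubectl expose deployment nginx --port=80 --type=NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Apply it with kubectl apply -f, or let the bootstrapped GitOps engine reconcile it from the repository.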
Clean up:

# Delete the cluster when done
ksail cluster delete
# Verify containers are removed
docker ps | grep dev-cluster

The generated ksail.yaml controls cluster behavior:

apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: VCluster
    gitOpsEngine: Flux # or ArgoCD

Customize VCluster behavior in vcluster.yaml:

# Control plane configuration
controlPlane:
  distro:
    k3s:
      enabled: true
  statefulSet:
    resources:
      limits:
        memory: 2Gi
      requests:
        memory: 256Mi

# Sync options - what gets synced from virtual to host
sync:
  persistentVolumes:
    enabled: true
  storageClasses:
    enabled: true
  nodes:
    enabled: true

# Networking
networking:
  replicateServices:
    toHost:
      - from: default
        to: vcluster-default

See the vCluster configuration documentation for all options.

For real-world VCluster workflow examples (CI/CD pipelines, multi-environment development, feature branch testing, and microservices isolation), see Use Cases.

┌───────────────────────────────────────────────┐
│ Your Host Machine                             │
│                                               │
│ ┌───────────────────────────────────────┐     │
│ │ Docker Engine                         │     │
│ │                                       │     │
│ │ ┌──────────────────────────┐          │     │
│ │ │ VCluster Control Plane   │          │     │
│ │ │ (Docker Container)       │          │     │
│ │ │                          │          │     │
│ │ │ • API Server             │          │     │
│ │ │ • Controller Manager     │          │     │
│ │ │ • Scheduler              │          │     │
│ │ │ • etcd                   │          │     │
│ │ └──────────────────────────┘          │     │
│ │                                       │     │
│ │ ┌──────────────────────────┐          │     │
│ │ │ Worker Nodes (Optional)  │          │     │
│ │ │ (Docker Containers)      │          │     │
│ │ └──────────────────────────┘          │     │
│ └───────────────────────────────────────┘     │
│                                               │
│ Your applications run in the VCluster         │
│ and appear as normal pods from the            │
│ virtual cluster's perspective                 │
└───────────────────────────────────────────────┘

The VCluster control plane runs in a Docker container and includes a Kubernetes API server, scheduler, controller manager, and etcd. Pods are scheduled to worker node containers (optional) or directly within the control plane container. When VCluster is used in a host-cluster deployment mode (outside KSail's Docker/Vind setup), selected resources can be synced between the virtual cluster and its host cluster.
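
As a sketch of that host-cluster mode, the sync direction is configured in vcluster.yaml. The example below assumes the vCluster v0.20+ configuration schema, and the specific resource toggles are illustrative rather than a recommended set:

```yaml
# Hypothetical host-cluster mode example: push pods and services from the
# virtual cluster to the host, and pull the host's ingress classes in.
sync:
  toHost:
    pods:
      enabled: true
    services:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
```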

For a full comparison across all distributions and supported components, see the Support Matrix.

Symptom: ksail cluster create hangs or fails.

KSail automatically retries transient VCluster startup failures: up to 5 attempts (5-second delays) during create, and up to 3 attempts (3-minute timeout each) during the readiness check. Log messages like Retrying vCluster create (attempt 2/5)... are expected; wait for completion before investigating.
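
If you want the same behavior in your own scripts, a minimal sketch of the retry pattern looks like this. The retry helper is illustrative and not part of KSail:

```shell
#!/bin/sh
# retry <attempts> <delay-seconds> <command...>: re-run a command until it
# succeeds or the attempt budget is exhausted (mirrors the 5-attempt/5-second
# policy described above).
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "Attempt $i/$attempts failed, retrying..." >&2
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example usage: retry 5 5 ksail cluster create
```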

Common causes of persistent failures:

docker ps # Docker must be running
lsof -i :6443 # Check for port 6443 conflicts
docker info | grep -i memory # Ensure 2GB+ RAM allocated to Docker

If all retries fail, run ksail cluster delete && ksail cluster create.

Symptom: kubectl get nodes returns connection errors.

kubectl config current-context # Should show your VCluster name
kubectl config use-context dev-cluster # Switch if needed

If the context is missing, delete and recreate the cluster.

Symptom: Pods remain in Pending state.

kubectl describe pod <pod-name> # Check scheduling errors
kubectl top nodes # Check resource availability
kubectl get nodes # All nodes should show Ready

Increase Docker resources (4 GB RAM, 2 CPUs minimum) in Docker Desktop settings, and delete unused clusters with ksail cluster list / ksail cluster delete.

By default, VCluster runs workload pods inside the control plane container. For more realistic multi-node scenarios, enable worker nodes:

vcluster.yaml
nodes:
  enabled: true
  count: 3

This creates separate Docker containers acting as worker nodes.

VCluster can run different Kubernetes distributions internally:

vcluster.yaml
controlPlane:
  distro:
    k3s:
      enabled: true # Default, lightweight
    # OR
    k8s:
      enabled: true # Standard Kubernetes
    # OR
    k0s:
      enabled: true # K0s distribution

Control resource consumption:

vcluster.yaml
controlPlane:
  statefulSet:
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: "100m"
        memory: 256Mi
Other distributions supported by KSail:

  • Vanilla (Kind): Upstream Kubernetes for maximum compatibility
  • K3s (K3d): Lightweight K3s for resource-constrained environments and CI/CD
  • Talos: Immutable infrastructure for production