Getting Started with VCluster

VCluster creates lightweight, isolated virtual Kubernetes clusters that run as Docker containers. This guide walks you through creating your first VCluster, understanding when to use it, and exploring common use cases.

VCluster provides virtual Kubernetes clusters: fully functional Kubernetes control planes that run inside Docker containers instead of requiring a host cluster. KSail uses the Vind Docker driver to run VCluster directly on Docker, making it well suited to development and CI/CD without the overhead of a full Kubernetes cluster.

  • Lightweight: Control plane runs in a single Docker container
  • Fast startup: Clusters ready in seconds
  • Isolated: Each VCluster is completely isolated
  • Cost-effective: Share Docker infrastructure across multiple clusters
  • CI/CD friendly: Perfect for ephemeral test environments
  • Multi-tenancy: Run multiple isolated clusters on one machine

VCluster excels in these scenarios:

CI/CD Pipelines

  • Ephemeral test clusters per PR or test suite
  • Fast creation/deletion without infrastructure overhead
  • Complete isolation between test runs
  • Parallel test execution on shared runners

Development Workflows

  • Quick feature branch testing
  • Local integration testing
  • Microservices development with isolated environments
  • Rapid cluster recreation during development

Multi-Tenancy

  • Multiple isolated environments on one machine
  • Team collaboration without cluster conflicts
  • Demo environments for each customer or project
  • Sandbox environments for experimentation

Learning and Experimentation

  • Safe environment for Kubernetes learning
  • Test manifests without affecting other projects
  • Experiment with cluster configurations
  • Quick reset to clean state

When VCluster is not the best fit:

  • Production workloads → Use Talos (security-focused) or K3s (resource-efficient)
  • Bare metal or cloud deployment → Use Talos with the Hetzner provider
  • Upstream Kubernetes testing → Use Vanilla (Kind) for standard Kubernetes
  • Low-resource devices → Use K3s for a minimal memory footprint
  • Advanced networking features → Use Vanilla or Talos with full CNI control

Create a new VCluster configuration:

Terminal window
mkdir my-vcluster
cd my-vcluster
ksail cluster init \
  --name dev-cluster \
  --distribution VCluster \
  --gitops-engine Flux

This generates:

  • ksail.yaml – KSail cluster configuration
  • vcluster.yaml – VCluster Helm values (native configuration)
  • k8s/kustomization.yaml – Kustomize entry point for your Kubernetes manifests
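The generated k8s/kustomization.yaml starts as an empty Kustomize entry point. A hypothetical example after adding a first manifest (the nginx-deployment.yaml file name is illustrative, not generated by KSail):

```yaml
# k8s/kustomization.yaml - hypothetical state after adding one manifest
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - nginx-deployment.yaml # illustrative; list your own manifest files here
```

The GitOps engine reconciles whatever this file references into the cluster.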

Create the cluster:

Terminal window
ksail cluster create

What happens:

  1. Docker containers are created for the VCluster control plane
  2. Optional worker nodes are started (if configured)
  3. GitOps engine (Flux or ArgoCD) is bootstrapped
  4. Kubeconfig is automatically configured

Expected output:

✓ Creating VCluster cluster 'dev-cluster'...
✓ Waiting for control plane to be ready...
✓ Cluster created successfully
✓ Kubeconfig updated: kubectl config use-context dev-cluster

Verify the cluster is running:

Terminal window
# Check cluster info
ksail cluster info
# List running containers
docker ps | grep dev-cluster
# Test kubectl access
kubectl get nodes
kubectl get pods --all-namespaces

Deploy a test workload:

Terminal window
# Create a simple deployment
kubectl create deployment nginx --image=nginx:1.25
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the service
kubectl get svc nginx

Clean up:

Terminal window
# Delete the cluster when done
ksail cluster delete
# Verify containers are removed
docker ps | grep dev-cluster

The generated ksail.yaml controls cluster behavior:

apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: VCluster
    gitOpsEngine: Flux # or ArgoCD

Customize VCluster behavior in vcluster.yaml:

# Control plane configuration
controlPlane:
  distro:
    k3s:
      enabled: true
  statefulSet:
    resources:
      limits:
        memory: 2Gi
      requests:
        memory: 256Mi

# Sync options - what gets synced from virtual to host
sync:
  persistentVolumes:
    enabled: true
  storageClasses:
    enabled: true
  nodes:
    enabled: true

# Networking
networking:
  replicateServices:
    toHost:
      - from: default
        to: vcluster-default

See the vCluster configuration documentation for all options.

Create isolated clusters for each test run:

Terminal window
# In CI pipeline
SHA=$(git rev-parse --short HEAD)
ksail cluster init --name "test-${SHA}" --distribution VCluster
ksail cluster create
# Run tests
kubectl apply -f test-manifests/
pytest integration/
# Cleanup
ksail cluster delete

Benefits:

  • Each test run gets a fresh, isolated cluster
  • No cross-contamination between runs
  • Fast creation/deletion (seconds vs minutes)
  • Parallel test execution on same runner
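Cluster names must be valid lowercase DNS labels, so branch names need sanitizing before they can be used. A minimal sketch (the branch and SHA values are hardcoded here for illustration; in CI they would come from git or pipeline variables):

```shell
# Sanitize a branch name into a DNS-safe cluster name (illustrative values).
branch="feature/New_API" # in CI: from your pipeline's branch variable
sha="a1b2c3d"            # in CI: $(git rev-parse --short HEAD)

# Lowercase, then map '/' and '_' to '-'.
name=$(printf '%s' "$branch" | tr '[:upper:]' '[:lower:]' | tr '_/' '--')
name="test-${name}-${sha}"

echo "$name" # test-feature-new-api-a1b2c3d
```

The result can then be passed to ksail cluster init --name "$name".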

Maintain separate environments on one machine:

Terminal window
# Create dev environment
ksail cluster init --name dev --distribution VCluster
ksail cluster create
# Create staging environment
ksail cluster init --name staging --distribution VCluster
ksail cluster create
# Create production-like environment
ksail cluster init --name prod-sim --distribution VCluster
ksail cluster create
# Switch between environments
kubectl config use-context dev
kubectl config use-context staging
kubectl config use-context prod-sim
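If switching contexts by hand gets tedious, a small wrapper can guard against typos. This is a hypothetical local shell function, not a KSail command, and it assumes the context names match the cluster names created above:

```shell
# Hypothetical helper: switch kubectl context by environment name.
use_env() {
  case "$1" in
    dev|staging|prod-sim) kubectl config use-context "$1" ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
}
```

Usage: use_env staging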

Each team member can have isolated services:

Terminal window
# Developer 1
ksail cluster init --name alice-services --distribution VCluster
ksail cluster create
kubectl apply -f alice-manifests/
# Developer 2
ksail cluster init --name bob-services --distribution VCluster
ksail cluster create
kubectl apply -f bob-manifests/
# No conflicts - completely isolated!

Test features in isolation before merging:

Terminal window
# Create cluster per feature branch
git checkout feature/new-api
ksail cluster init --name feature-new-api --distribution VCluster
ksail cluster create
kubectl apply -f manifests/
# Test the feature
curl http://localhost:8080/api/v2
# When done, delete and switch branches
ksail cluster delete
git checkout feature/ui-updates
ksail cluster init --name feature-ui --distribution VCluster
ksail cluster create
┌──────────────────────────────────────────┐
│ Your Host Machine                        │
│                                          │
│   ┌──────────────────────────────────┐   │
│   │ Docker Engine                    │   │
│   │                                  │   │
│   │  ┌────────────────────────────┐  │   │
│   │  │ VCluster Control Plane     │  │   │
│   │  │ (Docker Container)         │  │   │
│   │  │                            │  │   │
│   │  │ • API Server               │  │   │
│   │  │ • Controller Manager       │  │   │
│   │  │ • Scheduler                │  │   │
│   │  │ • etcd                     │  │   │
│   │  └────────────────────────────┘  │   │
│   │                                  │   │
│   │  ┌────────────────────────────┐  │   │
│   │  │ Worker Nodes (Optional)    │  │   │
│   │  │ (Docker Containers)        │  │   │
│   │  └────────────────────────────┘  │   │
│   └──────────────────────────────────┘   │
│                                          │
│ Your applications run in the VCluster    │
│ and appear as normal pods from the       │
│ virtual cluster's perspective            │
└──────────────────────────────────────────┘
  1. Virtual Control Plane: Runs in a Docker container with Kubernetes API server, scheduler, controller manager, and etcd
  2. Networking: Uses Docker networking for pod-to-pod communication
  3. Storage: Volumes are managed within the virtual cluster
  4. Syncing: Selected resources (pods, services, configmaps) are synced to host cluster if needed
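Which resources cross that boundary is controlled by the sync section of vcluster.yaml. A hypothetical tweak, following the schema shown earlier in this guide (key names can differ between vCluster versions, so check the configuration reference for yours):

```yaml
# vcluster.yaml - hypothetical sync tweaks
sync:
  configMaps:
    enabled: true  # share configuration with the host side
  secrets:
    enabled: false # keep secrets inside the virtual cluster
```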
| Feature             | VCluster | Vanilla (Kind) | K3s        | Talos      |
|---------------------|----------|----------------|------------|------------|
| Startup Time        | ~10s     | ~30s           | ~20s       | ~60s       |
| Resource Usage      | Low      | Medium         | Low        | Medium     |
| Multi-Tenancy       | ✅       | ❌             | ❌         | ❌         |
| Production Ready    | No       | No             | Yes        | Yes        |
| Security Isolation  | High     | Medium         | Medium     | High       |
| Worker Nodes        | Optional | Required       | Required   | Required   |
| GitOps Support      | ✅       | ✅             | ✅         | ✅         |
| Best For            | CI/CD    | Testing        | Production | Production |
| Upstream Kubernetes | No (K3s) | Yes            | No         | Yes        |

Symptom: ksail cluster create hangs or fails

Automatic retry (create phase): KSail automatically retries transient startup failures up to 3 times, with 5-second delays between attempts. If you see log messages like Retrying vCluster create (attempt 2/3)..., this is expected: KSail is recovering from a transient Docker or D-Bus error, and no action is needed from you.

Automatic retry (connect phase): After the cluster is created, KSail also retries the readiness check up to 3 times. Each attempt allows up to 3 minutes for the cluster to become ready, giving an effective timeout of roughly 9 minutes. If you see log messages like Retrying vCluster connect (attempt 2/3)..., this is expected behavior on slower machines or CI runners; KSail will keep waiting without any action needed from you.

Common causes for persistent failures:

  1. Docker not running

    Terminal window
    docker ps # Should list containers
  2. Port conflicts

    Terminal window
    # Check if port 6443 is in use
    lsof -i :6443
  3. Insufficient resources

    Terminal window
    # Ensure Docker has enough memory (2GB+ recommended)
    docker info | grep -i memory

Fix (if all retry attempts fail):

Terminal window
# Delete and recreate
ksail cluster delete
ksail cluster create

Symptom: ksail cluster create fails with exit status 22 (EINVAL) on CI runners.

KSail automatically retries transient VCluster startup failures with up to 3 attempts and a 5-second delay between attempts, cleaning up partial state between retries. If the issue persists after automatic retries, check Docker resource limits and D-Bus availability on the runner.
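The retry loop described above can be sketched roughly like this (retry_create and the create_cluster stand-in are hypothetical names for illustration, not part of the KSail CLI):

```shell
# Sketch: up to 3 attempts with a 5-second pause between them,
# mirroring the retry behavior described above.
create_cluster() {
  ksail cluster create # stand-in for the real create call
}

retry_create() {
  attempts=3
  for i in $(seq 1 "$attempts"); do
    if create_cluster; then
      return 0
    fi
    # KSail also cleans up partial state here before retrying.
    if [ "$i" -lt "$attempts" ]; then
      echo "Retrying vCluster create (attempt $((i + 1))/$attempts)..."
      sleep 5
    fi
  done
  return 1
}
```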

Symptom: kubectl get nodes returns connection errors

Fix:

Terminal window
# Verify kubeconfig context
kubectl config current-context
# Should show your VCluster name
# If not, switch contexts:
kubectl config use-context dev-cluster
# Or regenerate kubeconfig
ksail cluster delete
ksail cluster create
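In scripts, a hypothetical pre-flight guard can compare the current context against the cluster you expect before running any commands (the dev-cluster name matches the example cluster above):

```shell
# Hypothetical guard: warn when kubectl points at a different cluster.
expected="dev-cluster"
current=$(kubectl config current-context 2>/dev/null || echo "none")
if [ "$current" != "$expected" ]; then
  echo "kubectl context is '$current', expected '$expected'" >&2
fi
```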

Symptom: Pods remain in Pending state

Diagnosis:

Terminal window
kubectl describe pod <pod-name>
# Look for scheduling errors

Common fixes:

  1. Check resources

    Terminal window
    kubectl top nodes
  2. Verify node readiness

    Terminal window
    kubectl get nodes
    # All nodes should show Ready

Symptom: Operations are slower than expected

Possible causes:

  • Docker resource limits too low
  • Too many clusters running simultaneously
  • System under heavy load

Optimization:

Terminal window
# Increase Docker resources in Docker Desktop settings
# Recommended: 4GB RAM, 2 CPUs minimum
# Delete unused clusters
ksail cluster list
ksail cluster delete unused-cluster
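A hypothetical bulk-cleanup sketch for ephemeral clusters, building on the list and delete commands above. It assumes ksail cluster list prints one cluster name per line; verify your version's actual output format before relying on this:

```shell
# Delete every cluster whose name starts with "test-" (hypothetical sketch).
is_ephemeral() {
  case "$1" in
    test-*) return 0 ;;
    *) return 1 ;;
  esac
}

names=$(ksail cluster list 2>/dev/null) || names=""
for name in $names; do
  if is_ephemeral "$name"; then
    ksail cluster delete "$name"
  fi
done
```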

By default, VCluster runs workload pods inside the control plane container. For more realistic multi-node scenarios, enable worker nodes:

vcluster.yaml
nodes:
  enabled: true
  count: 3

This creates separate Docker containers acting as worker nodes.

VCluster can run different Kubernetes distributions internally:

vcluster.yaml
controlPlane:
  distro:
    k3s:
      enabled: true # Default, lightweight
    # OR
    k8s:
      enabled: true # Standard Kubernetes
    # OR
    k0s:
      enabled: true # K0s distribution

Control resource consumption:

vcluster.yaml
controlPlane:
  statefulSet:
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: "100m"
        memory: 256Mi

Other distribution options:

  • Vanilla (Kind): Upstream Kubernetes for maximum compatibility
  • K3s (K3d): Lightweight K3s for resource-constrained environments and CI/CD
  • Talos: Immutable infrastructure for production