
Getting Started with Vanilla (Kind)

Kind (Kubernetes in Docker) runs upstream Kubernetes clusters using Docker containers as nodes. It provides a lightweight, fast way to run standard Kubernetes locally without requiring a VM. This guide shows you how to use Vanilla (Kind) with KSail for local development, testing, and learning.

Vanilla is KSail's name for the upstream Kubernetes distribution running via Kind. It provides:

  • Standard Kubernetes: Unmodified upstream K8s with full compatibility
  • Fast cluster creation: 30-60 seconds for a working cluster
  • Docker-only: Runs entirely in Docker containers (no VM overhead)
  • Multi-node support: Simulate production topologies locally
  • Native configuration: Uses standard kind.yaml files (no lock-in)

Vanilla (Kind) is ideal for:

  • Learning Kubernetes: Upstream K8s without distribution-specific modifications
  • Application development: Fast local iterative development with full K8s API compatibility
  • CI/CD testing: Ephemeral clusters for integration tests and validation
  • GitOps workflows: Test Flux/ArgoCD configurations locally before production
  • Multi-node testing: Simulate production topologies (control plane + workers)
  • Kubernetes contributors: Test upstream features and changes

Consider alternatives if you need:

  • Minimal resource usage: K3s is lighter (single-binary, fewer components)
  • Built-in LoadBalancer: K3s has ServiceLB; Vanilla requires Cloud Provider KIND
  • Virtual clusters: VCluster provides isolation without separate nodes
  • Production deployments: Talos offers immutable infrastructure and enhanced security
  • Embedded storage: K3s includes local-path-provisioner; Vanilla requires explicit CSI

Create your first Vanilla cluster in under 60 seconds.

Prerequisites:

  • Docker Desktop or Docker Engine installed and running
  • The docker ps command works
Initialize the cluster configuration:
ksail cluster init \
--name my-cluster \
--distribution Vanilla \
--control-planes 1 \
--workers 2

This creates:

  • ksail.yaml — KSail configuration
  • kind.yaml — Kind-specific cluster configuration
Create the cluster:
ksail cluster create

KSail will:

  1. Generate Kind configuration from ksail.yaml
  2. Create Docker containers as Kubernetes nodes
  3. Install Kubernetes components
  4. Configure kubectl context
  5. Install CNI (Cilium by default)
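For the quick-start cluster (1 control plane, 2 workers), the generated kind.yaml is roughly equivalent to the following sketch. The exact output may differ; in particular, the disableDefaultCNI setting is an assumption based on KSail installing Cilium itself:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
networking:
  # Assumed: Kind's default kindnet CNI is disabled so Cilium can be installed
  disableDefaultCNI: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```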

Expected output:

✓ Creating Vanilla cluster with Kind...
✓ Installing Cilium CNI...
✓ Cluster ready!
Cluster: my-cluster
Nodes: 3 (1 control plane, 2 workers)
Provider: Docker
Verify the cluster:
# Check cluster info
ksail cluster info
# View nodes
kubectl get nodes
# Expected output:
# NAME                       STATUS   ROLES           AGE   VERSION
# my-cluster-control-plane   Ready    control-plane   2m    v1.31.0
# my-cluster-worker          Ready    <none>          2m    v1.31.0
# my-cluster-worker2         Ready    <none>          2m    v1.31.0
Deploy a test application:
# Create a deployment
kubectl create deployment nginx --image=nginx
# Expose as ClusterIP service
kubectl expose deployment nginx --port=80
# Verify deployment
kubectl get pods
Clean up when finished:
# Delete cluster and all resources
ksail cluster delete

The ksail.yaml file controls your cluster:

# yaml-language-server: $schema=https://raw.githubusercontent.com/devantler-tech/ksail/main/schemas/ksail-config.schema.json
apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: Vanilla
    distributionConfig: kind.yaml

Customize Kind behavior in kind.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Port mapping for accessing services
    extraPortMappings:
      - containerPort: 80
        hostPort: 8080
        protocol: TCP
  - role: worker
  - role: worker
# Containerd configuration
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
      endpoint = ["http://localhost:5000"]

Enable LoadBalancer services with Cloud Provider KIND:

ksail.yaml
apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: Vanilla
    loadBalancer: Enabled

When enabled, KSail installs Cloud Provider KIND, which:

  • Creates cpk-* Docker containers as LoadBalancer proxies
  • Assigns external IPs to LoadBalancer services
  • Automatically cleans up on cluster deletion

Example LoadBalancer service:

# Create LoadBalancer service
kubectl expose deployment nginx --type=LoadBalancer --port=80
# Get external IP
kubectl get svc nginx
# Access from host
curl http://<EXTERNAL-IP>

Configure registry mirrors to avoid rate limits. KSail enables docker.io, ghcr.io, quay.io, and registry.k8s.io mirrors by default. Override with --mirror-registry flags:

ksail cluster init --name my-cluster --distribution Vanilla \
--mirror-registry 'docker.io=https://registry-1.docker.io' \
--mirror-registry 'ghcr.io=https://ghcr.io' \
--mirror-registry 'quay.io=https://quay.io' \
--mirror-registry 'registry.k8s.io=https://registry.k8s.io'

KSail injects containerd registry configuration into all Kind nodes.
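The exact configuration KSail injects is not shown here, but a hand-written kind.yaml equivalent for a single mirror would follow the containerdConfigPatches pattern shown earlier. This is an illustrative sketch for docker.io only:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  # Route docker.io image pulls through the configured mirror endpoint
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
```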

Scenario: Develop a microservices application with hot-reload workflows.

# Initialize development cluster
ksail cluster init --name dev --distribution Vanilla --workers 1
# Create cluster
ksail cluster create
# Deploy application
kubectl apply -f k8s/
# Watch for changes and redeploy
kubectl rollout restart deployment/my-app

Why Vanilla: Fast cluster creation, standard K8s API, compatible with production manifests.
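The contents of the k8s/ directory are project-specific; a minimal manifest for the my-app deployment targeted by the rollout restart command might look like this (the image name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:dev # hypothetical locally built image
```

Locally built images can be pushed into the cluster nodes with `kind load docker-image my-app:dev --name dev` before deploying.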

Scenario: Run integration tests in ephemeral Kubernetes clusters.

# CI pipeline script
ksail cluster init --name ci-test --distribution Vanilla
ksail cluster create
# Run tests
kubectl apply -f test-manifests/
./run-integration-tests.sh
# Cleanup
ksail cluster delete

Why Vanilla: Predictable environment, fast creation/deletion, standard Kubernetes.

Scenario: Test GitOps configurations locally before production deployment.

# Initialize with GitOps
ksail cluster init \
--name gitops-test \
--distribution Vanilla \
--gitops-engine Flux
# Create and bootstrap Flux
ksail cluster create
ksail workload apply
# Test manifests reconciliation
kubectl get kustomizations -A

Why Vanilla: Full compatibility with production GitOps workflows, standard K8s APIs.
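The Kustomization resources listed by the command above are Flux custom resources. A minimal example you might reconcile locally looks like this (the name, path, and source are illustrative):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  path: ./k8s
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```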

Scenario: Test pod scheduling, node affinity, and multi-node features.

# Create 3-node cluster
ksail cluster init \
--name topology-test \
--distribution Vanilla \
--control-planes 1 \
--workers 3
ksail cluster create
# Test node affinity
kubectl label nodes topology-test-worker zone=us-east-1a
kubectl label nodes topology-test-worker2 zone=us-east-1b
# Deploy with affinity rules
kubectl apply -f deployment-with-affinity.yaml

Why Vanilla: Simulates multi-node production environments, supports node labels and taints.
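The deployment-with-affinity.yaml file referenced above is not included in this guide; a sketch that restricts pods to the zones labeled in the previous step could be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-pinned
spec:
  replicas: 2
  selector:
    matchLabels:
      app: zone-pinned
  template:
    metadata:
      labels:
        app: zone-pinned
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: zone
                    operator: In
                    values: ["us-east-1a", "us-east-1b"]
      containers:
        - name: nginx
          image: nginx
```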

Architecture overview:

┌─────────────────────────────────────────────────────────────┐
│ Docker Host                                                 │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │ Control Plane Node (Docker Container)                 │  │
│  │  • kube-apiserver                                     │  │
│  │  • kube-controller-manager                            │  │
│  │  • kube-scheduler                                     │  │
│  │  • etcd                                               │  │
│  │  • kubelet                                            │  │
│  │  • containerd (container runtime)                     │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌──────────────────────┐   ┌──────────────────────┐        │
│  │ Worker Node 1        │   │ Worker Node 2        │        │
│  │  • kubelet           │   │  • kubelet           │        │
│  │  • containerd        │   │  • containerd        │        │
│  │  • kube-proxy        │   │  • kube-proxy        │        │
│  │  • CNI (Cilium)      │   │  • CNI (Cilium)      │        │
│  └──────────────────────┘   └──────────────────────┘        │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │ Optional: Cloud Provider KIND (cpk-* containers)      │  │
│  │  • LoadBalancer proxy containers                      │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
  1. Node Creation: Kind creates Docker containers running systemd
  2. Kubernetes Install: Standard upstream K8s components installed via kubeadm
  3. Networking: Bridge network connects all nodes
  4. Storage: Each node has ephemeral storage (lost on cluster delete)
  5. LoadBalancer: Optional Cloud Provider KIND creates proxy containers
KSail integration:

  • Provisioner: pkg/svc/provisioner/cluster/kind/ wraps the Kind SDK
  • Infrastructure: the Docker provider manages the container lifecycle (start/stop)
  • Configuration: generates kind.yaml from ksail.yaml declaratively
  • Installers: CNI (Cilium), CSI (local-path-storage), metrics-server, Cloud Provider KIND
Feature comparison with KSail's other distributions:

| Feature           | Vanilla (Kind)        | K3s (K3d)             | Talos                | VCluster             |
| ----------------- | --------------------- | --------------------- | -------------------- | -------------------- |
| Kubernetes Type   | Upstream              | Lightweight           | Upstream             | Virtual              |
| Resource Usage    | Medium                | Low                   | Medium               | Very Low             |
| Startup Time      | 30-60s                | 15-30s                | 60-90s               | 10-20s               |
| LoadBalancer      | Cloud Provider KIND   | ServiceLB (built-in)  | MetalLB/hcloud-ccm   | Host cluster LB      |
| Storage           | local-path (optional) | local-path (built-in) | Hetzner CSI          | Host cluster storage |
| Production Ready  | ❌ Local only         | ❌ Local only         | ✅ Docker + Cloud    | ❌ Virtual only      |
| Multi-Node        | ✅ Yes                | ✅ Yes                | ✅ Yes               | ❌ Single pod        |
| Shell Access      | ✅ Docker exec        | ✅ Docker exec        | ❌ No shell          | ✅ kubectl exec      |
| GitOps Support    | ✅ Full               | ✅ Full               | ✅ Full              | ✅ Full              |
| Best For          | Learning, Testing     | Quick dev, CI/CD      | Production, Security | Multi-tenancy, CI    |
| API Compatibility | 100% Upstream         | 99% Compatible        | 100% Upstream        | 100% Upstream        |
| Node Isolation    | Full                  | Full                  | Full                 | Virtual              |
| Cluster Lifecycle | Create/Delete         | Create/Delete         | Create/Delete/Update | Create/Delete        |

Symptom: ksail cluster create fails with Docker errors.

Solutions:

# Check Docker is running
docker ps
# Check available disk space
df -h
# Clean up unused containers, networks, and dangling images (safer)
docker system prune
# OPTIONAL (more aggressive):
# This will remove ALL unused images (not just dangling ones) and may affect other projects.
# Use only if the above prune is not sufficient and you understand the impact.
# docker system prune -a
# Retry cluster creation
ksail cluster create

Symptom: kubectl get nodes shows NotReady status.

Solutions:

# Check node conditions
kubectl describe node <node-name>
# Check CNI pods
kubectl get pods -n kube-system | grep cilium
# Restart CNI if needed
kubectl rollout restart daemonset/cilium -n kube-system

Symptom: LoadBalancer service never gets EXTERNAL-IP.

Solutions:

# Check Cloud Provider KIND is enabled
kubectl get pods -A | grep cloud-provider-kind
# Verify LoadBalancer enabled in ksail.yaml
grep -A2 loadBalancer ksail.yaml
# Enable LoadBalancer if missing
# Edit ksail.yaml and set spec.cluster.loadBalancer: Enabled
# Update cluster
ksail cluster update

Symptom: address already in use error during cluster creation.

Solutions:

# Check existing Kind clusters
kind get clusters
# Delete conflicting cluster
kind delete cluster --name <conflicting-cluster>
# Or use different name
ksail cluster init --name my-cluster-2

Symptom: Cluster creation fails with resource errors.

Solutions:

# Use fewer nodes (single-node cluster)
ksail cluster init --name my-cluster --distribution Vanilla --control-planes 1 --workers 0
# Increase Docker resources (Docker Desktop)
# Settings → Resources → Adjust CPU/Memory/Disk
# Clean up Docker
docker system prune -a --volumes

Create production-like topologies:

ksail.yaml
apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: Vanilla

ksail cluster init \
--name multi-node \
--distribution Vanilla \
--control-planes 3 \
--workers 5

Kind determines the Kubernetes version via the node image tag. To use a specific version, configure the node image in your kind.yaml:

kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.31.0
  - role: worker
    image: kindest/node:v1.31.0

Supported versions: Check Kind release notes for image availability.

Override default CNI:

ksail cluster init --name my-cluster --distribution Vanilla --cni Calico

Available CNI options: Default, Cilium, Calico.

Expose services on host ports:

kind.yaml
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080 # NodePort service
        hostPort: 8080
        protocol: TCP

Access services at http://localhost:8080.
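The 30080 mapping pairs with a NodePort service. For example, to expose the nginx deployment from the quick start (the service name is illustrative; nodePort must match the containerPort in kind.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```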

Enable persistent storage:

ksail cluster init --name my-cluster --distribution Vanilla --csi Enabled

Create PersistentVolumeClaims:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
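Local-path provisioners typically use WaitForFirstConsumer volume binding, so the claim stays Pending until a pod mounts it. A sketch pod consuming my-pvc (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
```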
Manage the cluster lifecycle:
# Stop cluster (preserves state)
ksail cluster stop
# Start stopped cluster
ksail cluster start
# Update cluster (may require recreation)
ksail cluster update
# List all clusters
ksail cluster list
# Get cluster info
ksail cluster info

KSail generates standard kind.yaml files:

# Use Kind CLI directly
kind create cluster --config kind.yaml
# Export kubeconfig for an existing Kind cluster
kind export kubeconfig --name my-cluster

No vendor lock-in: configurations are portable, and KSail can interact with any Kind cluster accessible via your kubeconfig.

Related distributions:

  • K3s (K3d): Lightweight K3s for resource-constrained environments and CI/CD
  • Talos: Immutable infrastructure for production
  • VCluster: Virtual clusters for multi-tenancy