
Troubleshooting

Verify Docker is running with docker ps. If not running, start Docker Desktop (macOS) or sudo systemctl start docker (Linux).

Common causes: insufficient resources, firewall blocking Docker network access, or leftover cluster state.

```sh
ksail cluster list
ksail cluster delete --name <cluster-name>
docker system prune -f
```

If you see Error: Port 5000 is already allocated, use a different port (e.g., --local-registry localhost:5050) or kill the conflicting process:

macOS/Linux:

```sh
lsof -ti:5000 | xargs kill -9
```

Windows (PowerShell):

```sh
netstat -ano | findstr :5000
taskkill /PID <id> /F
```
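The PID lookup above can be scripted. A minimal offline sketch, using a hard-coded sample of `netstat -ano` output instead of a live socket table (the sample PID 4242 is made up):

```sh
# Sample line in Windows `netstat -ano` format; the last field is the PID.
sample='  TCP    0.0.0.0:5000           0.0.0.0:0              LISTENING       4242'

# Match the line listening on :5000 and print its final field (the PID).
pid=$(printf '%s\n' "$sample" | awk '/:5000 .*LISTENING/ {print $NF}')
echo "$pid"
```

With live output, the extracted PID would then be passed to taskkill (or kill on Unix).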

KSail automatically retries transient registry errors (HTTP 429, 5xx, timeouts) during cluster create/update and ksail workload push. For authentication errors, verify connectivity and credentials:

```sh
curl -I https://registry.example.com/v2/
docker ps | grep registry
ksail cluster init --local-registry '${REG_USER}:${REG_TOKEN}@registry.example.com/my-org/my-repo'
```

Common authentication error messages:

  • registry requires authentication — missing or incorrect --local-registry credentials
  • registry access denied — credentials lack write permission
  • registry is unreachable — DNS failure, firewall, or registry down
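The `curl -I` status line maps directly onto these errors. A sketch of that triage, assuming a sample 401 response rather than a live registry:

```sh
# Sample HTTP status line as returned by `curl -I https://…/v2/`.
status='HTTP/1.1 401 Unauthorized'

# Classify the response the same way the error list above does.
case "$status" in
  *' 401 '*) msg='registry requires authentication' ;;
  *' 403 '*) msg='registry access denied' ;;
  *)         msg='check connectivity and registry logs' ;;
esac
echo "$msg"
```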

Registry containers have a built-in health check (polls /v2/ every 10 s, marks unhealthy after 3 consecutive failures). To diagnose mirror errors:

```sh
docker ps --filter label=io.ksail.registry --format 'table {{.Names}}\t{{.Status}}'
docker inspect --format '{{json .State.Health}}' <container-name>
```
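The health JSON can be reduced to just the status for scripting. An offline sketch that parses a sample of the `docker inspect` output with sed instead of querying a live container:

```sh
# Sample `.State.Health` JSON as emitted by `docker inspect`.
health='{"Status":"unhealthy","FailingStreak":3,"Log":[]}'

# Extract the value of the "Status" key.
status=$(printf '%s' "$health" | sed -n 's/.*"Status":"\([^"]*\)".*/\1/p')
echo "$status"
```

An unhealthy status after 3 failing streaks matches the health-check policy described above.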

Flux CRDs can take 7–10 minutes on resource-constrained systems; KSail allows up to 12 minutes. If timeouts persist, check resources (docker stats) and ensure 4 GB+ RAM.

```sh
ksail workload get pods -n flux-system
kubectl get crd <crd-name> -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
```
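Polling the Established condition until it flips to True is a plain retry loop. A generic sketch, with the real kubectl query stubbed by `echo True` so it runs offline (the 10-second sleep is an illustrative choice, not a KSail setting):

```sh
# Run "$@" up to $1 times, succeeding once it prints "True".
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    [ "$("$@")" = "True" ] && return 0
    i=$((i + 1))
    sleep 0   # real use: sleep 10
  done
  return 1
}

# Stub check; in practice this would be the kubectl jsonpath query above.
wait_for 3 echo True && echo "CRD established"
```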

Flux/ArgoCD CrashLoopBackOff After Component Installation


Infrastructure components (MetalLB, Kyverno, cert-manager) can temporarily disrupt API server connectivity while registering webhooks/CRDs, causing CrashLoopBackOff with dial tcp 10.96.0.1:443: i/o timeout errors. CNI components (e.g. Cilium) can also cause this if their eBPF dataplane hasn’t finished programming pod-to-service routing when GitOps engines start. KSail performs a three-step cluster stability check before installing GitOps engines:

  • 5 consecutive successful API server health checks
  • all kube-system DaemonSets ready
  • a short-lived busybox pod confirms TCP connectivity to the API server ClusterIP

If you see cluster not stable after infrastructure installation or in-cluster API connectivity check failed, check resources and optionally recreate with fewer components:

```sh
ksail workload get nodes
ksail workload get pods -A | grep -v Running
ksail cluster delete && ksail cluster create
```
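The `grep -v Running` filter above can be made more readable with awk. An offline sketch over sample `kubectl get pods -A` output (pod names are made up):

```sh
# Sample multi-namespace pod listing; column 4 is STATUS.
pods='NAMESPACE     NAME          READY   STATUS             RESTARTS
kube-system   coredns-abc   1/1     Running            0
flux-system   source-ctrl   0/1     CrashLoopBackOff   7'

# Skip the header row and report any pod whose status is not Running.
printf '%s\n' "$pods" | awk 'NR > 1 && $4 != "Running" {print $1 "/" $2 ": " $4}'
```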

If changes don’t appear after ksail workload reconcile, check status and logs:

```sh
ksail workload get pods -n flux-system   # Flux
ksail workload get pods -n argocd        # ArgoCD
ksail workload logs -n flux-system deployment/source-controller
ksail workload reconcile --timeout=5m
```
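Controller logs are JSON lines, so error entries can be isolated with grep. An offline sketch over sample log lines (in practice the input would come from the logs command above):

```sh
# Sample structured log output from a Flux controller.
logs='{"level":"info","msg":"stored artifact"}
{"level":"error","msg":"failed to checkout: authentication required"}'

# Keep only error-level entries.
printf '%s\n' "$logs" | grep '"level":"error"'
```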

KSail retries transient Helm registry errors automatically (5 attempts, exponential backoff). For persistent failures, check resources with docker stats and curl -I https://ghcr.io, then recreate: ksail cluster delete && ksail cluster create. On resource-constrained systems, increase Docker limits, skip optional components, or use K3s.
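The retry policy described above (5 attempts with exponential backoff) follows the standard doubling schedule. A sketch that prints the delays instead of sleeping; the 1-second base delay is an assumption, not a documented KSail value:

```sh
# Exponential backoff schedule: delay doubles after each attempt.
delay=1
for attempt in 1 2 3 4 5; do
  echo "attempt $attempt: wait ${delay}s before retrying"
  delay=$((delay * 2))
done
```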

If ksail.yaml is invalid, validate it against the schema or re-initialize: ksail cluster init --name my-cluster --distribution Vanilla

Ensure environment variables are set before running KSail. Verify with echo $MY_TOKEN before using ${MY_TOKEN} in configuration.
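A fail-fast guard can catch unset variables before KSail ever runs. A minimal sketch; `require_var` and `MY_TOKEN` are illustrative names, not KSail features:

```sh
# Print an error and fail if the named variable is unset or empty.
require_var() {
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "$1 is not set" >&2
    return 1
  fi
  echo "$1 is set"
}

# Succeeds because MY_TOKEN is provided for this invocation.
MY_TOKEN=example-token require_var MY_TOKEN
```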

If kubectl get svc shows <pending> for EXTERNAL-IP, verify LoadBalancer is enabled in ksail.yaml (reinitialize with --load-balancer Enabled if not) and check the controller for your distribution:

  • Vanilla: docker ps | grep ksail-cloud-provider-kind
  • Talos: kubectl get pods -n metallb-system
  • Hetzner: kubectl get pods -n kube-system | grep hcloud
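Detecting the pending state can be scripted. An offline sketch over sample `kubectl get svc` output (`my-app` is a placeholder service name):

```sh
# Sample service listing; column 4 is EXTERNAL-IP.
svc='NAME     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
my-app   LoadBalancer   10.96.12.34   <pending>     80:30080/TCP'

# Skip the header and flag any service still waiting for an address.
printf '%s\n' "$svc" | awk 'NR > 1 && $4 == "<pending>" {print $1 " has no external IP yet"}'
```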

If connection fails despite an external IP, ensure the application listens on 0.0.0.0 (not 127.0.0.1). Debug with kubectl logs -l app=my-app, kubectl describe svc my-app, and kubectl exec -it <pod-name> -- netstat -tlnp to check listening ports.
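The loopback-bind problem shows up directly in the netstat output. A sketch that classifies a sample `netstat -tlnp` line instead of running it in a live pod:

```sh
# Sample listener line; a 127.0.0.1 bind is unreachable from outside the pod.
line='tcp  0  0  127.0.0.1:8080  0.0.0.0:*  LISTEN  1/app'

case "$line" in
  *'127.0.0.1:'*) verdict='loopback only: rebind to 0.0.0.0' ;;
  *)              verdict='listening on all interfaces' ;;
esac
echo "$verdict"
```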

If new LoadBalancer services remain pending after several successful allocations, the MetalLB IP pool is exhausted. See the LoadBalancer Configuration Guide to expand the address range.

If pods are stuck in ContainerCreating with CNI errors, check the CNI pods with ksail workload get pods -n kube-system -l k8s-app=cilium (or calico-node). If they have failed, re-initialize with a CNI and recreate: ksail cluster init --cni Cilium && ksail cluster create

KSail automatically retries transient VCluster startup failures (up to 5 attempts, 5-second delay), including exit status 22/EINVAL, D-Bus errors, network transients, and GHCR pull failures. Retrying vCluster create (attempt 2/5)... messages are expected — no action required.

If all retries fail, check Docker resource limits and D-Bus availability. See the VCluster Getting Started guide for details.

Wait a few seconds if kubectl get nodes returns connection errors immediately after creation — VCluster control planes need time to start. Verify the active context with kubectl config current-context and ksail workload get nodes.

  • HCLOUD_TOKEN not working: Verify read/write permissions (Hetzner Cloud Console → Security → API Tokens). Test with hcloud server list if installed.
  • Talos ISO not found: The default ISO ID may be outdated. Find the correct ID in Hetzner Cloud Console under Images → ISOs.

Check GitHub Issues and Discussions. When reporting issues, include KSail version, OS, Docker version, ksail.yaml, error messages, and reproduction steps.