LoadBalancer Configuration

LoadBalancer services expose applications to external traffic. KSail supports LoadBalancer services on every distribution, using a distribution-specific implementation for each platform.

LoadBalancer support varies by Kubernetes distribution and infrastructure provider:

| Distribution | Provider | Implementation | Default Behavior | Configuration Required |
| --- | --- | --- | --- | --- |
| Vanilla (Kind) | Docker | Cloud Provider KIND | Disabled | Yes |
| K3s (K3d) | Docker | ServiceLB (Klipper) | Enabled | No |
| Talos | Docker | MetalLB | Disabled | Yes |
| Talos | Hetzner | hcloud-cloud-controller-manager | Enabled | No |
| VCluster (Vind) | Docker | Delegated to host cluster | N/A | N/A |

K3s and Talos × Hetzner enable LoadBalancer by default; Vanilla and Talos × Docker require explicit enablement. VCluster delegates LoadBalancer handling to the host cluster, so spec.cluster.loadBalancer has no effect and never triggers a cluster update. Use the --load-balancer flag or spec.cluster.loadBalancer in ksail.yaml with one of the values Default, Enabled, or Disabled.

All platforms accept the same LoadBalancer service manifest:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: my-app

Implementation: Cloud Provider KIND

Vanilla clusters use the Cloud Provider KIND controller, which runs as an external Docker container and allocates LoadBalancer IPs from the Docker bridge network.

CLI:

ksail cluster init \
  --name my-cluster \
  --distribution Vanilla \
  --load-balancer Enabled

ksail.yaml:

apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: Vanilla
    loadBalancer: Enabled

How the pieces fit together (Mermaid):

graph TB
    subgraph "Docker Host"
        CPK["ksail-cloud-provider-kind container"]
        SOCK["/var/run/docker.sock"]
        NET["kind Docker network"]

        subgraph "Kind Cluster"
            CP["Control Plane"]
            W1["Worker 1"]
        end

        subgraph "Per-Service Containers"
            SVC1["cpk-default-my-app"]
            SVC2["cpk-default-nginx-lb"]
        end

        CPK -->|"watches LoadBalancer services"| CP
        CPK -->|"creates/removes"| SVC1
        CPK -->|"creates/removes"| SVC2
        CPK -.->|"mounts"| SOCK
        SVC1 -.->|"routes traffic"| W1
        SVC2 -.->|"routes traffic"| W1
        NET -.->|"IP allocation"| SVC1
        NET -.->|"IP allocation"| SVC2
    end

    USER["Host / curl"]
    USER -->|"http://172.18.0.x"| SVC1

Cloud Provider KIND runs as an external Docker container named ksail-cloud-provider-kind on the kind Docker network. It mounts the Docker socket (/var/run/docker.sock) to manage container lifecycles:

  1. Controller container — KSail creates a single ksail-cloud-provider-kind container with restart policy unless-stopped. This container watches all Kind clusters for type: LoadBalancer services.
  2. Per-service containers — For each LoadBalancer service, Cloud Provider KIND creates a dedicated container prefixed cpk- (e.g., cpk-default-my-app). These containers handle traffic routing from an external IP to the service’s pods.
  3. IP allocation — External IPs are allocated from the kind Docker bridge network subnet (typically 172.18.0.0/16), making them accessible from your host machine.
  4. Cleanup — When you run ksail cluster delete, KSail stops and removes the ksail-cloud-provider-kind container and all cpk-* containers.
# Verify the controller is running
docker ps --filter name=ksail-cloud-provider-kind
# CONTAINER ID   IMAGE                                     STATUS
# abc123         registry.k8s.io/cloud-provider-kind/...   Up 5 minutes

# Check the service's external IP
kubectl get svc my-app
# NAME     TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
# my-app   LoadBalancer   10.96.1.10   172.18.0.100   80:30123/TCP   10s

# Test connectivity from the host
curl http://172.18.0.100

Implementation: ServiceLB (Klipper LoadBalancer)

K3s includes ServiceLB by default, assigning the cluster node’s IP as the external IP and forwarding traffic via iptables. No configuration needed:

ksail cluster init --name my-cluster --distribution K3s
ksail cluster create

kubectl get svc my-app
# NAME     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
# my-app   LoadBalancer   10.43.123.45   192.168.1.100   80:30456/TCP   5s
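ServiceLB implements each LoadBalancer service as a set of svclb-* pods, one per node, which set up the iptables forwarding described above. To inspect them, you can filter by the label K3s's ServiceLB controller applies (label key as documented by K3s; my-app is the example service name):

```shell
# List the svclb pods backing the my-app service
kubectl get pods -n kube-system -l svccontroller.k3s.cattle.io/svcname=my-app
```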

To disable LoadBalancer support:

CLI:

ksail cluster init \
  --name my-cluster \
  --distribution K3s \
  --load-balancer Disabled

ksail.yaml:

spec:
  cluster:
    distribution: K3s
    loadBalancer: Disabled

Implementation: MetalLB

Talos on Docker uses MetalLB to provide LoadBalancer services. MetalLB operates in Layer 2 mode and allocates IPs from a pre-configured pool.

CLI:

ksail cluster init \
  --name my-cluster \
  --distribution Talos \
  --load-balancer Enabled

ksail.yaml:

apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: Talos
    provider: Docker
    loadBalancer: Enabled

KSail configures MetalLB with a default IP pool of 172.18.255.200–172.18.255.250 in Layer 2 (ARP/NDP) mode on the Docker bridge network, chosen to avoid conflicts with typical Docker allocations.
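The range is inclusive, so the default pool holds exactly 51 addresses (the figure the troubleshooting section below relies on):

```shell
# Inclusive range .200-.250 within the same /24 subnet
echo $((250 - 200 + 1))   # 51 assignable LoadBalancer IPs
```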

KSail installs MetalLB via Helm, configures an IPAddressPool and L2Advertisement automatically, and MetalLB assigns IPs from the pool via ARP on the Docker network.

kubectl get svc my-app
# NAME     TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
# my-app   LoadBalancer   10.96.123.200   172.18.255.200   80:31234/TCP   8s

To use a custom IP range, you’ll need to create custom MetalLB resources after cluster creation:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: custom-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.18.100.1-172.18.100.254
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: custom-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - custom-pool
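After applying these manifests, you can confirm MetalLB accepted them. The resource names match the example above; the file name is illustrative:

```shell
kubectl apply -f custom-pool.yaml   # assumes you saved the manifests under this name
kubectl get ipaddresspool,l2advertisement -n metallb-system
```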

Implementation: hcloud-cloud-controller-manager

Talos on Hetzner Cloud uses the Hetzner Cloud Controller Manager to provision real cloud load balancers.

LoadBalancer is enabled by default for Talos × Hetzner clusters. KSail automatically installs hcloud-ccm when loadBalancer is Default or Enabled.

CLI:

export HCLOUD_TOKEN=your-token-here
ksail cluster init \
  --name my-cluster \
  --distribution Talos \
  --provider Hetzner
ksail cluster create

ksail.yaml:

apiVersion: ksail.io/v1alpha1
kind: Cluster
spec:
  cluster:
    distribution: Talos
    provider: Hetzner

Prerequisite: Set HCLOUD_TOKEN to a Hetzner API token with read/write permissions for Load Balancers.

The Hetzner CCM provisions a real cloud load balancer with a public IP, typically within 30–60 seconds. Note that Hetzner bills for each load balancer it provisions.

# Watch for EXTERNAL-IP assignment (takes 30–60 seconds)
kubectl get svc my-app -w
# NAME     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
# my-app   LoadBalancer   10.32.45.100   135.181.10.50   80:32100/TCP   45s

You can customize Hetzner Load Balancer behavior using annotations:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    load-balancer.hetzner.cloud/location: nbg1
    load-balancer.hetzner.cloud/use-private-ip: "true"
    load-balancer.hetzner.cloud/health-check-interval: "15s"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: my-app

See the Hetzner CCM documentation for all available annotations.

To migrate an existing service, change type: NodePort to type: LoadBalancer, remove any nodePort fields, enable LoadBalancer support if needed (see above), then wait for an external IP: kubectl get svc my-app -w.

Before (NodePort):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080 # Static node port
  selector:
    app: my-app

After (LoadBalancer): Use the shared manifest above, accessible via http://<external-ip>:80.
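If you prefer not to edit the manifest, the same change can be made in place with kubectl patch; the controller then assigns an external IP just as it would for a newly created service:

```shell
# Switch the existing service's type in place
kubectl patch svc my-app -p '{"spec":{"type":"LoadBalancer"}}'
```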

Deploy nginx to verify LoadBalancer functionality:

kubectl create deployment nginx-test --image=nginx:1.25 --replicas=2

nginx-lb.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx-test

kubectl apply -f nginx-lb.yaml
EXTERNAL_IP=$(kubectl get svc nginx-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP
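The jsonpath lookup returns an empty string until the controller assigns an address, so on a freshly created service it can race the allocation. A short poll loop avoids curling too early (a sketch; the retry count and sleep interval are arbitrary):

```shell
# Poll for up to ~60 seconds until the LoadBalancer IP is populated
for i in $(seq 1 30); do
  EXTERNAL_IP=$(kubectl get svc nginx-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  [ -n "$EXTERNAL_IP" ] && break
  sleep 2
done
curl "http://$EXTERNAL_IP"
```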

Symptom: Service shows <pending> for EXTERNAL-IP:

kubectl get svc
# NAME     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
# my-app   LoadBalancer   10.96.1.50   <pending>     80:30123/TCP   5m

Diagnosis:

  1. Check whether LoadBalancer is enabled:

     grep -A 5 "loadBalancer" ksail.yaml

  2. Verify the controller is running:

     • Vanilla: docker ps | grep ksail-cloud-provider-kind
     • Talos: kubectl get pods -n metallb-system
     • Hetzner: kubectl get pods -n kube-system | grep hcloud

  3. Check controller logs:

     • Vanilla: docker logs ksail-cloud-provider-kind
     • Talos: kubectl logs -n metallb-system -l app.kubernetes.io/component=controller
     • Hetzner: kubectl logs -n kube-system -l app=hcloud-cloud-controller-manager

Common Fixes:

  • LoadBalancer disabled: Re-initialize cluster with --load-balancer Enabled
  • Cloud Provider KIND not running: Delete and recreate cluster
  • MetalLB IP pool exhausted: Check available IPs in the pool (default: 51 IPs)
  • Hetzner token missing: Ensure HCLOUD_TOKEN is set during cluster creation

Symptom: Service has external IP but connection fails:

curl http://172.18.255.200
# curl: (7) Failed to connect to 172.18.255.200 port 80: Connection refused

Diagnosis:

  1. kubectl get pods -l app=my-app — verify pods are running
  2. kubectl get endpoints my-app — check endpoints have pod IPs
  3. kubectl logs -l app=my-app — review pod logs
  4. kubectl run test --rm -i --restart=Never --image=curlimages/curl -- curl http://my-app.default.svc.cluster.local — test from within the cluster (--rm requires an attached session, hence -i)

Common Fixes:

  • Pods not ready: Wait for pods to reach Running state
  • Wrong target port: Verify targetPort matches container port
  • Network policy blocking traffic: Check for restrictive NetworkPolicies
  • Application not listening: Verify app listens on correct port and 0.0.0.0

Symptom: New LoadBalancer services remain pending after several successful allocations:

kubectl get svc
# NAME     TYPE           EXTERNAL-IP      PORT(S)
# app-1    LoadBalancer   172.18.255.200   80:30001/TCP
# ...
# app-52   LoadBalancer   <pending>        80:30052/TCP

Diagnosis:

Check IPAddressPool status:

kubectl get ipaddresspool -n metallb-system default-pool -o yaml

Fix:

Expand the pool range or create an additional IPAddressPool in metallb-system:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: expanded-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.18.255.200-172.18.255.254 # Expanded from .250 to .254
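Apply the change and re-check the previously pending services; MetalLB reconciles pool updates without a restart. If you add a new pool rather than expanding the existing one, make sure an L2Advertisement covers it (see the custom pool example above). The file name below is illustrative:

```shell
kubectl apply -f expanded-pool.yaml
kubectl get svc app-52 -w   # the pending service from the example above
```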