Talos

Talos Linux is a minimal, immutable operating system designed specifically for running Kubernetes. It provides enhanced security through API-driven configuration with no shell access, automatic updates, and a reduced attack surface. This guide shows you how to use Talos with KSail for local development (Docker provider), cloud deployments (Hetzner Cloud provider), or managed clusters through Sidero Omni (Omni provider).

Talos is ideal for security-focused production workloads, GitOps workflows, and multi-cloud deployments requiring immutable infrastructure. It’s not suitable for quick prototyping or scenarios requiring shell access—use Vanilla or K3s instead.

Create a Talos cluster on your local machine using Docker containers as nodes.

Docker installed and running (see Docker Provider).

ksail cluster init \
  --name talos-dev \
  --distribution Talos \
  --provider Docker \
  --control-planes 1 \
  --workers 2
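
The init command scaffolds a ksail.yaml describing the cluster. A sketch of the relevant fields based on the flags above — the `name` and `provider` fields and the exact layout are assumptions and may differ between KSail versions:

```yaml
# Sketch of the ksail.yaml produced by the init flags above.
# 'name' and 'provider' field placement is assumed; verify against
# the file KSail actually generates.
spec:
  cluster:
    name: talos-dev
    distribution: Talos
    provider: Docker
    talos:
      controlPlanes: 1
      workers: 2
```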
ksail cluster create
ksail cluster info
kubectl get nodes
kubectl get pods -A
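
Before deploying workloads, it can help to block until every node reports Ready. A small sketch — the 5-minute timeout is an arbitrary choice:

```shell
# Wait until all nodes report Ready; the timeout value is illustrative.
wait_ready() {
  kubectl wait --for=condition=Ready nodes --all --timeout=300s
}
# Usage: wait_ready
```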
ksail cluster delete

Talos excels at immutable infrastructure, security-focused production deployments, and multi-environment consistency—the same distribution works from local Docker development through Hetzner Cloud production without “works on my machine” issues. See Use Cases for practical workflow examples.

Common Talos API operations:

talosctl -n <node-ip> get machineconfig   # View configuration
talosctl -n <node-ip> version             # Check Talos version
talosctl -n <node-ip> logs kubelet        # Service logs (logs takes a service name, e.g. kubelet, etcd)
talosctl -n <node-ip> upgrade --image ghcr.io/siderolabs/installer:v1.6.0
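
To run the same query against every node, the node IPs reported by kubectl can be fed into talosctl. A hypothetical helper (`talos_all` is a local shell function, not a KSail or Talos command; it assumes kubectl and talosctl point at the same cluster):

```shell
# Run one talosctl subcommand against every node in the cluster.
# Node internal IPs are read from the Kubernetes API.
talos_all() {
  local ip
  for ip in $(kubectl get nodes \
      -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
    echo "== ${ip} =="
    talosctl -n "${ip}" "$@"
  done
}
# Usage: talos_all version
```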

Talos node architecture varies by provider. See the Docker Provider, Hetzner Provider, and Omni Provider pages for details.

See the Support Matrix for a full breakdown of feature and component compatibility across all distributions.

Check Docker status (docker ps, docker network ls), verify HCLOUD_TOKEN for Hetzner, or try cleaning up and retrying with ksail cluster delete && ksail cluster create.

Check CNI pods are running (kubectl get pods -n kube-system and look for your CNI pods, e.g. cilium- or calico-), verify Talos health (talosctl -n <node-ip> health), or reinstall CNI with ksail cluster update.
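
A quick filter for CNI pods that are not yet Running — the `cilium`/`calico` name prefixes are assumptions, so adjust the pattern for your CNI:

```shell
# List CNI pods in kube-system that are not in the Running phase.
# Pod name prefixes are assumed; change the grep pattern for other CNIs.
cni_not_running() {
  kubectl get pods -n kube-system --no-headers \
    | grep -E '^(cilium|calico)' \
    | awk '$3 != "Running"'
}
# Usage: cni_not_running   # no output means all matched pods are Running
```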

Verify MetalLB is enabled in ksail.yaml (loadBalancer: Enabled), check MetalLB pods (kubectl get pods -n metallb-system), and verify IP pool exists (kubectl get ipaddresspools -n metallb-system). On macOS, Docker runs in a Linux VM so MetalLB virtual IPs are not routable from the host—use extraPortMappings instead (see Port Mappings (Docker Provider)).
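
If the pool is missing, a minimal IPAddressPool looks like the following sketch; the address range is illustrative and must fall inside the subnet of the Docker network your Talos nodes are attached to:

```yaml
# Minimal MetalLB IPAddressPool. The address range below is an example
# and must match the Docker network the cluster nodes use.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 172.18.255.200-172.18.255.250
```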

Check ~/.talos/config exists, verify node IPs with kubectl get nodes -o wide, and use explicit node IP with talosctl -n <node-ip> --talosconfig ~/.talos/config get members.
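
A quick sanity check for the config file — talosctl also honours the `TALOSCONFIG` environment variable, which takes precedence over the default path:

```shell
# Locate the talosconfig that talosctl will use.
config="${TALOSCONFIG:-$HOME/.talos/config}"
if [ -f "$config" ]; then
  echo "using $config"
else
  echo "no talosconfig found at $config; re-run 'ksail cluster create'" >&2
fi
```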

Adjust the control-plane and worker node counts in your existing ksail.yaml (requires distribution: Talos):

# Partial snippet — add to your existing ksail.yaml
spec:
  cluster:
    distribution: Talos
    talos:
      controlPlanes: 3 # HA setup
      workers: 5
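
After editing the node counts, the change needs to be reconciled against the running cluster; presumably `ksail cluster update` (the same command the troubleshooting notes use for CNI reinstalls) is the mechanism, but verify against your KSail version:

```shell
# Sketch: reconcile the edited ksail.yaml against the running cluster.
# Assumes 'ksail cluster update' applies declarative spec changes.
scale_cluster() {
  ksail cluster update && kubectl get nodes
}
# Usage: scale_cluster   # expect 3 control-plane and 5 worker nodes
```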

On macOS, Docker runs in a Linux VM, so MetalLB virtual IPs are not directly accessible from the host. For Talos clusters using the Docker provider only, use extraPortMappings in ksail.yaml to expose container ports on the host (Hetzner and Omni Talos clusters do not use Docker port mappings):

# Partial snippet — add to your existing ksail.yaml
spec:
  cluster:
    distribution: Talos
    talos:
      extraPortMappings:
        - containerPort: 80
          hostPort: 8080
          protocol: TCP
        - containerPort: 443
          hostPort: 8443
          protocol: TCP

Access services at http://localhost:<hostPort> (for the example above, http://localhost:8080). Ports are applied to the first control-plane node only—in multi-control-plane clusters, additional control-plane nodes do not receive port mappings to avoid Docker host port collisions. See the Declarative Configuration reference for the full field specification.
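
A quick smoke test for a mapped port from the host — the root path and the presence of an HTTP service behind containerPort 80 are assumptions:

```shell
# Probe a host port exposed via extraPortMappings.
# Assumes an HTTP service is reachable behind the mapping.
check_port() {
  curl -fsS "http://localhost:${1:-8080}/" >/dev/null \
    && echo "port ${1:-8080} responded"
}
# Usage: check_port 8080
```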

For cloud volumes, use the hcloud-volumes storage class installed automatically by the Hetzner Provider.

Upgrade without cluster recreation: talosctl -n <node-ip> upgrade --image ghcr.io/siderolabs/installer:v1.6.0. See Talos upgrade docs for coordination details.
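
Upgrades should be rolled one node at a time. A sketch that upgrades a node and then checks its health before you move on (`upgrade_node` is a local helper, not a talosctl subcommand; the default image is the one from the command above):

```shell
# Upgrade one Talos node and verify health before touching the next node.
upgrade_node() {
  local ip="$1" image="${2:-ghcr.io/siderolabs/installer:v1.6.0}"
  talosctl -n "$ip" upgrade --image "$image" \
    && talosctl -n "$ip" health
}
# Usage: upgrade_node 10.5.0.2
```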

Talos 1.13 introduced ImageVerificationConfig, which enforces machine-wide container image signature verification before any image is pulled. KSail can scaffold a starter configuration:

ksail cluster init \
  --distribution Talos \
  --image-verification Enabled

This generates talos/cluster/image-verification.yaml with a default skip-all rule and commented examples. The file is a valid Talos config document that KSail applies alongside your MachineConfig during cluster creation.

# Talos ImageVerificationConfig (Talos 1.13+)
# This document enables machine-wide container image signature verification.
# Rules are evaluated in order; the first matching rule applies.
# See: https://www.talos.dev/v1.13/talos-guides/configuration/image-verification/
apiVersion: v1alpha1
kind: ImageVerificationConfig
rules:
  # Default: skip verification for all images.
  # Remove or modify this rule and add specific verification rules below.
  - image: "*"
    skip: true
  # Example: verify registry.k8s.io images using keyless (Cosign/OIDC) verification
  # - image: "registry.k8s.io/*"
  #   keyless:
  #     issuer: "https://accounts.google.com"
  #     subject: "krel-trust@k8s-releng-prod.iam.gserviceaccount.com"
  # Example: verify images from a private registry using a public key
  # - image: "my-registry.example.com/*"
  #   publicKey:
  #     certificate: |
  #       -----BEGIN CERTIFICATE-----
  #       <your PEM-encoded certificate here>
  #       -----END CERTIFICATE-----
  # Example: deny all images from an untrusted registry
  # - image: "untrusted-registry.example.com/*"
  #   deny: true

Edit the file to enforce your signature policy, then run ksail cluster create. Rules are evaluated in order; the first matching rule applies.

See the Talos image verification docs for the full rule schema.

Enable Flux or ArgoCD for declarative workload management—see GitOps Workflows.