Part 04: Real K8s vs Toy K8s — kubeadm, k3s, k0s, the Rest
"There are dozens of ways to install Kubernetes. Most of them are wrong for dev. Two are right."
Why
Part 01 made the case that toy k8s does not cut it. This part is the matrix: which distributions are real enough to use as dev k8s, and which are not. The answer matters because the choice of distribution sets the upper bound on the realism of your dev environment, and getting it wrong is the difference between "I tested this on the same flavour of Kubernetes I run in production" and "I tested this on a simulation that lied to me about the things I most needed to know".
The thesis: two distributions are first-class for HomeLab K8s — kubeadm (the canonical real Kubernetes) and k3s (the lightweight distribution that is still real Kubernetes). k0s is a stretch goal, microk8s and Talos are plugin opportunities, and everything else is rejected as a simulation.
The matrix
| Distribution | Real Kubernetes? | RAM (one node) | Multi-node | Real CNI | Real CSI | Real upgrades | HomeLab support |
|---|---|---|---|---|---|---|---|
| kubeadm | Yes | ~1.5 GB | Yes | Yes (any) | Yes (any) | Yes (kubeadm upgrade) | First-class |
| k3s | Yes | ~700 MB | Yes | Yes (Flannel default; any) | Yes (local-path default; any) | Yes (binary swap) | First-class |
| k0s | Yes | ~800 MB | Yes | Yes (any) | Yes (any) | Yes (k0s install) | Stretch goal (plugin) |
| microk8s | Yes | ~1 GB | Yes | Yes (any) | Yes (any) | Yes (snap channels) | Plugin opportunity |
| Talos | Yes (immutable) | ~600 MB | Yes | Yes (Cilium-friendly) | Yes (any) | Yes (image swap) | Plugin opportunity |
| kind | No (simulation) | ~500 MB | Sort of | No (kindnet) | No (hostPath) | No (recreate) | Rejected |
| minikube | No (simulation) | ~2 GB | Sort of | Limited | Limited | Limited | Rejected |
| k3d | No (k3s in docker) | ~700 MB | Sort of | Limited | Limited | No (recreate) | Rejected |
| Docker Desktop K8s | No (single-node toy) | ~2 GB | No | No | No | No | Rejected |
| Rancher Desktop | Borderline (k3s in VM) | ~2 GB | No | Limited | Limited | Limited | Rejected |
The line between "real" and "simulation" is sharp: a distribution is real if it runs on real Linux nodes (VMs or bare metal) with the standard Kubernetes binaries (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, real etcd) or their direct equivalents (k3s server, k0s controller), and produces a cluster that behaves indistinguishably from production for the things this series cares about: CNI choice, CSI choice, ingress, network policies, multi-node scheduling, real upgrades.
A distribution is a simulation if it fakes any of those — kindnet instead of real CNI, hostPath instead of real CSI, container-as-node instead of real Linux node, etc.
Why kubeadm is first-class
kubeadm is the official Kubernetes installer. It runs on any Linux distribution with containerd (or any other CRI runtime) and the kubernetes apt/rpm packages. The cluster it produces is as close to "what you would run in production" as it gets without using a managed service. Every kubeadm cluster is the same cluster, with the same components, the same upgrade flow, the same etcd, the same API server flags. If you can run a workload on a kubeadm cluster, you can run it on any kubeadm-derived cluster — which means most of GKE-mode, EKS-mode, AKS-mode, OpenShift, k0s, Talos, and many cloud-provider distributions.
Three things make kubeadm the canonical choice:
- The upgrade flow is real. `kubeadm upgrade plan` tells you what's available, `kubeadm upgrade apply` walks the steps, and the failure modes are the same ones you will hit in production. This is the body of operational knowledge you want to learn, and you can only learn it on a real kubeadm cluster.
- HA is officially supported. kubeadm has a documented HA reference architecture with stacked etcd, external etcd, and load-balanced API server VIPs. Three control planes, three workers, kube-vip or HAProxy in front. We use this in Part 24.
- Component flags are exposed. Every API server flag, scheduler flag, and controller manager flag is configurable via the `ClusterConfiguration` and `KubeletConfiguration` files. You can replicate any production cluster's configuration verbatim — same admission controllers, same audit policy, same feature gates.
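To make the last point concrete, here is a sketch of the two files involved. The version string, endpoint, and flag values are illustrative placeholders, not recommendations; pin them to whatever your production cluster actually runs.

```yaml
# kubeadm-config.yaml -- consumed by `kubeadm init --config kubeadm-config.yaml`
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0               # illustrative; match production
controlPlaneEndpoint: "10.0.0.10:6443"   # VIP fronting the HA control plane
apiServer:
  extraArgs:
    enable-admission-plugins: "NodeRestriction,PodSecurity"
    audit-log-path: "/var/log/kubernetes/audit.log"
  extraVolumes:
    - name: audit-log
      hostPath: /var/log/kubernetes
      mountPath: /var/log/kubernetes
      pathType: DirectoryOrCreate
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true                  # example of a kubelet-level knob
```

Anything your production cluster sets — admission controllers, audit policy, feature gates — goes in the same place, which is what makes verbatim replication possible.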
The downside: kubeadm is heavy. Each control plane node uses ~1.5 GB of RAM in steady state, ~3 GB in burst. The bootstrap takes ~5 minutes for the first node and ~2 minutes for each additional one. The kubeadm init command has a long argument list that the K8s.Dsl plugin will encode in typed form (see Part 14).
Why k3s is also first-class
k3s is a lightweight Kubernetes distribution from Rancher (now SUSE). Single binary, ~50 MB, packs the API server, the controller manager, the scheduler, and a built-in etcd or SQLite into one process. Defaults to Flannel as CNI and local-path as CSI. Multi-node out of the box. Real Kubernetes API.
The reasons k3s is first-class for HomeLab K8s:
- It is real Kubernetes. k3s passes the Cloud Native Computing Foundation conformance tests. The API is identical to upstream Kubernetes. Your manifests, your operators, your Helm charts, your kubectl muscle memory all work.
- It fits in less RAM. A k3s control plane uses ~700 MB instead of ~1.5 GB. A k3s `k8s-ha` topology fits in ~30 GB instead of ~48 GB. For a freelancer with three clients on a 64 GB box, k3s is what makes the math work.
- The upgrade flow is dead simple. `curl -sfL https://get.k3s.io | sh -` reinstalls the binary. The system-upgrade-controller automates rolling upgrades across nodes. None of the kubeadm complexity.
- It is what most edge / IoT k8s deployments use. If your production target is k3s (and many serious teams have moved to it), then your dev cluster should be k3s for fidelity.
The downside: k3s makes some opinionated choices that production-equivalent kubeadm clusters do not. Traefik is bundled by default (you can disable it with --disable=traefik). servicelb is bundled (--disable=servicelb). The metrics-server is bundled. None of these are wrong choices, but they are choices, and if your production cluster does not use them, you have to disable them on dev to match.
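Those disables do not have to live on the command line; k3s also reads them from a config file, which is easier to keep in version control. A minimal sketch (the disable list is illustrative; match it to whatever your production cluster does not use):

```yaml
# /etc/rancher/k3s/config.yaml -- read by `k3s server` at startup
disable:
  - traefik        # production uses a different ingress controller
  - servicelb      # production uses MetalLB / a cloud load balancer
  - metrics-server
flannel-backend: vxlan   # or "none" to bring your own CNI (e.g. Cilium)
```

Keys in this file mirror the `k3s server` flags one-to-one, so `--disable=traefik` on the command line and the `disable:` list above are equivalent.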
The K8s.Dsl plugin's K3sClusterDistribution configures these options via typed flags. The user picks them once in config-homelab.yaml, and the cluster comes up with exactly the same set of opinionated bundles disabled / enabled as production.
Why k0s is a stretch goal
k0s is a third "real Kubernetes" distribution from Mirantis. Like k3s, it is a single binary. Like kubeadm, it does not bundle opinionated extras. It is the closest thing to "kubeadm in single-binary form".
We treat k0s as a stretch goal for HomeLab K8s v1 because:
- The k0s installer is well-designed but its multi-node flow is less battle-tested than kubeadm or k3s
- The community is smaller, which means fewer Stack Overflow answers when something breaks
- The configuration surface is different from kubeadm, so a fresh `IClusterDistribution` implementation has to be written
When K8s.Dsl ships, k0s is one new [Injectable] IClusterDistribution away from being supported. Until then, it is mentioned in the docs and the user can pick kubeadm or k3s.
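For orientation, k0s centralises its settings in a single `k0s.yaml`, so a future `K0sClusterDistribution` would essentially be a typed wrapper over fields like these. A sketch, with illustrative values:

```yaml
# k0s.yaml -- generated with `k0s config create`, edited, then applied via
# `k0s install controller --config /etc/k0s/k0s.yaml`
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: calico   # kuberouter is the default; "custom" to bring your own CNI
  storage:
    type: etcd         # real etcd, as in kubeadm clusters
```

No opinionated extras to disable, which is exactly the "kubeadm in single-binary form" character described above.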
Why kind, minikube, k3d are rejected
These are all rejected for the same reason: they fake at least one of the things HomeLab K8s wants to be real. Specifically:
kind
- kindnet is a custom CNI that does not enforce NetworkPolicies. If you want to test policies, you have to install Calico/Cilium inside kind, which works but is fragile and means creating the cluster with kindnet disabled from the start.
- The hostPath provisioner is the default storage class. It does not support `ReadWriteMany`, snapshots, expansion, or topology.
- kind clusters are docker containers, not VMs. The "node" abstraction leaks: you cannot ssh into a kind node, you cannot reboot one, you cannot install kernel modules on one, and you cannot test what happens when one runs out of memory under real Linux memory pressure.
- No real upgrade flow — you delete the cluster and recreate it.
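For the record, the Calico/Cilium workaround mentioned above happens at creation time, via kind's own config file. A sketch:

```yaml
# kind-config.yaml -- `kind create cluster --config kind-config.yaml`
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # skip kindnet so Calico/Cilium can be installed
nodes:
  - role: control-plane
  - role: worker
```

Even with kindnet disabled and a real CNI installed, the other objections still hold: the "nodes" are containers, the storage is hostPath, and there is no upgrade path.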
minikube
- Comes with a long list of "drivers" (docker, hyperkit, virtualbox, kvm2, none). The driver determines what kind of "node" you get. Most drivers run a single VM with all components. Some drivers run as containers. The semantics differ per driver.
- The `minikube tunnel` command is required for `LoadBalancer` services to work; it needs root and breaks on certain platforms.
- Multi-node mode exists (`--nodes=N`), but the nodes are still on the same docker network and the failure modes are wrong.
k3d
- k3d runs k3s nodes as docker containers. It inherits k3s's "real Kubernetes API" property (good) and adds the "container-as-node" anti-property (bad).
- Same limitations as kind: no real ssh, no real reboot, no real kernel modules.
- The Traefik bundled by default exposes itself on docker port mappings, which collide if you run multiple k3d clusters on the same host without careful port allocation.
Docker Desktop / Rancher Desktop
- Single-node toy clusters bundled with the desktop tool. Adequate for `kubectl get pods` tutorials and nothing else.
- No multi-node, no real CNI choice, no upgrade rehearsal. Useful for demos, useless for real work.
These tools are not bad. They are bad for the use case this series cares about. If your use case is "I want a place to run kubectl tutorials", kind is great. If your use case is "I want to develop a real workload that will run on real production Kubernetes, with the same CNI, the same CSI, the same Ingress, the same upgrade story", these tools fail you.
The pluggability angle
K8s.Dsl's IClusterDistribution interface lets a third party ship a distribution as a plugin without forking K8s.Dsl. This means:
- `FrenchExDev.HomeLab.Plugin.K8sDsl.K0s` could ship a `K0sClusterDistribution` as a plugin
- `FrenchExDev.HomeLab.Plugin.K8sDsl.Talos` could ship a `TalosClusterDistribution` for the immutable-image crowd
- `FrenchExDev.HomeLab.Plugin.K8sDsl.Microk8s` could ship a `Microk8sClusterDistribution` for Ubuntu shops
Each plugin is one new [Injectable] class implementing IClusterDistribution, plus a Packer overlay for the node image, plus possibly a verb extension. Same pattern as the rest of the HomeLab plugin system. The K8s.Dsl plugin core ships Kubeadm and K3s; everything else is community.
What this gives you that toy k8s doesn't
The matrix above is the answer. Toy k8s gives you "kubectl works against something". Real k8s — kubeadm or k3s — gives you "kubectl works against the same thing you will run in production, with the same CNI, the same CSI, the same upgrade flow, the same multi-node failure modes, the same network policies, the same persistent volumes, the same ingress, the same certificates."
The cost is some RAM (16-48 GB depending on topology) and some setup time (~30 minutes for the first cluster). Both costs are bounded, and the bound is small. The benefit — every production-shaped bug caught in dev instead of in production — is much larger, and its ceiling is the cost of your largest production incident.