
Part 22: k8s-single — One VM Dev Cluster

"The smallest cluster that is still real Kubernetes. Sixteen gigabytes. Four minutes to boot."


Why

k8s-single is the daily driver for solo development. One VM, all-in-one, just enough to test workloads against a real Kubernetes API. Not enough to test multi-node failure modes (you have one node — when it goes down, everything goes down) but enough for nine out of ten dev tasks: kubectl apply, watch a pod schedule, hit it from your browser, iterate.

The thesis: k8s-single is one VM with the control plane taint removed, Flannel CNI, local-path CSI, and the standard ingress + cert-manager + external-dns + metrics-server stack. Everything from Act III applies; the only thing that changes is the topology resolver returns one machine instead of four.


The configuration

# config-homelab.yaml
name: dev
topology: single        # the HomeLab topology — one VM
k8s:
  distribution: k3s     # k3s is faster to install for one node
  topology: k8s-single  # the K8s topology
  version: "v1.31.4+k3s1"
  cni: flannel          # cheapest, no policy enforcement
  csi: local-path
  ingress: nginx
  k3s:
    disable_traefik: true
    disable_servicelb: true
vos:
  cpus: 4
  memory: 16384
  subnet: "192.168.62"
acme:
  name: dev
  tld: lab

Two topology fields: topology: single (the HomeLab Vagrant topology — one VM) and k8s.topology: k8s-single (the K8s topology, also one VM since they happen to be the same here). The two could differ in theory — the HomeLab Vagrant layout could have multiple VMs with k8s on only some — but for k8s-single they match.
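To make the distinction concrete, here is a hypothetical sketch of a config where the two fields diverge — a multi-VM HomeLab layout running Kubernetes on only one machine. The `topology: multi` value and the `# k8s runs on one VM` arrangement are illustrative assumptions, not fields confirmed by the schema above:

```yaml
# config-homelab.yaml (hypothetical — the two topology fields diverging)
name: dev
topology: multi         # HomeLab Vagrant topology: several VMs (assumed value)
k8s:
  topology: k8s-single  # K8s topology: the cluster itself is still one node
  # ...remaining k8s fields as in the example above
```

For k8s-single proper, both fields describe the same single machine, so the distinction never surfaces.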


The boot sequence

$ homelab init --name dev
$ cd dev
$ vim config-homelab.yaml          # set topology, k8s.distribution, etc.
$ homelab packer init              # generate the packer bundle
$ homelab packer build             # build alpine-3.21-k8snode-k3s box (~12 min on cold cache)
$ homelab box add --local
$ homelab vos init                 # generate Vagrantfile + config-vos.yaml
$ homelab vos up                   # boot the VM (~90 sec)
$ homelab dns add cluster.dev.lab 192.168.62.10
$ homelab tls init --provider native
$ homelab tls install
$ homelab tls trust
$ homelab k8s create               # bootstrap the cluster (~3 min for k3s)
$ homelab k8s apply                # install CNI, CSI, ingress, cert-manager, ...
$ homelab k8s use-context dev
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
dev-main   Ready    control-plane,master   3m    v1.31.4+k3s1

End-to-end: about 20 minutes the first time (most of it is the Packer image build), about 6 minutes on subsequent recreations (the box is cached).


Removing the control-plane taint

By default, kubeadm and k3s taint control plane nodes with node-role.kubernetes.io/control-plane:NoSchedule. This prevents workloads from scheduling on the control plane. For multi-node clusters this is the right default; for k8s-single it is wrong, because the only node is the control plane.

The K8s.Dsl topology resolver from Part 08 marks the single-node VM with Role = "k8s-control-plane-and-worker". The cluster bootstrap saga reads this and runs:

kubectl taint nodes dev-main node-role.kubernetes.io/control-plane:NoSchedule-

…immediately after bootstrap. Workloads can now schedule on the single node.
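To confirm the taint is actually gone, a generic kubectl check works (these are standard kubectl commands, not part of the homelab CLI):

```shell
# Print the node's taints; expect empty output once the saga has run.
kubectl get node dev-main -o jsonpath='{.spec.taints}'

# Equivalent human-readable view; expect "Taints: <none>".
kubectl describe node dev-main | grep -i taints
```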


What runs in 16 GB

Steady state of a k8s-single cluster with the standard stack:

Component                                                        RAM
---------------------------------------------------------------  --------
OS + containerd + kubelet + kube-proxy                           ~400 MB
k3s server (apiserver + etcd + scheduler + controller-manager)   ~700 MB
Flannel CNI agent                                                ~30 MB
local-path-provisioner                                           ~25 MB
CoreDNS                                                          ~30 MB
metrics-server                                                   ~25 MB
kube-state-metrics                                               ~30 MB
nginx-ingress controller                                         ~80 MB
cert-manager (+ webhook + cainjector)                            ~150 MB
external-dns                                                     ~20 MB
---------------------------------------------------------------  --------
System overhead (total of the above)                             ~1.5 GB
Workloads (remaining)                                            ~14.5 GB
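The overhead figure is just the component column summed against the 16 GB from `vos.memory`. A quick sanity check (figures copied from the table above):

```python
# Approximate steady-state RAM per component, in MB (from the table above).
components_mb = {
    "OS + containerd + kubelet + kube-proxy": 400,
    "k3s server": 700,
    "Flannel CNI agent": 30,
    "local-path-provisioner": 25,
    "CoreDNS": 30,
    "metrics-server": 25,
    "kube-state-metrics": 30,
    "nginx-ingress controller": 80,
    "cert-manager (+ webhook + cainjector)": 150,
    "external-dns": 20,
}

total_vm_mb = 16384                        # vos.memory from the config
overhead_mb = sum(components_mb.values())  # 1490 MB, i.e. ~1.5 GB
workload_gb = (total_vm_mb - overhead_mb) / 1024

print(f"overhead: ~{overhead_mb / 1024:.1f} GB, workloads: ~{workload_gb:.1f} GB")
# → overhead: ~1.5 GB, workloads: ~14.5 GB
```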

The 14.5 GB of workload space comfortably runs:

  • A medium-sized web app + database (~2 GB)
  • The kube-prometheus-stack from Part 31 if you want it (~3 GB)
  • ArgoCD (~1 GB)
  • A test workload that emits some metrics (~500 MB)
  • Headroom for builds, image pulls, log buffering

If you want to install GitLab on k8s-single, you have to drop other things. GitLab on Kubernetes is heavy (~6 GB minimum for the chart's defaults). For solo dev where the user already has GitLab in DevLab Docker, leaving GitLab off the K8s cluster is the right call.


What k8s-single does NOT cover

The known limitations:

  • No real multi-node failure modes. When the one node goes down, everything goes down. You cannot test PodDisruptionBudgets, NodeAffinity, anti-affinity, drain-and-reschedule.
  • No real Longhorn. local-path is fine for one node but does not exercise the replication, snapshot, or RWX paths a real CSI would.
  • No HA Kubernetes upgrades. kubeadm upgrade exercises the rolling control-plane upgrade flow; k3s re-install on one node does not.
  • No realistic ingress load balancing. One worker = one ingress endpoint. The host machine sees a single IP.

For these, switch to k8s-multi (Part 23) or k8s-ha (Part 24).


What this gives you that running k3s by hand doesn't

A user can install k3s by SSH-ing into a VM, running the curl install script, and copying the kubeconfig out. That works. It also produces a cluster that does not have nginx-ingress, cert-manager, external-dns, metrics-server, or any of the other components K8s.Dsl installs as part of homelab k8s apply. The user has to install each one by hand, configure each one separately, and remember to do it again next time.

The k8s-single topology gives you, for the same surface area:

  • One config field to declare it
  • All the components from Act III auto-installed via Helm release contributors
  • The same wildcard cert as the rest of HomeLab Docker
  • 20 minutes to a working cluster the first time, 6 minutes thereafter
  • A daily-driver topology that fits in 16 GB

The bargain pays back the first day you reach for a Kubernetes cluster to test something and have one available in less time than it takes to make coffee.

