
Part 03: Multi-Client Isolation as the Killer Feature

"The day you take on a second client is the day you wish your dev environment had isolation primitives. The day you take on a third client is the day you wish you had built it years earlier."


Why

The dominant story about Kubernetes-on-a-laptop is solo: one developer, one project, one cluster. The dominant story about Kubernetes multi-tenancy is the opposite extreme: one production cluster, hundreds of namespaces, RBAC and quotas separating tenants. Both are interesting. Neither is the freelancer's problem.

The freelancer's problem is: you have two clients, Acme Corp and Globex. Acme is a .NET shop with strict change control; Globex is a Spring Boot shop that ships twice a day. Both want a real Kubernetes for you to develop against. Both expect you to have one. Both are paying you, and neither one wants to know about the other — no shared infrastructure, no cross-pollination, no risk of you accidentally running an Acme migration against a Globex Postgres.

You could solve this with two separate workstations. Some freelancers do; the second machine pays for itself within a few weeks of contract overlap. But two machines is heavy: two builds of every tool, two browsers, two IDEs, two sets of bookmarks, two kubeconfigs, two SSH agents. The context switch is large enough that you start avoiding it, and avoidance is the first step toward letting one client's environment leak into the other.

The thesis of this part is: multi-client isolation, done right, is the killer feature of HomeLab K8s for the freelancer. One workstation, multiple HomeLab instances, one cluster per instance, structural isolation at every level — not "we use namespaces and trust each other", but "the architecture refuses to let one client touch another because the names cannot collide".


What "isolation" actually means here

A naive answer: "each client gets their own Kubernetes namespace". Wrong. Namespaces inside one cluster do not isolate against:

  • Privileged pods that mount the host's /
  • Broken NetworkPolicy defaults letting pods talk across namespaces
  • Resource quotas that the cluster admin (you) can override
  • Shared etcd, shared API server, shared CRDs, shared cluster roles
  • A bug in cert-manager that issues a wildcard for one namespace and reuses it in another
  • The temptation to "just for one minute" run something across namespaces

A correct answer: each client is a separate cluster. Different etcd, different API server, different CRDs, different node pools. The two clusters share nothing except the host hypervisor, and even that is mediated by Vagrant, which gives each cluster its own VirtualBox VMs with its own private network.

This is what HomeLab K8s gives you. Each client = one HomeLab instance = one Vagrant project = one VirtualBox network = one Kubernetes cluster = one kubeconfig context = one CA. Five layers of separation, four of them enforced by the host stack and one of them enforced by HomeLab's instance registry.


Layer 1 — Hypervisor / VMs

Each client runs its own Vagrant project. The VMs for acme and globex are different VirtualBox machines on different host-only networks (192.168.60.0/24 and 192.168.61.0/24). Killing one client's VMs leaves the other client's VMs untouched. There is no shared VM, no shared kernel, no shared filesystem.
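The subnet-per-instance rule can be sketched as a tiny allocation function. This is illustrative only — HomeLab's real allocator is not shown in this part — but the base octet 60 matches the acme/globex example above.

```python
import ipaddress

# Illustrative sketch: one host-only /24 per instance, counting up from
# 192.168.60.0/24 to match the acme/globex/personal example above.
BASE_THIRD_OCTET = 60

def subnet_for(instance_index: int) -> ipaddress.IPv4Network:
    """Return the host-only /24 for the Nth instance (0-based)."""
    return ipaddress.ip_network(f"192.168.{BASE_THIRD_OCTET + instance_index}.0/24")
```

With acme as instance 0 and globex as instance 1, the two clusters land on 192.168.60.0/24 and 192.168.61.0/24 — different networks, so destroying one set of VMs cannot touch the other.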

Layer 2 — Kubernetes API server + etcd

Each client has its own etcd cluster (single-node in k8s-single/k8s-multi, three-node in k8s-ha). Each client has its own API server. The two clusters do not share state. A kubectl apply -f against acme cannot reach globex because the API endpoints are different (api.acme.lab vs api.globex.lab) and the certificates are signed by different CAs.

Layer 3 — Kubeconfig context

The user's ~/.kube/config has multiple contexts. Each HomeLab instance owns one context (acme, globex, personal). homelab k8s use-context acme switches the active context. We will see the IKubeconfigStore interface in Part 09. The point of this part is: the contexts cannot accidentally point at each other because they reference different cluster certificates and different API server endpoints.
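To see why contexts cannot point at each other, it helps to look at the shape of a kubeconfig: each context names exactly one cluster entry, and each cluster entry carries its own server endpoint and CA reference. The dict below is a hand-rolled minimal sketch of that shape, not HomeLab's actual output; the file paths are assumptions.

```python
# Minimal kubeconfig shape (illustrative; paths are assumed, not HomeLab's).
# Each context binds to exactly one cluster entry; each cluster entry
# carries its own API endpoint and its own CA bundle.
kubeconfig = {
    "clusters": [
        {"name": "acme",   "cluster": {"server": "https://api.acme.lab:6443",
                                       "certificate-authority": "~/.homelab/acme/ca.crt"}},
        {"name": "globex", "cluster": {"server": "https://api.globex.lab:6443",
                                       "certificate-authority": "~/.homelab/globex/ca.crt"}},
    ],
    "contexts": [
        {"name": "acme",   "context": {"cluster": "acme",   "user": "acme-admin"}},
        {"name": "globex", "context": {"cluster": "globex", "user": "globex-admin"}},
    ],
    "current-context": "acme",
}

def server_for(config: dict, context_name: str) -> str:
    """Resolve a context name to the API server endpoint it is bound to."""
    ctx = next(c for c in config["contexts"] if c["name"] == context_name)
    cluster = next(c for c in config["clusters"] if c["name"] == ctx["context"]["cluster"])
    return cluster["cluster"]["server"]
```

Switching contexts only changes which of these bindings is active; it cannot make the acme context resolve to the globex endpoint, because the binding is per-cluster, endpoint and CA together.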

Layer 4 — DNS

gitlab.acme.lab resolves to Acme's gateway IP. gitlab.globex.lab resolves to Globex's gateway IP. Two different hostnames, two different IPs, two different certificates. PiHole (or /etc/hosts) returns the right IP for the right hostname. The architecture refuses to give two clients the same hostname suffix because the instance registry refuses to give two clients the same tldPrefix.
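The /etc/hosts fallback follows mechanically from the per-instance data. The gateway IPs below are assumptions chosen inside each instance's /24; the per-instance tldPrefix and the .lab suffix come from the section above.

```python
# Illustrative: render /etc/hosts lines from per-instance gateway IPs.
# The concrete gateway addresses are assumptions within each instance's /24.
GATEWAYS = {"acme": "192.168.60.10", "globex": "192.168.61.10"}
SERVICES = ["gitlab", "api"]

def hosts_lines(gateways: dict[str, str], services: list[str]) -> list[str]:
    """One line per <service>.<tldPrefix>.lab hostname, pointing at that
    instance's own gateway — no hostname can resolve into the wrong subnet."""
    return [f"{ip}  {svc}.{name}.lab"
            for name, ip in gateways.items()
            for svc in services]
```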

Layer 5 — Certificates

Each client has its own CA. HomeLab CA - acme signs *.acme.lab. HomeLab CA - globex signs *.globex.lab. Both CAs are in your OS trust store after homelab tls trust --instance acme and homelab tls trust --instance globex, but they are independent — revoking Acme's CA leaves Globex's CA intact. The pattern is the same one from homelab-docker Part 51.


What the user sees

$ homelab instance list
NAME         TOPOLOGY    SUBNET           STATUS    UPTIME
acme         k8s-multi   192.168.60       running   2d 14h
globex       k8s-multi   192.168.61       running   2d 14h
personal     k8s-single  192.168.62       running   3h 22m

$ homelab k8s use-context acme
✓ kubectl context switched to 'acme'

$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
acme-cp-1      Ready    control-plane   2d14h   v1.31.4
acme-w-1       Ready    <none>          2d14h   v1.31.4
acme-w-2       Ready    <none>          2d14h   v1.31.4
acme-w-3       Ready    <none>          2d14h   v1.31.4

$ homelab k8s use-context globex
✓ kubectl context switched to 'globex'

$ kubectl get nodes
NAME             STATUS   ROLES           AGE     VERSION
globex-cp-1      Ready    control-plane   2d14h   v1.31.4
globex-w-1       Ready    <none>          2d14h   v1.31.4
globex-w-2       Ready    <none>          2d14h   v1.31.4
globex-w-3       Ready    <none>          2d14h   v1.31.4

Two kubectl get nodes invocations against two different real clusters. No namespaces, no shared infrastructure, no chance of confusion. The wrong-cluster mistake — running kubectl delete deployment payments against acme when you meant globex — is still possible (it is a human mistake, not an architectural one), but at least the active context is visible in the prompt and the cluster names cannot collide.

The standard fix for the wrong-cluster mistake is kube-ps1 or kubectx showing the active context in your shell prompt. HomeLab K8s's homelab k8s use-context integrates with both: after switching, it prints the new context and emits a KubeconfigContextSwitched event so any prompt watcher can pick it up.


The architecture rule that makes it real

The most important architectural rule of multi-client isolation is: the instance registry refuses overlapping subnets. If you have acme on 192.168.60.0/24 and try to create another instance also on 192.168.60.0/24, the registry returns Result.Failure("subnet 192.168.60.0/24 already in use by instance 'acme'"). There is no override flag. The only way to put two clusters on the same subnet is to first destroy one of them and release its registry entry.

This sounds defensive. It is defensive. The reason is that the cost of two clusters silently sharing a subnet is much higher than the cost of being forced to pick a different one. A shared subnet means: VMs from one client can ping VMs from another client, packets can leak between Docker networks if you misconfigure the bridge, and the wrong cluster could end up serving traffic for the wrong hostname. A registry that says no is the cheapest possible defence.

The instance registry is at ~/.homelab/instances.json. Each entry has:

{
  "name": "acme",
  "subnet": "192.168.60.0/24",
  "tldPrefix": "acme",
  "createdAt": "2026-04-10T08:30:00Z"
}

The registry is read at the start of every homelab invocation. The acquire-and-allocate path is from homelab-docker Part 51; it carries over to k8s unchanged — k8s clusters just happen to run inside HomeLab instances, and the instance registry does not care what runs inside an instance.
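Loading the registry is a plain JSON parse plus the two uniqueness invariants. The sketch below assumes instances.json is a JSON array of entries shaped like the one shown above; locking and the Part 51 acquire-and-allocate path are omitted.

```python
import json

def load_registry(text: str) -> dict[str, dict]:
    """Parse instances.json (assumed here to be a JSON array of entries like
    the one shown above) and index by name, checking the two invariants the
    registry enforces: no duplicate subnet, no duplicate tldPrefix."""
    entries = json.loads(text)
    by_name: dict[str, dict] = {}
    seen_subnets: set[str] = set()
    seen_prefixes: set[str] = set()
    for e in entries:
        if e["subnet"] in seen_subnets:
            raise ValueError(f"subnet {e['subnet']} already in use")
        if e["tldPrefix"] in seen_prefixes:
            raise ValueError(f"tldPrefix {e['tldPrefix']} already in use")
        seen_subnets.add(e["subnet"])
        seen_prefixes.add(e["tldPrefix"])
        by_name[e["name"]] = e
    return by_name
```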


What this gives you that toy k8s doesn't

A toy cluster gives you one cluster at a time. Switching between clients means destroying the old cluster, recreating the new one, waiting 5 minutes for it to come up, restoring its state from somewhere, and repeating in reverse when you switch back. Most freelancers I have asked do not bother — they just keep one cluster around for whichever client is most active that week and accept that the other client's environment is "down" for the duration.

Multi-client isolation via HomeLab instances gives you, for the same surface area:

  • Three clusters running in parallel on a 128 GB workstation
  • One CLI verb (homelab k8s use-context <name>) to switch between them
  • Five layers of structural isolation — VMs, API server, kubeconfig, DNS, certs
  • A registry that refuses overlapping subnets so the architecture cannot drift into ambiguity
  • Per-instance cost tracking (homelab-docker Part 47) so you know how much each client is costing you in CPU-hours and electricity
  • Per-instance backup so you can restore one client without touching the other

The bargain pays back the first day you have two clients in the same week and you switch from one to the other in 4 seconds (use-context plus a re-source of your prompt) instead of 25 minutes (destroy and recreate).

