Why
The previous parts (35–38) built the multi-client primitives. This part ties them together with a day-in-the-life walkthrough: a real freelancer juggling Acme, Globex, and a personal project shows how the verbs from Acts II–V combine into a smooth daily workflow.
The thesis: the multi-client workflow is not a sequence of complex commands. It is a small set of verbs (use-context, vos halt, vos up, cost report, k8s status) used 5–10 times a day. Each one takes seconds. The total context-switch tax across a workday is under 5 minutes.
08:30 — coffee, check overnight CI
$ homelab k8s use-context acme
✓ kubectl context switched to 'acme'
$ homelab k8s status
acme cluster (k8s-multi)
control plane: 1/1 ready
workers: 3/3 ready
pods: 47 running, 0 pending, 0 failed
recent backups: ✓ daily-acme-prod (2026-04-19 02:00, 1.2 GB)
cost yesterday: 4.2 kWh / €0.84
$ kubectl get pipelines.tekton.dev -n acme-dev # or whatever your CI tool is
NAME                 STATUS    DURATION
acme-api-build-789   Success   5m12s
acme-api-build-790   Success   4m58s
$ # browse https://gitlab.acme.lab — see the overnight CI runs
Total time: ~30 seconds. The freelancer knows acme is healthy and CI ran clean overnight.
09:00 — write code on Acme
The day's task: a new endpoint on Acme's API. Standard development loop: dotnet test locally, push to a feature branch, the runner inside the acme cluster builds the image, ArgoCD reconciles dev, and the new endpoint is reachable at https://api.acme.lab/v1/new-thing over HTTPS.
This is most of the day. The cluster is just there, indistinguishable from a remote dev environment, except faster (no network round-trip to a cloud) and private (no shared resources with other clients).
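The "ArgoCD reconciles dev" step can be pictured as a minimal Application manifest. This is a sketch only: the repo URL, path, and resource names below are illustrative assumptions, not taken from the series.

```yaml
# Hypothetical ArgoCD Application for the acme dev environment.
# repoURL, path, and namespaces are illustrative placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: acme-api-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.acme.lab/acme/api.git
    targetRevision: main
    path: deploy/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: acme-dev
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual drift in the cluster
```

With automated sync, the push-to-branch-merge-to-main loop is the whole deployment interface; nobody runs kubectl apply by hand.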
11:30 — switch to Globex for a quick fix
$ homelab k8s use-context globex
✓ kubectl context switched to 'globex'
$ homelab k8s status
globex cluster (k8s-ha)
control plane: 3/3 ready
workers: 3/3 ready
pods: 61 running, 0 pending, 0 failed
recent backups: ✓ daily-globex-prod (2026-04-19 02:00, 2.8 GB)
alerts: 1 firing — KafkaConsumerLagHigh on globex-stage
$ homelab k8s logs gateway --namespace globex-stage --tail 100
[2026-04-19 11:28:41] WARN c.g.k.consumer - Lag is 1247 messages, expected < 100
[2026-04-19 11:28:42] WARN c.g.k.consumer - Consumer falling behind
...
$ # discover the bug: the new consumer code has a tight retry loop
$ # fix in editor, commit, push
$ git push
✓ pushed to gitlab.globex.lab/frenchexdev/gateway
The runner inside Globex builds the image, ArgoCD reconciles, and the new gateway pods roll out. The Kafka lag clears within 5 minutes.
Total time on Globex: ~25 minutes. Switching back to Acme is one verb.
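The bug the freelancer found, a tight retry loop, deserves a note: retrying immediately on failure hammers the broker and lets consumer lag pile up. A generic shell illustration of the fix, exponential backoff; retry_with_backoff is a hypothetical helper for this sketch, not the gateway code (which the log format suggests is a JVM service):

```shell
# Retry a command with exponential backoff instead of a tight loop.
# Usage: retry_with_backoff <max_attempts> <initial_delay_ms> <command...>
retry_with_backoff() {
  local max_attempts=$1 delay_ms=$2; shift 2
  local attempt=1
  while true; do
    if "$@"; then return 0; fi                         # success: stop retrying
    if [ "$attempt" -ge "$max_attempts" ]; then return 1; fi
    sleep "$(awk "BEGIN { print $delay_ms / 1000 }")"  # back off ...
    delay_ms=$(( delay_ms * 2 ))                       # ... and double the wait
    attempt=$(( attempt + 1 ))
  done
}
```

With 100 ms as the initial delay, five attempts wait at most 0.1 + 0.2 + 0.4 + 0.8 = 1.5 s in total, instead of thousands of immediate retries per second.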
13:00 — lunch
The clusters keep running on about 80 GB of RAM, and the workstation hovers around 30% CPU when they are not actively serving traffic.
14:00 — back to Acme
$ homelab k8s use-context acme
✓ kubectl context switched to 'acme'
Continue the morning's work. The session is exactly where the freelancer left it.
16:00 — Globex paged again
$ homelab k8s use-context globex
$ homelab k8s status
globex cluster (k8s-ha)
control plane: 2/3 ready # ← one control plane is down
workers: 3/3 ready
pods: 61 running, 0 pending, 0 failed
alerts: 1 firing — KubeAPIDown on globex-cp-2
The cluster is still working — the API is responsive because two of the three control planes are up and etcd still has quorum. A real production HA cluster would behave the same way. The freelancer investigates:
$ kubectl get nodes
NAME          STATUS     ROLES           AGE   VERSION
globex-cp-1   Ready      control-plane   12d   v1.31.4
globex-cp-2   NotReady   control-plane   12d   v1.31.4
globex-cp-3   Ready      control-plane   12d   v1.31.4
globex-w-1    Ready      <none>          12d   v1.31.4
globex-w-2    Ready      <none>          12d   v1.31.4
globex-w-3    Ready      <none>          12d   v1.31.4
$ homelab vos status globex-cp-2
globex-cp-2: paused
$ # Investigate why — VirtualBox UI shows the VM was paused due to a host I/O error
$ # The disk on the host filled up (build cache spike from a Maven build earlier)
$ # Free up disk
$ docker system prune -af
$ homelab vos resume globex-cp-2
✓ globex-cp-2: resumed
$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
...
globex-cp-2   Ready    control-plane   12d   v1.31.4   # back!
Total time on the incident: ~10 minutes. The HA topology meant the cluster never actually went down — only one control plane was unavailable; the other two kept serving the API. This is exactly the failure class that k8s-ha exists to validate.
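Two of three control planes were enough because etcd commits writes only with a strict majority: quorum for n members is n/2 + 1 (integer division). A quick sketch with hypothetical helper functions, not homelab verbs:

```shell
# etcd quorum arithmetic: quorum(n) = n/2 + 1 with integer division.
quorum()             { echo $(( $1 / 2 + 1 )); }
# Members that can fail before the cluster stops committing writes.
tolerated_failures() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 3 5; do
  echo "members=$n quorum=$(quorum $n) tolerates=$(tolerated_failures $n)"
done
```

With three control planes, quorum is 2 and one failure is tolerated, exactly the globex-cp-2 incident. A single control plane (the acme and personal clusters) tolerates zero, which is the whole reason k8s-ha exists.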
17:30 — switch to personal project
$ homelab k8s use-context personal
$ homelab k8s status
personal cluster (k8s-single)
node: 1/1 ready
pods: 12 running
uptime: 3 days
Tinker on a personal project for an hour. Push to the personal GitLab. ArgoCD reconciles. The personal project is actually the test bed for new K8s.Dsl features the freelancer wants to try before bringing them to clients.
19:00 — done
$ homelab cost report --since 2026-04-19T08:00:00Z
Cost report for all instances (today)
─────────────────────────────────────
acme: 6.8 kWh / €1.36
globex: 8.2 kWh / €1.64
personal: 1.4 kWh / €0.28
─────────────────────────────────────
Total: 16.4 kWh / €3.28
The freelancer logs the cost as a billable expense to the relevant client. (The personal cluster is on her own dime.) Three clients, one workstation, full visibility into per-client cost.
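The rollup is plain arithmetic: kWh times the electricity tariff. The figures imply a rate of €0.20/kWh; that tariff is inferred from the numbers, not stated in the series. A sketch of the same computation:

```shell
# Reproduce the cost report: cost = kWh x tariff.
# The €0.20/kWh tariff is an inference from the figures, not a series fact.
TARIFF=0.20
cost_line() { awk -v n="$1" -v kwh="$2" -v t="$TARIFF" \
  'BEGIN { printf "%-10s %4.1f kWh / €%.2f\n", n, kwh, kwh * t }'; }

cost_line acme 6.8
cost_line globex 8.2
cost_line personal 1.4
awk -v t="$TARIFF" \
  'BEGIN { k = 6.8 + 8.2 + 1.4; printf "Total: %4.1f kWh / €%.2f\n", k, k * t }'
```

The same tariff also matches the morning status line (4.2 kWh / €0.84), so per-client billing is just metering plus multiplication.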
What the day did NOT involve
- Spinning down a cluster to make room for another one. All three were up the whole day.
- Recreating a cluster because the previous client's state was in the way. Each cluster had its own state.
- Wrong-context kubectl mistakes. The shell prompt always showed the active context.
- Cross-client visibility leaks. Acme and Globex saw nothing of each other.
- Cloud bills. Everything was local.
- Coordinating with anyone about which environment was free.
This is the "killer feature" payoff. The infrastructure does not get in the freelancer's way.
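The "shell prompt always showed the active context" guard is a few lines of shell. A minimal sketch; kube_ctx is a hypothetical helper, not part of the homelab CLI, and it reads current-context straight from the kubeconfig so it works even where kubectl is not on PATH:

```shell
# Print the active kubectl context by parsing the kubeconfig directly.
kube_ctx() {
  local cfg="${KUBECONFIG:-$HOME/.kube/config}"
  awk '/^current-context:/ { print $2 }' "$cfg" 2>/dev/null
}

# Embed it in a bash prompt; every command line then shows the context:
# PS1='[\u@\h $(kube_ctx)]\$ '
```

With the context in the prompt, a wrong-context kubectl apply has to get past your own eyes first, which is most of the protection anyone needs on a single-operator workstation.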
What this gives you that the alternative doesn't
The alternative is one cluster at a time, recreated as needed, on a single laptop. Or three laptops, one per client. The first is slow (5–10 minutes per switch); the second is expensive (€2,100+ in hardware) and adds context-switching tax across machines.
The HomeLab K8s multi-client workflow gives you, for the same surface area:
- Three clusters in parallel on one workstation
- Sub-5-second context switches via one verb
- Per-client cost visibility
- Per-client backups and restore tests
- Per-client dashboards
- Zero coordination overhead
The bargain pays back every day the freelancer has more than one client active.
End of Act VI
The multi-client story is complete: instance per client, 128 GB workstation, kubeconfig juggling, cross-client isolation, the daily workflow. From here, the series turns to day-2 operations (Act VII) — what happens when you upgrade, when you back up, when you restore, when you nuke and rebuild.