HomeLab K8s: Making HomeLab a Real Dev-K8s Enabler with Multi-Client Isolation
"The freelancer with 128 GB of RAM should be able to host three real Kubernetes clusters on her workstation, one per client, with no overlap. The substrate exists. We just have to write the plugin."
Why this series exists
The previous series, HomeLab Docker, built a CLI-first meta-orchestrator for local infrastructure on Docker Compose. It ended with Part 55, where one item was deferred: HA GitLab on Kubernetes — which would require a K8s.Dsl C# library that did not yet exist.
This series turns that deferral into a real spec. We design K8s.Dsl as a HomeLab plugin, write the architecture, walk the topologies, build the cluster, deploy the workloads, and put two fictional clients (Acme and Globex) on the same workstation without their clusters touching each other.
The framing is concrete:
- Real k8s, not a toy. kind, minikube, and k3d are out of scope. We target kubeadm (the canonical real Kubernetes) and k3s (the lightweight alternative). Both run on Vagrant VMs that HomeLab provisions.
- 64 GB of RAM is the baseline machine. Every topology is sized to fit in 64 GB. The HA topology — three control planes, three workers — fits in about 48 GB, leaving headroom for the IDE, the browser, and a Slack tab.
- 128 GB is the professional consultant's rig. With 128 GB you can run two or three of these clusters in parallel — one for Acme, one for Globex, one for whatever-came-in-this-week. Each client gets its own HomeLab instance, its own subnet, its own kubeconfig, its own DNS namespace. They cannot collide because the architecture refuses to let them.
- K8s.Dsl is a plugin to HomeLab, not a fork. Everything we build slots into the existing `IHomeLabPlugin` contract from homelab-docker Part 10. The HomeLab core does not change.
- The plugin ships as two NuGets, mirroring HomeLab's own thin-CLI / fat-lib split — `FrenchExDev.HomeLab.Plugin.K8sDsl` (the lib) and `FrenchExDev.HomeLab.Plugin.K8sDsl.Cli` (the verb shells). The lib has no `System.CommandLine` reference. The CLI has no business logic. The architecture test enforces both.
- The plugin extends the `homelab` CLI itself. After the plugin is loaded, `homelab --help` shows a `k8s` verb group sibling to `vos`, `compose`, and `tls`, with sub-commands like `homelab k8s init`, `homelab k8s create`, `homelab k8s argocd init`, and `homelab k8s upgrade`. No marker tells the user that `k8s` came from a plugin — the pluggability is invisible at the consumer surface.
- ArgoCD is first-class. Workloads are deployed via ArgoCD, not via raw `kubectl apply`. HomeLab generates the GitOps repository structure, commits the typed Application manifests, and pushes to a GitLab repo it provisioned in DevLab. Yet another dogfood loop.
- Same toolbelt. `[Injectable]`, `Result<T>`, `Builder`, `Guard`, `Clock`, `FiniteStateMachine`, `Saga`, `Reactive`, `Options`, `Mapper`, `Mediator`, `Outbox`, `BinaryWrapper`, `GitLab.Ci.Yaml`, `Alpine.Version`, `Requirements`, `QualityGate`, `Ddd`, `HttpClient`, `Dsl` — all 20 in-house libraries from the FrenchExDev toolbelt are mandatory. K8s.Dsl reinvents nothing.
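Because workloads travel through ArgoCD, each one ends up as an Application manifest committed to the GitOps repo. As a rough illustration of what the generator's output could look like — the repo URL, project, path, and namespace below are invented placeholders, not the generator's actual values — an Application for one Acme service might read:

```yaml
# Hypothetical ArgoCD Application, as the GitOps repo generator might emit it.
# repoURL, project, path, and namespaces are illustrative placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: acme-api
  namespace: argocd
spec:
  project: acme
  source:
    repoURL: https://gitlab.acme.homelab/platform/gitops.git
    targetRevision: main
    path: apps/acme-api/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: acme-dev
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

The typed C# side of this — the Application as a builder-produced object rather than hand-written YAML — is the subject of Part 33.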
If you have not read HomeLab Docker, Part 05 below gives you the recap you need to follow this series. Everything else assumes the substrate exists.
The thesis in one sentence
K8s.Dsl turns HomeLab into a tool that provisions real Kubernetes clusters for dev work, with each client isolated as its own HomeLab instance, with the entire cluster lifecycle expressed as typed C# and the entire CLI surface auto-extended via the plugin contract.
Hardware budget
Every topology in this series is sized against a real machine. Here is the budget table:
| Topology | VMs | vCPUs | RAM | Disk | Use case |
|---|---|---|---|---|---|
| `k8s-single` | 1 | 4 | ~16 GB | 80 GB | Solo dev, fast iteration, kubeadm-on-one-node |
| `k8s-multi` | 4 | 12 | ~32 GB | 200 GB | Realistic: 1 control plane + 3 workers, supports rolling deploys, real Longhorn |
| `k8s-ha` | 6+ | 18+ | ~48 GB | 300 GB | HA reference architecture: 3 control planes + 3+ workers, kube-vip API VIP, etcd quorum |
A 64 GB workstation comfortably runs one k8s-ha cluster plus the host OS, the dev IDE, and a browser. A 128 GB workstation runs two k8s-ha clusters in parallel, one per client, with room for a third, lighter cluster. The math is simple: 48 + 48 + 16 (host) = 112 GB, leaving 16 GB of headroom on a 128 GB box — enough for one more k8s-single topology.
The single biggest constraint is RAM, not CPU or disk. CPUs are usually idle (k8s control planes do not consume much when nothing is happening); disks are cheap. RAM is what limits how many parallel clusters you can host.
Architecture cheat-sheet
┌──────────────────────────────────────────────────────────────────┐
│ HomeLab.Cli (unchanged from homelab-docker) │
│ homelab vos / compose / tls / dns / gitlab │
│ + k8s (added by the K8s.Dsl plugin) │
└─────────────────────────┬────────────────────────────────────────┘
│ HomeLabRequest
v
┌──────────────────────────────────────────────────────────────────┐
│ HomeLab (lib, unchanged core) │
│ IHomeLabPipeline, IPluginHost, IHomeLabEventBus │
└──────────┬─────────────────────────────────┬─────────────────────┘
│ │
│ loads as plugin │ uses
v v
┌─────────────────────┐ ┌────────────────────────────────┐
│ K8s.Dsl plugin │ │ Ops.Dsl │
│ (NEW — this series) │ ──────> │ (substrate, shared) │
│ │ │ │
│ Lib NuGet: │ │ Ops.Infrastructure │
│ IK8sManifestContrib │ Ops.Deployment │
│ IHelmReleaseContrib │ Ops.Networking │
│ IClusterDistribution │ Ops.Security │
│ IKubeconfigStore │ Ops.Observability │
│ IK8sTopologyResolver │ Ops.DataGovernance │
│ IGitOpsRepoGenerator │ Ops.Configuration │
│ IArgoCdAppContributor │ Ops.Resilience │
│ Kubernetes.Bundle │ │
│ KubectlClient │ + new: Ops.Cluster │
│ HelmClient │ │
│ KubeadmClient │ │
│ K3sClient │ │
│ │ │ │
│ Cli NuGet: │ │ │
│ K8sVerbGroup │ │
│ K8sInitCommand │ │
│ K8sCreateCommand │ │
│ K8sNodeAddCommand │ │
│ K8sArgoCdInitCommand │ │
│ ... (~14 verb shells) │ │
└─────────────────────┘          └────────────────────────────────┘

K8s.Dsl does not modify HomeLab core. Everything new lives in the plugin's two NuGets. The plugin is loaded by `IPluginHost` at startup; its services are registered via the existing `[Injectable]` source generator's `AddFromAssembly` helper; its CLI verbs are discovered by HomeLab's command builder scanning plugin assemblies for `IHomeLabVerbGroup` and `[VerbGroup("k8s")]` types.
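In code, the consumer-invisible extension point is small. The sketch below is illustrative pseudocode in C#: `IHomeLabVerbGroup`, `[VerbGroup]`, and `[Injectable]` are the contracts named above (defined in homelab-docker, not reproduced here), and the member shapes are guesses, not the real signatures:

```csharp
// Illustrative sketch only — the real IHomeLabVerbGroup / [VerbGroup] /
// [Injectable] contracts live in homelab-docker Parts 10-11; the member
// shapes here are assumptions, not the actual API.
[VerbGroup("k8s")]
[Injectable]
public sealed class K8sVerbGroup : IHomeLabVerbGroup
{
    // Verb shells only: each sub-command delegates to the lib NuGet.
    // No business logic lives in the Cli NuGet — the architecture test
    // fails the build if it finds any.
    public IEnumerable<IHomeLabCommand> Commands() =>
    [
        new K8sInitCommand(),
        new K8sCreateCommand(),
        new K8sNodeAddCommand(),
        new K8sArgoCdInitCommand(),
        // ... ~14 verb shells total
    ];
}
```

Nothing in this class knows it is a plugin; the command builder's assembly scan is what makes `k8s` appear in `homelab --help`.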
Multi-client isolation
This is the killer feature. Three clients on one workstation:
Three HomeLab instances. Three subnets. Three kubeconfig contexts (`acme`, `globex`, `personal`). Three sets of certs. Three GitLab instances. The freelancer switches between clients with `homelab k8s use-context <name>`. The architecture refuses to let one client see another — the subnet allocator (homelab-docker Part 51) refuses overlapping /24s.
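The three contexts sit side by side in the freelancer's merged kubeconfig. A minimal sketch, assuming placeholder endpoints and names (the real entries are whatever `IKubeconfigStore` writes; the users section is elided for brevity):

```yaml
# Hypothetical merged kubeconfig fragment — names and IPs are placeholders.
apiVersion: v1
kind: Config
current-context: acme
contexts:
- name: acme
  context: { cluster: acme-ha, user: acme-admin }
- name: globex
  context: { cluster: globex-ha, user: globex-admin }
- name: personal
  context: { cluster: personal-single, user: personal-admin }
clusters:
- name: acme-ha
  cluster: { server: "https://10.10.1.10:6443" }   # Acme's API VIP
- name: globex-ha
  cluster: { server: "https://10.10.2.10:6443" }   # Globex — a different /24
- name: personal-single
  cluster: { server: "https://10.10.3.11:6443" }
```

The non-overlapping /24s are what make the context switch safe: there is no address a context in one cluster could accidentally reach in another.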
Act II — The K8s.Dsl plugin architecture (parts 06–11)
- Part 06: The K8s.Dsl Spec — MetaConcepts for Cluster Resources
- Part 07: IK8sManifestContributor and IHelmReleaseContributor
- Part 08: The K8s Topology Resolver
- Part 09: Kubeconfig Management — IKubeconfigStore
- Part 10: Secrets Bridging from ISecretStore to k8s Secrets
- Part 11: The Plugin Manifest, the CLI Verb Plugin Surface, and the Two-NuGet Split
Act III — Building real k8s on Vagrant VMs (parts 12–21)
- Part 12: Choosing the Distribution — kubeadm vs k3s
- Part 13: The K8s Node Packer Image
- Part 14: kubeadm init — Bootstrapping the Control Plane
- Part 15: kubeadm join — The Workers
- Part 16: CNI — Flannel vs Calico vs Cilium
- Part 17: CSI — local-path vs Longhorn vs OpenEBS
- Part 18: Ingress Controller — nginx-ingress vs Traefik
- Part 19: cert-manager for TLS Automation
- Part 20: external-dns for the Wildcard
- Part 21: metrics-server and the Basics
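Parts 14–15 walk the bootstrap in detail, but the heart of the HA story is a single field: `controlPlaneEndpoint` pointing at the kube-vip VIP rather than at any one node. A minimal sketch of the kubeadm configuration, assuming a placeholder VIP and version:

```yaml
# kubeadm ClusterConfiguration sketch for the k8s-ha topology.
# The VIP and version are placeholders; kube-vip answers on the VIP,
# so the API server survives the loss of any single control plane.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.29.0"
controlPlaneEndpoint: "10.10.1.10:6443"   # kube-vip VIP, not a node IP
networking:
  podSubnet: "10.244.0.0/16"      # must match the CNI chosen in Part 16
  serviceSubnet: "10.96.0.0/12"
```

Every worker and every additional control plane joins against the VIP, which is why the join commands in Part 15 never mention an individual control-plane node.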
Act V — The Client (parts 26–34)
- Part 26: Meet Acme and Globex
- Part 27: Namespace Strategy — dev / stage / prod
- Part 28: GitLab on K8s via the Helm Chart
- Part 29: Postgres with CloudNativePG
- Part 30: MinIO Operator
- Part 31: kube-prometheus-stack
- Part 32: Velero for Backup
- Part 33: ArgoCD and the GitOps Repo Generator
- Part 34: A Real Workload Deployment — Acme's .NET API End to End
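To give a flavor of the Act: once the CloudNativePG operator is installed (Part 29), an HA Postgres is a few lines of intent. A minimal sketch — name, namespace, instance count, and size are placeholders:

```yaml
# Minimal CloudNativePG Cluster sketch — names and sizes are placeholders.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: acme-pg
  namespace: acme-dev
spec:
  instances: 3          # one primary + two replicas, spread across workers
  storage:
    size: 10Gi          # backed by the Longhorn CSI from Part 17
```

In the series this manifest is never written by hand: it is a typed C# object committed to the GitOps repo and reconciled by ArgoCD (Part 33).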
How to read this series
Architects should read Act I + Act II + Part 33 (ArgoCD generator) + Part 35 (multi-client isolation) + Part 50 for the full vision.
Developers should read Acts II–V for the architecture, the cluster build, and the workload deployment.
SRE / Platform engineers should read Acts III–VII — host VMs, topologies, the client platform, isolation, day-2.
Plugin authors should read Act II in detail, especially Part 11 (the CLI verb plugin surface). The same pattern applies to any other DSL plugin you might want to ship.
Freelancers / consultants should read Act VI in particular — the multi-client workflow is what differentiates this series from every "spin up a k8s cluster" tutorial.
Prerequisites
- The HomeLab Docker series — at minimum Part 10, Part 11, Part 51, and Part 53. Part 05 of this series is a fast recap if you cannot read all of the previous one.
- The Ops DSL Ecosystem series for the Ops.Dsl substrate, especially Part 04 (Shared Primitives).
- Basic Kubernetes knowledge: pods, services, deployments, namespaces, ingress, persistent volumes. If you have used `kubectl` against any cluster, you have enough.
- A workstation with at least 32 GB of RAM (for the single-VM topology) or 64 GB (for everything else). 128 GB if you want to follow Act VI for real.
- VirtualBox / Hyper-V / Parallels / libvirt + Vagrant. Same prerequisites as homelab-docker.
Related posts
- HomeLab Docker — the substrate
- Ops DSL Ecosystem — the typed operational vocabulary
- Typed Docker — the binary wrapper precedent
- Injectable DDD — the source generator behind the composition root
- The Loop — the dev-loop quality bar
- Builder Pattern — the `[Builder]` source generator
- Finite State Machine — the `FiniteStateMachine` library