Part 35: One HomeLab Instance per Client
"The substrate already has the pattern. K8s.Dsl just consumes it."
Why
Part 03 sold the multi-client story. homelab-docker Part 51 built the multi-instance primitive. This part connects the two: how exactly the K8s.Dsl plugin uses the existing instance registry, what gets named per-instance, and what the developer's ~/.homelab/instances.json looks like with three k8s clusters in it.
The thesis: K8s.Dsl does not invent multi-client isolation. It uses the same IInstanceRegistry from homelab-docker Part 51, applied to k8s clusters. One instance = one cluster = one Vagrant project = one subnet. The K8s-specific bits (kubeconfig context, cluster CA) inherit the instance scope automatically.
The instance registry, with k8s clusters
```jsonc
// ~/.homelab/instances.json
{
  "instances": [
    {
      "name": "acme",
      "subnet": "192.168.60.0/24",
      "tldPrefix": "acme",
      "createdAt": "2026-04-10T08:30:00Z",
      "topology": "k8s-multi",
      "k8s": {
        "distribution": "kubeadm",
        "version": "v1.31.4",
        "kubeconfigContext": "acme",
        "caName": "HomeLab CA - acme"
      }
    },
    {
      "name": "globex",
      "subnet": "192.168.61.0/24",
      "tldPrefix": "globex",
      "createdAt": "2026-04-12T14:15:00Z",
      "topology": "k8s-ha",
      "k8s": {
        "distribution": "kubeadm",
        "version": "v1.31.4",
        "kubeconfigContext": "globex",
        "caName": "HomeLab CA - globex"
      }
    },
    {
      "name": "personal",
      "subnet": "192.168.62.0/24",
      "tldPrefix": "personal",
      "createdAt": "2026-04-08T20:00:00Z",
      "topology": "k8s-single",
      "k8s": {
        "distribution": "k3s",
        "version": "v1.31.4+k3s1",
        "kubeconfigContext": "personal",
        "caName": "HomeLab CA - personal"
      }
    }
  ]
}
```

Three instances. Three subnets (60, 61, 62). Three TLD prefixes (acme, globex, personal). Three kubeconfig contexts. Three CAs. The registry refuses to create a fourth instance with subnet 192.168.60.0/24 because acme already has it.
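The registry file above maps naturally onto a couple of records. A minimal sketch, with hypothetical type names (InstanceEntry, K8sMeta, InstanceFile are mine, not the homelab-docker API), including the duplicate-subnet check the prose describes:

```csharp
using System.Linq;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical shapes mirroring ~/.homelab/instances.json; the real
// homelab-docker types may differ. Unknown fields (e.g. createdAt)
// are ignored by System.Text.Json by default.
public record K8sMeta(
    [property: JsonPropertyName("distribution")] string Distribution,
    [property: JsonPropertyName("version")] string Version,
    [property: JsonPropertyName("kubeconfigContext")] string KubeconfigContext,
    [property: JsonPropertyName("caName")] string CaName);

public record InstanceEntry(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("subnet")] string Subnet,
    [property: JsonPropertyName("tldPrefix")] string TldPrefix,
    [property: JsonPropertyName("topology")] string Topology,
    [property: JsonPropertyName("k8s")] K8sMeta K8s);

public record InstanceFile(
    [property: JsonPropertyName("instances")] InstanceEntry[] Instances);

public static class Registry
{
    // The uniqueness rule from the text: a new instance may not reuse
    // a subnet that an existing instance already owns.
    public static bool SubnetIsFree(InstanceFile file, string subnet) =>
        file.Instances.All(i => i.Subnet != subnet);
}
```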
What gets prefixed
Every cluster-scoped name gets the instance prefix:
| Resource | Without prefix | With prefix acme |
|---|---|---|
| Vagrant VM names | cp-1, w-1, ... | acme-cp-1, acme-w-1, ... |
| Cluster name (in kubeadm ClusterConfiguration) | cluster.local | acme.local |
| Kubeconfig context name | default | acme |
| Kubeconfig cluster name | kubernetes | acme |
| Kubeconfig user name | kubernetes-admin | acme-admin |
| Cert CA common name | HomeLab CA | HomeLab CA - acme |
| Wildcard cert SAN | *.lab | *.acme.lab |
| DNS hostnames | gitlab.lab | gitlab.acme.lab |
| MinIO tenant names | gitlab-minio | unchanged (in-cluster, isolated by namespace) |
Resources inside the cluster (Namespaces, Deployments, Services, Helm releases) do not get prefixed, because they are already isolated by the cluster boundary. Two clusters can both have a gitlab namespace; they do not collide because they are in different clusters.
The prefix happens at the contributor level via the IInstanceScope interface from homelab-docker Part 51. Every K8s.Dsl contributor takes IInstanceScope from DI and uses scope.PrefixOf("cp-1") to produce the prefixed name.
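A contributor consuming the scope might look like the following. IInstanceScope and PrefixOf are named in the text; the interface members and the VagrantVmNameContributor class are illustrative stand-ins, not the actual Part 51 surface:

```csharp
using System.Collections.Generic;

// IInstanceScope comes from homelab-docker Part 51; this member list
// is a simplified stand-in for illustration.
public interface IInstanceScope
{
    string Name { get; }                 // e.g. "acme"
    string PrefixOf(string baseName);    // "cp-1" -> "acme-cp-1"
}

// Hypothetical contributor: takes the scope from DI and prefixes every
// cluster-scoped name. In-cluster resources (namespaces, deployments)
// are left alone -- the cluster boundary already isolates them.
public sealed class VagrantVmNameContributor
{
    private readonly IInstanceScope _scope;
    public VagrantVmNameContributor(IInstanceScope scope) => _scope = scope;

    public IEnumerable<string> VmNames(int workers)
    {
        yield return _scope.PrefixOf("cp-1");
        for (var i = 1; i <= workers; i++)
            yield return _scope.PrefixOf($"w-{i}");
    }
}
```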
The acquisition flow
```
$ homelab init --name acme --topology k8s-multi
✓ acquired instance 'acme' (subnet 192.168.60.0/24)
✓ wrote config-homelab.yaml
✓ wrote .vscode/settings.json with schema mappings

$ cd acme
$ homelab k8s create
✓ packer build (~12 min on cold cache)
✓ box add (local)
✓ vagrant up (4 VMs, ~3 min)
✓ DNS entries added (5 hostnames under acme.lab)
✓ TLS CA generated, wildcard issued
✓ kubeadm init on acme-cp-1
✓ kubeadm join on acme-w-1, acme-w-2, acme-w-3 (parallel)
✓ k8s apply (CNI, CSI, ingress, cert-manager, external-dns, metrics-server, kube-prometheus-stack)
✓ kubeconfig context 'acme' written to ~/.kube/config

acme cluster ready. Switch with: homelab k8s use-context acme
```

The homelab init step calls IInstanceRegistry.AcquireAsync("acme", ...). The registry allocates the subnet, writes the entry, and returns the scope. Every subsequent verb uses that scope.
What happens if you try to create a colliding instance
```
$ homelab init --name acme2 --topology k8s-multi
✓ acquired instance 'acme2' (subnet 192.168.63.0/24)
```

No collision, in fact: the registry tried 192.168.63 first, found it free, and allocated it. A "subnet allocation failed: no free subnet in 192.168.{56..95}" error exists, but it only fires when all 40 reserved /24s are taken, a very unlikely scenario for a single workstation.
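The scan behaviour ("tried 192.168.63 first") can be sketched as a wrap-around search over the reserved range. Everything here is inferred from the CLI output, not the real allocator: the 192.168.56-95 range, the start-after-highest heuristic, and the SubnetAllocator name are all assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SubnetAllocator
{
    // Reserved range from the CLI output: 192.168.{56..95}, 40 /24s total.
    private const int First = 56, Last = 95;

    // Returns the first free /24, or null when all 40 are taken --
    // the only case in which "subnet allocation failed" would fire.
    public static string? Allocate(IEnumerable<string> takenSubnets)
    {
        var taken = takenSubnets
            .Select(s => int.Parse(s.Split('.')[2]))
            .ToHashSet();
        // Start just past the highest allocated /24, then wrap around,
        // matching the "tried 192.168.63 first" behaviour above.
        var start = taken.Count == 0 ? First : Math.Max(First, taken.Max() + 1);
        var count = Last - First + 1;
        foreach (var offset in Enumerable.Range(0, count))
        {
            var third = First + (start - First + offset) % count;
            if (!taken.Contains(third)) return $"192.168.{third}.0/24";
        }
        return null; // all 40 reserved /24s taken
    }
}
```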
The real failure mode is not "subnet exhausted" but "the user picks a name that already exists":
```
$ homelab init --name acme --topology k8s-ha   # acme already exists
ℹ instance 'acme' already exists; reusing existing scope (subnet 192.168.60.0/24)
ℹ note: existing topology is k8s-multi; configure topology in config-homelab.yaml
✓ ready
```

The acquisition is idempotent: re-running homelab init --name acme does not error; it just returns the existing scope. This is intentional: the user might re-run init to refresh the config without destroying the cluster.
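Idempotent acquisition is a get-or-create over the registry. A minimal in-memory sketch, assuming a simplified AcquireAsync signature (the real IInstanceRegistry persists to ~/.homelab/instances.json and allocates subnets from the reserved range):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed record Scope(string Name, string Subnet, string Topology);

// Illustrative in-memory registry, not the Part 51 implementation.
public sealed class InMemoryInstanceRegistry
{
    private readonly ConcurrentDictionary<string, Scope> _instances = new();
    private int _nextThirdOctet = 60;

    // Get-or-create: re-running `homelab init --name acme` returns the
    // existing scope (same subnet, original topology) instead of erroring.
    public Task<Scope> AcquireAsync(string name, string topology) =>
        Task.FromResult(_instances.GetOrAdd(name, n =>
            new Scope(n, $"192.168.{_nextThirdOctet++}.0/24", topology)));
}
```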
What this gives you
The multi-instance pattern was already designed in homelab-docker. K8s.Dsl gets it for free. The cost of supporting multiple k8s clusters in parallel is zero new architectural code — only the K8s.Dsl-specific contributors that respect the IInstanceScope.
This is the value of building K8s.Dsl as a plugin instead of a fork. The fork would have to re-invent the registry, the subnet allocator, the kubeconfig juggling, the cert namespacing. The plugin inherits all of it.