
Part 50: Conclusion and Roadmap

"The freelancer with 128 GB of RAM should be able to host three real Kubernetes clusters on her workstation, one per client, with no overlap. We have just designed the plugin that makes it real."


What we built

Forty-nine parts. Fifty counting this one. One typed K8s.Dsl plugin for HomeLab, sized for real dev work, multi-client by construction, dogfood-aware, plugin-shaped, ArgoCD-driven.

The recap, in one paragraph per Act:

Act I — The case for real dev k8s (parts 01–05): kind/minikube/k3d are simulations that lie about CNI, CSI, ingress, multi-node, and upgrades. Real Kubernetes on Vagrant VMs costs 16–48 GB of RAM and catches the bug classes the simulations miss. 64 GB is enough for one HA cluster; 128 GB is enough for three clusters in parallel. The freelancer's killer feature is multi-client isolation, achieved structurally via the HomeLab instance registry from homelab-docker Part 51. The two distributions K8s.Dsl ships are kubeadm and k3s.

Act II — The K8s.Dsl plugin architecture (parts 06–11): ~12 [MetaConcept] types in a new Ops.Cluster sub-DSL. Two parallel role contracts (IK8sManifestContributor and IHelmReleaseContributor). A topology resolver that extends the existing homelab-docker resolver with three new k8s topologies. An IKubeconfigStore plugin contract with merged and isolated implementations. Secrets bridging from ISecretStore via two paths (build-time and External Secrets Operator). The plugin ships as two NuGets, mirroring HomeLab's own thin-CLI/fat-lib split, and the CLI command builder discovers verb groups in plugin assemblies via IHomeLabVerbGroup and [VerbGroup("k8s")]. The user sees homelab k8s ... as a sibling of vos, compose, tls with no plugin marker.

Act III — Building real k8s on Vagrant VMs (parts 12–21): kubeadm and k3s as parallel IClusterDistribution implementations. A new Packer.Alpine.K8sNode contributor with kernel modules, sysctl tweaks, containerd, and the right binaries. The kubeadm bootstrap as a KubeadmInitSaga with compensation. The worker join as a KubeadmJoinSaga with parallel-with-cap. CNI choice (Flannel/Calico/Cilium, default Cilium). CSI choice (local-path/Longhorn/OpenEBS, default Longhorn for multi-node). Ingress choice (nginx-ingress/Traefik). cert-manager with the HomeLab CA as a ClusterIssuer. external-dns bridged to HomeLab's IDnsProvider via a custom webhook provider. metrics-server and kube-state-metrics as the basics.

Act IV — Topologies (parts 22–25): k8s-single (~16 GB, one VM, daily driver). k8s-multi (~32 GB, 1 cp + 3 workers, the realistic default). k8s-ha (~48 GB, 3 cp + 3 workers, the upgrade-rehearsal topology). The decision matrix and tree.
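The topology footprints above make the series' three-cluster claim easy to sanity-check. A quick budget, using only figures from this recap (how the remainder splits between host OS and headroom is left open here):

```shell
# RAM budget for the 128 GB workstation scenario: two k8s-multi clusters
# plus one k8s-single, using the topology footprints above.
total=128
clusters=$((32 + 32 + 16))       # 2 x k8s-multi + 1 x k8s-single
remaining=$((total - clusters))  # left over for the host OS plus headroom
echo "clusters=${clusters}GB remaining=${remaining}GB"
```

That leaves 48 GB outside the clusters, which is consistent with running the host OS and still having spare headroom.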

Act V — The Client (parts 26–34): Acme (the .NET shop on k8s-multi) and Globex (the Spring Boot shop on k8s-ha) as the two demo clients. Three namespaces per cluster (dev, stage, prod) with resource quotas and default-deny network policies. GitLab via the official Helm chart. Postgres via CloudNativePG operator. MinIO via the MinIO operator. kube-prometheus-stack for observability. Velero for backup with weekly restore tests. ArgoCD as the canonical deployment mechanism, plus first-class HomeLab tooling that generates the GitOps repository structure via homelab k8s argocd init / add-app / add-env. A real workload deployment (Acme's .NET API) end to end: git push, runner builds image, runner updates GitOps repo, ArgoCD reconciles, the new pods come up, cert-manager has issued the cert, the URL is reachable. ~90 seconds total.

Act VI — Multi-client isolation (parts 35–39): One HomeLab instance per client. The 128 GB workstation that runs two k8s-multi clusters plus one k8s-single cluster plus the host OS with 16 GB to spare. Kubeconfig juggling via homelab k8s use-context plus a marker file at ~/.homelab/active-context that shell prompts read. Cross-client networking that does not exist (subnet isolation, no routes between clients, an audit test that proves Acme cannot reach Globex). The pro consultant workflow with three clients in parallel and zero context-switch tax.
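The marker-file trick is simple enough to sketch. A minimal prompt helper, assuming ~/.homelab/active-context (the path from this recap) contains a single context name; the function name and file format are illustrative guesses, not the plugin's documented contract:

```shell
# Read the active HomeLab k8s context for display in a shell prompt.
# Assumes ~/.homelab/active-context holds one context name, written by
# `homelab k8s use-context` (the single-name format is an assumption).
homelab_ctx() {
  local marker="${HOME}/.homelab/active-context"
  if [ -f "$marker" ]; then
    printf '[%s]' "$(cat "$marker")"
  fi
}

# Example bash integration:
# PS1='$(homelab_ctx) \w \$ '
```

With "acme" in the marker file, the prompt gains an "[acme]" prefix; with no marker file, it prints nothing.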

Act VII — Day-2 (parts 40–44): kubeadm upgrades wrapped in a KubeadmUpgradeSaga with compensation. k3s upgrades as the easy alternative. Velero restore tests in two flavors (in-cluster CronJob and cross-cluster ephemeral instance). Cluster recreation from declarative state in ~30 minutes. The decision tree of when to fix in place vs when to nuke and rebuild.

Act VIII — Real-world cases (parts 45–48): Spring Boot microservices for Globex with Strimzi Kafka. .NET API + SignalR + Postgres for Acme with Redis backplane and sticky-session ingress. Airflow data pipeline with KubernetesExecutor. GPU ML training with NVIDIA device plugin and PyTorch.

Act IX — Closing (parts 49–50): The honest catalog of what's missing (Windows, ARM, federation, service mesh, multi-tenant, GPU sharing, FPGAs, off-host backup) and this conclusion.


The implementation phases

When somebody actually builds K8s.Dsl, the order is:

Phase 1: Lib core — Kubernetes.Bundle + the binary wrappers (KubectlClient, HelmClient, KubeadmClient, K3sClient). Source-generated, tested in isolation. ~1500 lines of code.

Phase 2: Cluster distributions — KubeadmClusterDistribution and K3sClusterDistribution. The bootstrap and join sagas. Tested with ScriptedVosBackend fakes. ~800 lines.

Phase 3: Contributors — IK8sManifestContributor and IHelmReleaseContributor shipped via the existing pattern from homelab-docker Part 32. The 14 standard contributors (CNI, CSI, ingress, cert-manager, external-dns, metrics-server, kube-state-metrics, observability, ArgoCD, GitLab, CNPG, MinIO, kube-prometheus-stack, Velero). ~2000 lines.

Phase 4: CLI plugin surface — IHomeLabVerbGroup, [VerbGroup], the command builder extension in HomeLab core (small change), the K8sDsl.Cli NuGet with the 14 verb shells. ~600 lines split across two NuGets.

Phase 5: GitOps repo generator — IGitOpsRepoGenerator, the App-of-Apps layout, the homelab k8s argocd init/add-app/add-env verbs. ~700 lines.
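This part never spells out the generated layout, but an App-of-Apps skeleton might plausibly look like the following. Every directory and file name here is an illustrative assumption, not the generator's documented output; the three environments mirror the dev/stage/prod namespaces from Act V:

```shell
# Purely illustrative: a skeleton of what `homelab k8s argocd init`
# could generate (names are assumptions, not the plugin's actual output).
cd "$(mktemp -d)"                  # work in a scratch directory
repo=devlab-gitops
mkdir -p "$repo/apps"              # one ArgoCD Application manifest per workload
mkdir -p "$repo/envs/dev" "$repo/envs/stage" "$repo/envs/prod"
touch "$repo/apps/root-app.yaml"   # the App-of-Apps entry point
ls -R "$repo"
```

The point of generating this shape is that `add-app` and `add-env` then only ever append files; nothing existing needs editing.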

Phase 6: Day-2 — The upgrade sagas, the restore-test verb, the recreate-from-declarative-state saga. ~600 lines.

Phase 7: E2E validation — Run the eight-command bring-up sequence against real Vagrant + VirtualBox + Cilium + Longhorn + nginx-ingress + cert-manager + external-dns + metrics-server + kube-prometheus-stack + Velero + ArgoCD. Verify the dogfood loop closes (Acme cluster builds Acme's app, deploys via ArgoCD, the URL is reachable).

Total: approximately 6,200 lines of plugin code plus the existing HomeLab toolbelt and core. Implementable in two to three months by one engineer who already knows the substrate.


The 8-command bring-up sequence

# Once per machine
$ homelab init --name acme --topology multi
$ cd acme

# 8 commands to a working k8s cluster with workloads
$ homelab packer build                        # 1: build the K8s node Packer image (~12 min on cold cache)
$ homelab box add --local                     # 2: register the box
$ homelab k8s create                          # 3: provision VMs + bootstrap + join workers + apply core stack (~18 min)
$ homelab dns add api.acme.lab 192.168.60.21  # 4: DNS for the wildcard
$ homelab tls init && homelab tls install     # 5: TLS CA + wildcard cert
$ homelab tls trust                           # 6: enroll CA in OS trust store
$ homelab k8s argocd init devlab-gitops       # 7: create the GitOps repo in DevLab
$ homelab k8s argocd add-app acme-api ...     # 8: add the workload, ArgoCD deploys it

Eight verbs. About 30 minutes from homelab init to a working cluster with a workload reachable at https://api.acme.lab over HTTPS. Faster on subsequent recreations because the box is cached.


The dogfood loops in summary

K8s.Dsl extends homelab-docker's five dogfood loops with two more:

  • Loop 6 — Helm charts: the K8s.Dsl plugin ships Helm chart templates; CI in DevLab publishes them to a chart museum in DevLab.
  • Loop 7 — GitOps: K8s.Dsl generates the GitOps repository structure; the repo lives in DevLab's GitLab; ArgoCD inside the cluster watches it; workloads deploy.

Plus the loops from homelab-docker continue to apply: GitLab in DevLab hosts K8s.Dsl's source, the runner builds it, BaGet publishes the NuGet, the box registry hosts the K8s node images, Velero backups protect the cluster's state, and the docs site (this blog) is hosted by DevLab.

Seven loops. Every meaningful artifact in K8s.Dsl is produced by infrastructure HomeLab provisioned. The ouroboros is intact.


The call to action

Three things you can do after reading this series:

1. Read homelab-docker first if you haven't

This series is a plugin on top of homelab-docker. If you have not read the substrate, the plugin will feel mysterious in places. Read at least the parts this series leans on directly: the plugin system (Part 10), the contributor pattern (Part 32), and the instance registry (Part 51).

That is enough background to understand every part of this series in context.

2. Implement K8s.Dsl, or contribute to the implementation

The plan in this series is implementable. The phases above are real. If you are a C# developer who knows Kubernetes and wants a project, the K8s.Dsl plugin is sized for two to three months of focused work.

The phases are independent enough that two or three engineers can work in parallel: Phase 1 (lib core) and Phase 2 (distributions) can happen simultaneously; Phase 3 (contributors) and Phase 4 (CLI plugin surface) can happen simultaneously; Phase 5 (GitOps generator) and Phase 6 (day-2) can happen simultaneously. The bottleneck is Phase 7 (E2E validation), which needs the rest to be done.

3. Ship one of the missing plugins

Pick one of the gaps cataloged in Part 49 and ship it as a separate NuGet. Candidate plugins:

  • FrenchExDev.HomeLab.Plugin.K8sDsl.K0s — k0s as the third distribution
  • FrenchExDev.HomeLab.Plugin.K8sDsl.Talos — Talos as a fourth distribution
  • FrenchExDev.HomeLab.Plugin.K8sDsl.Cilium.ClusterMesh — federation via Cilium
  • FrenchExDev.HomeLab.Plugin.K8sDsl.Linkerd — service mesh
  • FrenchExDev.HomeLab.Plugin.K8sDsl.OpenEBS.Mayastor — high-perf NVMe storage
  • FrenchExDev.HomeLab.Plugin.K8sDsl.Arm64 — ARM nodes
  • FrenchExDev.HomeLab.Plugin.K8sDsl.Windows — Windows worker nodes

Each one is the same pattern: a plugin manifest, a few [Injectable] services, the existing role contracts, NuGet publishing. The barrier to entry is small. The audience for each plugin is whichever team has that specific need.


A final thought

The real value of K8s.Dsl is not the Kubernetes parts. It is the plugin pattern. Every line of K8s.Dsl is implemented as a plugin to HomeLab. There are no forks. There are no monkey patches. There are no special cases in HomeLab core for "and now we also do k8s". The k8s story is one NuGet that the user installs and one config field they flip. The architecture says yes.

This is the real proof that the HomeLab plugin system from homelab-docker Part 10 was the right design. K8s — the most complex single thing a homelab might host — fits inside it. Anything smaller (Talos, k0s, MicroK8s, a custom CRD operator, a private DSL for a team's domain) fits inside it too.

If you ship a plugin to HomeLab and your plugin works invisibly alongside the core verbs, the architecture has done its job. You spent your time on the domain logic; the plumbing was already there.

That is the bet of both series: build the plumbing once, share it across every domain. K8s is one domain. The next domain — whatever it is — is one NuGet away.



Thank you for reading. Now go build something — or ship a plugin.
