Why
K8s.Dsl ships kubeadm and k3s, three topologies, the toolbelt-mandated architecture, the contributor patterns, the CLI plugin surface, the GitOps repo generator, the ArgoCD integration, the multi-client isolation, the day-2 operations, and the four real-world cases. By any reasonable measure, K8s.Dsl v1 is substantial.
But it does not do everything. There are concerns we deliberately deferred — not because they are unimportant, but because they would have ballooned the v1 scope. This part is the honest catalog.
The thesis: eight items are explicitly out of scope for K8s.Dsl v1. Each is a clear plugin opportunity for someone who wants to ship it. The roadmap is honest about which gaps exist and which of them the architecture leaves open for plugins to fill.
1. Windows Server containers / Windows nodes
What's missing: K8s.Dsl assumes every node is Linux (Alpine via the K8sNode Packer image). There is no support for joining a Windows Server worker node to a kubeadm cluster, and no Windows base image.
Why deferred: Windows nodes in Kubernetes work, but the audience is small (mostly enterprise migrations of Windows workloads to Kubernetes) and the operational story is significantly different: Windows containers cannot run Linux images, host networking works differently, and the storage drivers are different. The complexity multiplier is high; the value to the freelancer audience is low.
What it would take: A new Packer.Windows.K8sNode contributor that bakes a Windows Server 2022 image with containerd (Windows version) and the Windows-specific kubelet. A new topology variant k8s-multi-mixed that has Linux control planes and a Windows worker. The IK8sManifestContributor pattern works as-is because Kubernetes manifests are platform-agnostic.
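To illustrate why the manifest layer survives unchanged: scheduling a workload onto a Windows worker needs only the standard kubernetes.io/os node selector, which upstream Kubernetes already defines. The deployment name and image below are illustrative sketches, not part of K8s.Dsl:

```yaml
# Standard Kubernetes scheduling constraint — no manifest-contributor change needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-demo              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-demo
  template:
    metadata:
      labels:
        app: iis-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule only onto Windows workers
      containers:
        - name: iis
          image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
```

The kubelet sets the kubernetes.io/os label automatically on every node, which is exactly why the IK8sManifestContributor pattern does not need to change.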
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.Windows.
2. ARM nodes
What's missing: Same shape as Windows. K8s.Dsl assumes x86_64. ARM (Apple Silicon, Ampere, Raspberry Pi) is not supported in v1.
Why deferred: ARM works in real Kubernetes via the arm64 container manifests that most images now ship. The blocker is the Vagrant story: VirtualBox does not run on ARM macOS, Parallels does, and libvirt+KVM does on ARM Linux. Each provider needs its own ARM Packer base image. The matrix grows quickly.
What it would take: A Packer.Alpine.K8sNode.Arm64 contributor with aarch64 URLs and the right Vagrant provider defaults. A pre-pull check that verifies every container image has a matching arm64 manifest.
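The scheduling side again needs nothing new. A sketch of how a hypothetical mixed x86/ARM topology could pin an arch-sensitive workload, using the upstream kubernetes.io/arch label (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm-only-demo         # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64  # the kubelet sets this label automatically
  containers:
    - name: app
      image: nginx:alpine      # nginx publishes an arm64 manifest
```

The pre-pull check is the real work: it would have to resolve each image's manifest list and fail fast if no arm64 entry exists.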
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.Arm64.
3. Federated multi-cluster
What's missing: K8s.Dsl supports multiple clusters on one workstation, isolated from each other. It does not support clusters that cooperate — sharing services across clusters, replicating data, federating identity, presenting a single API surface across multiple clusters.
Why deferred: Federation is genuinely hard. KubeFed is dead. The current candidates (Karmada, Open Cluster Management, Cilium ClusterMesh) are all complex enough that supporting one of them properly is its own multi-month project.
What it would take: A new IFederationProvider plugin contract with implementations for Karmada or OCM. Plus a federation-aware variant of the topology resolver, the kubeconfig store (multi-context federation), and the GitOps repo generator (multi-cluster Application sets).
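For a sense of what a Karmada-backed IFederationProvider would have to emit, this is what Karmada's own propagation API looks like today; the cluster names mirror the per-client instance naming and are purely illustrative:

```yaml
# Karmada PropagationPolicy: tells the federation control plane which
# member clusters a resource should be replicated to.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation     # illustrative name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:           # illustrative per-client cluster names
        - client-a
        - client-b
```

Every layer listed above (topology resolver, kubeconfig store, GitOps generator) would need to understand objects like this, which is why federation is its own multi-month project.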
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.Federation.
4. Service mesh (Linkerd, Istio)
What's missing: K8s.Dsl uses bare Kubernetes services and Ingress. There is no service mesh sidecar injection, no mTLS between services, no traffic management beyond what the Ingress controller provides.
Why deferred: Service meshes are heavy. A real Istio install on a k8s-multi topology adds ~2 GB of RAM overhead. For most dev workloads the overhead is not justified. For users who need to test mTLS in dev (because production has it), the answer is a plugin.
What it would take: An IServiceMeshContributor plugin contract with implementations for Linkerd (the lightweight option) and Istio (the heavy option). The contributor injects sidecars into compose services, manages the service identity store, and rotates mTLS certs via the existing Tls library.
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.ServiceMesh.
5. Multi-tenant SaaS-style isolation (vs the per-instance pattern we chose)
What's missing: K8s.Dsl isolates tenants by giving each one a separate cluster (one HomeLab instance per client). It does not support multi-tenancy within a cluster: namespaces with hard quotas, network policies, and RBAC isolation that let multiple tenants safely share one cluster's infrastructure.
Why deferred: We chose per-instance isolation because it is structural (the architecture refuses to let two clients touch each other) instead of policy-enforced (which depends on the CNI honouring the network policy correctly). SaaS-style multi-tenancy is harder to do safely, and the freelancer use case does not need it.
What it would take: A MultiTenantNamespaceContributor that bundles all the cross-cutting RBAC, network policy, quota, and pod security admission rules per tenant. Plus a homelab k8s tenant create verb. Plus an architecture test that asserts no two tenants share resources.
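For scale: the per-tenant bundle a hypothetical MultiTenantNamespaceContributor would emit starts with a hard quota and a default-deny network policy, both standard upstream resources (the tenant name and limits are illustrative):

```yaml
# Hard resource ceiling for one tenant's namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-a         # illustrative tenant namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "30"
---
# Default-deny: no traffic in or out unless a later policy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a
spec:
  podSelector: {}             # matches every pod in the namespace
  policyTypes: [Ingress, Egress]
```

Note that the NetworkPolicy is exactly the policy-enforced isolation described above: it only works if the CNI implements it correctly, which is why per-instance isolation remains the default.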
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.MultiTenant. There is no v1 plan to ship this; the per-instance pattern covers the freelancer use case completely.
6. GPU sharing (vGPU, MIG)
What's missing: Part 48 covered single-GPU passthrough to one node. It does not cover sharing a GPU across multiple workloads via NVIDIA's MIG (Multi-Instance GPU) or vGPU technologies.
Why deferred: MIG works on data-center GPUs (A100, H100). vGPU requires NVIDIA's commercial license. Neither applies to the consumer GPUs (RTX 4090, etc.) that the freelancer audience actually has.
What it would take: A plugin that detects MIG-capable GPUs at boot, configures the MIG profiles, and exposes them via the device plugin's nvidia.com/mig-1g.5gb-style resources.
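What the consuming side would look like, assuming NVIDIA's device plugin is configured with its mixed MIG strategy (pod name and image are illustrative; the resource name format is the device plugin's own):

```yaml
# A pod requesting one 1g.5gb MIG slice instead of a whole GPU.
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo              # illustrative name
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1   # one MIG slice, advertised by the device plugin
```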
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.GpuSharing.
7. FPGAs and custom accelerators
What's missing: Same shape as GPUs but for FPGAs (Xilinx, Intel/Altera) and custom accelerators (Google Coral, Habana Gaudi, etc.).
Why deferred: Audience is tiny. Most freelancers do not have FPGA boards on their workstations.
What it would take: A plugin per accelerator with the right Vagrant passthrough config and the right device plugin DaemonSet.
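The consuming shape mirrors the GPU case: the vendor's device plugin advertises an extended resource and the pod requests it. The resource name below is purely illustrative — real names depend entirely on the vendor plugin and the board:

```yaml
# Fragment of a pod spec; the resource name is a placeholder,
# not a real device plugin identifier.
resources:
  limits:
    example.vendor.com/fpga: 1   # illustrative extended-resource name
```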
Plugin opportunity: One per accelerator vendor. K8s.Dsl core ships none.
8. Multi-region disaster recovery testing
What's missing: Velero restore tests run on the same workstation (a fresh ephemeral instance). They do not exercise the "the workstation died" scenario where you need to restore on a different machine entirely.
Why deferred: HomeLab K8s does not have a "ship the backups to another machine" story. The MinIO bucket lives on the workstation; if the workstation dies, the bucket is gone too.
What it would take: An off-host backup destination (S3, Backblaze B2, a NAS, a colleague's HomeLab), plus a periodic sync from the local MinIO to the off-host destination, plus a restore-test that runs on the off-host destination's contents.
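Velero already supports pointing a second BackupStorageLocation at any S3-compatible endpoint, which is the natural seam for the hypothetical OffHostBackup plugin; the bucket name and endpoint below are illustrative:

```yaml
# A second Velero storage location alongside the default local MinIO.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: offsite               # illustrative name
  namespace: velero
spec:
  provider: aws               # Velero's AWS plugin speaks to any S3-compatible store
  objectStorage:
    bucket: homelab-offsite-backups          # illustrative bucket
  config:
    region: us-west-004                      # illustrative B2 region
    s3Url: https://s3.us-west-004.backblazeb2.com   # illustrative endpoint
```

The plugin's remaining work is the sync from local MinIO to this location and the restore-test that reads back from it.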
Plugin opportunity: FrenchExDev.HomeLab.Plugin.K8sDsl.OffHostBackup. Not in core because it depends on out-of-host infrastructure that is not universal.
What this gives you
A clear v1 scope. The user knows what they are getting. The reader who needs one of the eight missing things knows it is missing and knows what shape the plugin would take. The community can ship the plugins; the K8s.Dsl core does not need to.