Comparison and Vision
This is the closing chapter. The fair comparison against KubernetesClient/csharp, cdk8s, Pulumi.Kubernetes, and KSail. The phased delivery commitment (v0.1 → v0.5). The dev-side framing one more time. The vision: every typed system, eventually.
Comparison matrix
| Tool | Type source | Runtime dep | Builders | Multi-version | CRDs | Analyzers | Output |
|---|---|---|---|---|---|---|---|
| KubernetesClient/csharp | Hand-curated, one per K8s minor | jsii not required, but heavy HTTP client | Object initializers | One client per minor | Hand-written wrapper per CRD | None | Runtime HTTP calls to apiserver |
| cdk8s (.NET binding) | jsii-bundled types from a Node toolchain | jsii runtime + Node | CDK constructs | Bundled per cdk8s version | Yes (jsii-imported) | None | YAML files |
| Pulumi.Kubernetes | Schema-generated, runtime-resolved | Pulumi engine | Resource args | Per provider version | Yes (CRD2Pulumi) | None | Provisioned resources OR YAML |
| KSail | Hand-curated abstractions over KubernetesClient | None | Fluent setup | Single K8s minor | None | None | Cluster bootstrapping |
| Kubernetes.Dsl | Schema-driven from native upstream YAML/JSON, regenerated on every build | None — pure C# | Builder.SourceGenerator.Lib fluent builders | [SinceVersion]/[UntilVersion] per property across all minors AND all CRD bundle tags | Same emitter ingests CRDs uniformly, 7 bundles supported out of the box | KUB001–KUB099 Roslyn diagnostics, including cross-resource validation | YAML files checked into repo |
The honest distinction:
- KubernetesClient/csharp targets the cluster as a runtime endpoint. You talk to a live apiserver. You read state, write state, watch resources. It's a runtime client.
- cdk8s and Pulumi.Kubernetes target the cloud as a runtime. You declare infrastructure in code, the engine reconciles. The intermediate YAML is an implementation detail.
- Kubernetes.Dsl targets the repo as a build artifact. You author manifests in C#, the build emits checked-in YAML, and kubectl apply -f is the deployment step. There is no runtime engine. The build is the runtime.
This mirrors how GitLab.Ci.Yaml targets .gitlab-ci.yml files, not GitLab's REST API. The unifying theme of this whole ecosystem is typed authoring of files that are then consumed by something else's runtime.
Why "repo as build artifact"
Three reasons.
1. Auditability
A YAML file checked into git has a complete history. Every change is a commit. Every commit has an author, a timestamp, a diff, a PR. Six months from now you can answer "why is replicas: 5?" by running git blame. With a runtime tool that reconciles infrastructure (cdk8s, Pulumi), you can answer the same question only by digging through the engine's state file or the cloud provider's audit log — both of which are less convenient and less complete.
2. Deployment portability
The team that runs Argo CD doesn't want a Pulumi engine in their reconciliation loop. The team that runs Flux wants pure YAML in git. The team that runs Helm wants templates. Checked-in YAML works with all of them — Argo CD reads it from git, Flux reads it from git, Helm wraps it in a chart. A repo-as-build-artifact tool composes with every existing GitOps workflow without imposing its own runtime.
3. Drift visibility
When the source of truth is a YAML file in git, drift is impossible to hide. A K8s minor bump changes the _sources.json SHA-256s. A CRD bundle bump changes the generated .g.cs files. A schema bump changes the [SinceVersion] annotations. Every drift produces a commit, which produces a PR, which gets reviewed.
When the source of truth is a Pulumi state file or a cdk8s synth output, drift is invisible until the runtime engine notices and decides what to do about it. This is a real engineering problem — the team that "shipped" a feature six months ago and never looked at it again has no idea their HPA is now broken because the K8s minor bumped under them.
The compiler does not sleep. The wiki does. The runtime engine sometimes wakes up at 3 a.m. and reconciles. None of those are as good as a typed build artifact in git.
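To make the multi-version mechanics concrete, here is a sketch of what a generated property with version annotations might look like. The class name, property names, and attribute signatures are all invented for illustration; the real generated .g.cs output may differ.

```csharp
// Illustrative only — hypothetical names and attribute shapes, not the
// actual generator output.
public sealed partial class ExampleWorkloadSpec
{
    // Present in every supported K8s minor: no annotation needed.
    public int? Replicas { get; set; }

    // Hypothetical field added upstream in 1.30: the annotation records the
    // range so an analyzer can warn when the target cluster is older.
    [SinceVersion("1.30")]
    public string? NewPolicyField { get; set; }

    // Hypothetical field removed upstream after 1.28: still compiles, but a
    // diagnostic can flag it for newer target clusters.
    [UntilVersion("1.28")]
    public string? LegacyPolicyField { get; set; }
}
```

A schema bump that moves a field between these buckets shows up as a diff in a generated file, which is exactly the drift-as-commit property: the change is reviewable in a PR rather than discovered by a runtime engine.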
Phased delivery commitment
| Slice | What it ships | Used by |
|---|---|---|
| v0.1 | Core API: core/v1, apps/v1, networking.k8s.io/v1. ~80 types. K8s 1.31 only. JSON-only schemas. | Most teams (Pod/Service/Deployment/Ingress is 90% of usage) |
| v0.2 | + rbac/v1, batch/v1, autoscaling/v2, policy/v1. ~140 types total. | RBAC, Job/CronJob, HPA, PDB |
| v0.3 | + 7 CRD bundles (Argo Rollouts, Prometheus Operator, KEDA, cert-manager, Gatekeeper, Istio, Litmus). ~290 types total. YAML schema input enabled. | Ops.Dsl Cloud-tier targets |
| v0.4 | Full core API (storage.k8s.io, coordination.k8s.io, events.k8s.io, discovery.k8s.io, etc.). ~600 types total. | Parity with KubernetesClient/csharp |
| v0.5 | Multi-version [SinceVersion]/[UntilVersion] across both core (1.27 → 1.31) and CRD bundle tag histories. | Cluster-version-aware codegen across both layers |
Each slice is self-contained: a v0.1 user gets a working subset and never has to wait for v0.5. A v0.5 user gets the full multi-version surface but pays the corresponding dotnet build time (Part 8). Every slice is opt-in via [KubernetesBundle].Groups and Crds.
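As a sketch of what slice opt-in might look like at the call site — the attribute's property names and value formats here are assumptions for illustration, not a documented API:

```csharp
// Hypothetical opt-in declaration: request only the v0.1 core groups plus
// two of the seven CRD bundles. Names and formats are illustrative.
[KubernetesBundle(
    Groups = new[] { "core/v1", "apps/v1", "networking.k8s.io/v1" },
    Crds = new[] { "argo-rollouts", "cert-manager" })]
internal static class ManifestRoot { }
```

Narrower opt-in keeps the generated surface, and therefore the dotnet build time, proportional to what the project actually uses.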
When NOT to use Kubernetes.Dsl
Three honest "don't bother" cases:
- You operate a controller or operator. Use KubernetesClient/csharp. Kubernetes.Dsl writes manifests; controllers read live cluster state. Different jobs.
- You're already heavily invested in Pulumi or cdk8s. They work. Switching costs more than the type-safety gains, especially if your team is already comfortable with the runtime model.
- You want runtime infrastructure provisioning. Kubernetes.Dsl does not talk to cloud providers. It doesn't create VPCs, S3 buckets, or RDS instances. Use Terraform or Pulumi for that, then use Kubernetes.Dsl for the K8s manifests that run inside.
The right use case is: a team that authors Kubernetes manifests by hand or via Helm/Kustomize, deploys them via Argo CD / Flux / kubectl apply, and wants compile-time guarantees, multi-version awareness, and the Ops.Dsl bridge surface.
Coexistence with the existing ecosystem
Kubernetes.Dsl does not replace any of these. It composes with them.
| Tool | How it composes |
|---|---|
| KubernetesClient/csharp | Use it for runtime cluster reads, controllers, operators. Use Kubernetes.Dsl for the YAML you kubectl apply. |
| Helm | KubernetesYamlWriter.WriteHelmChart(bundle, "templates/") produces a Helm chart's templates/ directory. Helm wraps it. |
| Kustomize | KubernetesYamlWriter.WriteKustomizeBase(bundle, "base/") produces a Kustomize base. Overlays go on top. |
| Argo CD | Argo CD reads YAML from git. Kubernetes.Dsl writes YAML to git. They never meet. |
| Flux | Same. |
| kubectl | Track A wraps it. Bridge shim makes it accept typed objects. |
| Terraform/Pulumi | They provision the cloud. Kubernetes.Dsl provisions inside the cluster. Different layers. |
| Ops.Dsl ecosystem | Bridges (Part 13–14) emit typed Kubernetes.Dsl objects from Ops attributes. The 12 bridges replace the existing string-templated YAML in those sub-DSLs. |
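The Helm and Kustomize rows can be sketched as two writer calls. The method names are the ones given in the table, but how the bundle is constructed is elided here, and the surrounding code is illustrative:

```csharp
// Illustrative composition sketch: emit the same typed bundle as a Helm
// chart's templates/ directory and as a Kustomize base. `bundle` stands in
// for the typed manifest objects produced at build time.
KubernetesYamlWriter.WriteHelmChart(bundle, "templates/");    // Helm wraps it
KubernetesYamlWriter.WriteKustomizeBase(bundle, "base/");     // overlays go on top
```

Both calls write plain YAML files, so the downstream tools never know a C# build produced them.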
The vision
The thesis of this whole ecosystem — Cmf, GitLab.Ci.Yaml, BinaryWrapper, Ops.Dsl, Kubernetes.Dsl — is that every typed authoring problem deserves the same treatment: an Attribute, a Source Generator, an Analyzer pack, a NuGet package. The cost of building the first one was high (Cmf and the M3 metamodel). The cost of every subsequent one is much lower because the patterns compose.
Kubernetes.Dsl is the seventh or eighth application of the pattern, depending on how you count. It reuses BinaryWrapper.Design.Lib for Track A. It reuses Builder.SourceGenerator.Lib for builders. It reuses the four-project layout from GitLab.Ci.Yaml. It reuses SchemaVersionMerger (adapted to operate on JsonNode). It reuses the Roslyn analyzer scaffolding from Contention over Convention. It contributes a small amount of new code (SchemaInputReader, OpenApiV3SchemaEmitter, CrdSchemaEmitter, KubernetesYamlReader/Writer, the analyzer pack, the type registry, the format dispatcher) and gets a 600-class typed K8s surface that doesn't exist anywhere else in the .NET ecosystem.
The next typed system in this ecosystem will reuse even more. The cost curve goes down forever.
The compiler does not sleep. The wiki does.
This is the project's whole motto. Every wiki page that describes "how to deploy this service" is a candidate for replacement by a typed system that the compiler enforces. Every operational concern that's currently a Slack thread, a pinned message, or a Confluence page is a candidate for [Attribute] declaration. Every YAML file that drifts silently for six months is a candidate for [SinceVersion] annotations and KUB020 warnings.
The Kubernetes.Dsl release is the K8s-shaped piece of that vision. The Ops.Dsl ecosystem is the operational-knowledge piece. The CMF M2 DSLs are the domain piece. The next pieces — Helm.Dsl, Argo.Dsl, Flux.Dsl, Terraform.Dsl, who knows — will follow the same recipe.
Acknowledgements (the reused infrastructure)
This series builds on:
- BinaryWrapper for Track A's recursive --help introspection (HelpScraper, CobraHelpParser, ProcessRunnerContainerRuntime)
- Builder.SourceGenerator.Lib for the pure-function BuilderEmitter.Emit(BuilderEmitModel) that powers every fluent builder
- GitLab.Ci.Yaml for the four-project layout pattern, the SchemaVersionMerger pattern, and the YAML reader/writer technique reference
- YamlDotNet 16.3.0 (centrally pinned) for parsing CRD YAML in the SG and for runtime YAML I/O
- The CMF M3 metamodel for the meta-meta vocabulary that lets every DSL register itself uniformly
- The Contention over Convention series for the analyzer scaffolding pattern
And on the upstream Kubernetes ecosystem:
- kubernetes/kubernetes for the OpenAPI v3 dumps
- argoproj/argo-rollouts, prometheus-operator/prometheus-operator, kedacore/keda, cert-manager/cert-manager, open-policy-agent/gatekeeper, istio/istio, litmuschaos/chaos-operator for the seven CRD bundles
- Cobra for the help format that kubectl and every other Go CLI uses
All upstreams are Apache-2.0; the downloader writes a LICENSE-NOTICES file with attribution.
End of series
Fifteen chapters. ~600 typed classes. Two independent design-time tracks. Twelve Ops.Dsl bridge surfaces. One thesis: manifests should be a generated artifact, not a written one. The repo is the build artifact. The compiler enforces the schema. The analyzers enforce the practices. The Ops.Dsl ecosystem feeds the typed surface. The runtime is just kubectl apply -f.
The series shipped before the code, the same way Ops.Dsl Ecosystem shipped before the code. The architecture is committed. What remains is execution.
The compiler does not sleep.
Previous: Part 14: Composition Walkthrough — One Service, All 12 K8s-Emitting Ops Sub-DSLs
Series index: Kubernetes.Dsl: Schema-Driven Typed Manifests for .NET