serard@dev00:~/cv

The Problem

Kubernetes runs on YAML. The YAML is wrong.

Not always wrong. Just often enough that every team eventually invents its own wrapper — Helm charts, Kustomize overlays, jsonnet templates, hand-rolled bash, a Python script that templates a Python script. Each wrapper makes a different subset of mistakes harder, and a different subset easier.

The thesis of this series is that none of those wrappers go far enough. The mistakes are typeable. The C# compiler should refuse to build a manifest that names a property that doesn't exist in your target cluster's K8s minor. The Roslyn analyzer should refuse to commit a Service whose selector matches no Pod template in the same compilation unit. The whole class of errors we're about to enumerate should die at dotnet build time, not at kubectl apply time, and not at 3 a.m. when the on-call gets paged.

Let's enumerate them.

Silent typos pass admission

Kubernetes is permissive about unknown fields by default. Strict field validation exists — kubectl apply --validate=strict, backed by server-side field validation since 1.25 — but it's opt-in on older clients and clusters. Most teams don't enforce it. Even when they do, their CI pipelines often don't.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-api
spec:
  replicas: 3
  selector:
    matchLabel:                  # typo: should be matchLabels
      app: order-api
  template:
    metadata:
      labels:
        app: order-api
    spec:
      contianers:                # typo: should be containers
        - name: api
          image: ghcr.io/acme/order-api:1.4.2

Both typos are silently dropped. The deployment "succeeds." Zero pods are created. The service has no endpoints. The pipeline is green. Production is empty.
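If a live cluster is reachable, the failure mode above can at least be surfaced before merge with kubectl's own strict validation (both flags exist in current kubectl; server-side field validation requires a 1.25+ API server):

```shell
# --validate=strict turns unknown fields into an error instead of a silent drop;
# --dry-run=server runs full server-side validation without persisting anything.
kubectl apply --validate=strict --dry-run=server -f deployment.yaml
```

This catches matchLabel and contianers — but only at apply time, only with a cluster in reach, and only if every pipeline remembers the flags. Which is the point of pushing the check into the compiler instead.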

The C# equivalent — new V1DeploymentSpec { MatchLabel = ... } — won't compile. The property doesn't exist on the type.
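To make that concrete with the existing k8s.Models POCOs (from the KubernetesClient package — used here only for illustration; the generated Kubernetes.Dsl types behave the same way):

```csharp
using System.Collections.Generic;
using k8s.Models;

var spec = new V1DeploymentSpec
{
    Replicas = 3,
    Selector = new V1LabelSelector
    {
        // MatchLabel = ...  // CS0117: 'V1LabelSelector' does not contain a definition for 'MatchLabel'
        MatchLabels = new Dictionary<string, string> { ["app"] = "order-api" },
    },
};
```

The misspelling is a compile error with a file and line number, not a field silently discarded by the API server.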

Version drift

Kubernetes ships a new minor every ~3 months. Each release adds, deprecates, and removes API surface. PodSecurityPolicy was deprecated in 1.21 and removed in 1.25. extensions/v1beta1 Ingress was removed in 1.22. The flowcontrol.apiserver.k8s.io/v1 graduation happened in 1.29. The HPA behavior block landed in autoscaling/v2beta2 in 1.18 and graduated with autoscaling/v2 in 1.23.

There is no way to know which API surface is available in which K8s minor without reading release notes for every version your fleet runs. And teams that maintain a multi-cluster fleet usually run several minors at once — staging on 1.31, prod on 1.30, the legacy cluster on 1.27 because the upgrade is scheduled for next quarter.

The hand-written manifest doesn't know which cluster it's targeting. It just is.

The C# equivalent — a generated POCO with [SinceVersion("1.18")] and [UntilVersion("1.25")] annotations on every property, plus a [KubernetesBundle(TargetClusterCompatibility = "1.27")] declaration on the consuming assembly — turns version drift into compiler warnings.
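A sketch of what that generated surface could look like — the attribute names are the ones this series proposes, not a shipped package:

```csharp
// Hypothetical generated output. The analyzer compares each property's
// version window against the assembly-wide target and emits a diagnostic
// on any mismatch.
[assembly: KubernetesBundle(TargetClusterCompatibility = "1.27")]

public sealed partial class V1beta1PodSecurityPolicySpec { /* ... */ }

public sealed partial class V1beta1PodSecurityPolicy
{
    // Present from 1.10, removed in 1.25 — fine when the legacy 1.24
    // cluster is the target, flagged when the assembly targets 1.27.
    [SinceVersion("1.10")]
    [UntilVersion("1.25")]
    public V1beta1PodSecurityPolicySpec Spec { get; set; }
}
```

The same manifest code can then be compiled once per target minor, and each compilation tells you exactly which properties don't exist there.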

CRD sprawl

The K8s ecosystem is mostly CRDs now. Argo Rollouts, Prometheus Operator, KEDA, cert-manager, Gatekeeper, Istio, Litmus, External Secrets, Cluster API, Knative, Kyverno, Crossplane. Each one ships a CustomResourceDefinition YAML with its own openAPIV3Schema. The schemas are detailed, evolving, and version-aware. They're also entirely opaque to your IDE.

You write argoproj.io/v1alpha1 Rollout manifests by copy-pasting from the Argo docs, hoping the field you copied wasn't renamed in the latest minor. A field that was strategy.canary.steps[].setWeight in v1.6 might still exist in v1.7 — or it might be deprecated, or removed, or moved to strategy.canary.weight. The doc page often shows only the latest. Your cluster might be on an older one.

The C# equivalent treats CRDs no differently from core types. Same [SinceVersion("argo-rollouts/v1.6.0")] annotations. Same Roslyn diagnostics. Same builders. The CRD sprawl becomes 150 more typed classes in the same namespace.

Helm and Kustomize don't fix this

Helm templates are not YAML — they're Go-templated text that emits YAML if you're lucky. The template engine doesn't know about K8s schemas. {{ .Values.image }} will happily render null and produce a Pod with image: null. Kustomize overlays apply strategic-merge patches that only validate after merging, in a separate pass that most CI pipelines skip. Both are improvements over raw YAML — neither is typed.
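A minimal illustration of the gap, assuming a typical chart layout — the engine substitutes text without consulting any schema, and the only defense is Helm's per-field required convention:

```yaml
# templates/deployment.yaml — Go-templated text; it is only YAML after rendering
    spec:
      containers:
        - name: api
          image: {{ .Values.image }}   # an unset value renders with no schema error
          # the sole guard is manual, per-field, and by convention:
          # image: {{ required "image is required" .Values.image }}
```

Every field you forget to guard is a field the cluster sees unvalidated.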

The truth is that every K8s configuration tool today picks two of three: typed, diff-friendly, runtime-flexible. Helm picks runtime-flexible + diff-friendly (no types). Kustomize picks runtime-flexible + diff-friendly (no types). Pulumi picks typed + runtime-flexible (no diff-friendly checked-in YAML). cdk8s picks typed + runtime-flexible (no diff-friendly checked-in YAML, and a JSII runtime).

Kubernetes.Dsl picks typed + diff-friendly and gives up runtime flexibility on purpose. The output is .yaml files checked into git. You apply them with kubectl apply -f like any other YAML. The build is the runtime.

"But KubernetesClient/csharp already exists"

It does. It is excellent. It is not what Kubernetes.Dsl replaces.

The official KubernetesClient/csharp library is a runtime client for talking to a live apiserver. You construct V1Pod objects, hand them to a Kubernetes client instance, and the client makes HTTP calls. The types are hand-curated. They ship one version per K8s minor. You pin one version per project. There are no compile-time analyzers for missing resource limits or cross-resource label-selector mismatches. There is no way to author manifests for multiple K8s minors from the same code.

              KubernetesClient                       Kubernetes.Dsl
When          At runtime, against a live cluster     At build time, into a checked-in .yaml file
Who           Operators, controllers, CI bots        Developers, source generators, Ops.Dsl bridges
Versioning    One client per K8s minor               [SinceVersion]/[UntilVersion] per property
CRDs          Hand-written wrapper per CRD           Same emitter ingests every CRD uniformly
Analyzers     None                                   KUB001–KUB099

You use both. KubernetesClient reads the live cluster (operators, reconcilers, kubectl plugins). Kubernetes.Dsl writes the YAML that the cluster reads (manifests, Helm templates, Kustomize bases). They share zero code and they should — they have different jobs.

This series will keep returning to the distinction in Part 2, Part 10, and Part 15. It's the most common source of confusion and it deserves repetition.

Drift over time

A maintained .yaml file rots. The K8s minor under it moves. The CRD bundle versions move. The team that wrote it leaves. The wiki page that explained the deployment topology was last updated in 2023 and references a service that was renamed in 2024. The Helm chart is on v0.4.7 and the Argo Rollout it depends on bumped from v1.5 to v1.7 last quarter, and nobody noticed because nothing failed loudly.

Generated .g.cs files don't rot. They get regenerated on every build from the schemas in schemas/. Bumping a K8s minor is one CLI command. Bumping a CRD bundle tag is one CLI command. The diff lands in a PR. The analyzer flags every property that disappeared. The compiler flags every type that no longer exists. There is no quiet drift — only loud breakage that tells you exactly what to fix.

This is the real argument for typed manifests. Not "typos at write time" (Helm and Kustomize partially help with that). Not "version awareness in the IDE" (autocomplete is nice but not life-changing). The argument is drift over time. A manifest that was correct three years ago should still be correct today, or it should fail loudly enough that someone fixes it. Generated artifacts give you that for free. Hand-written ones never do.

What we're going to build

The next 14 chapters build it.

  • A Roslyn incremental source generator that ingests checked-in OpenAPI v3 dumps (core K8s) and CRD YAML bundles (in their native upstream format), parses them, merges across minors and CRD tags, and emits ~600 typed POCOs + builders.
  • A typed kubectl client generated from kubectl --help via BinaryWrapper, with no apiserver involvement.
  • A K8s YAML reader/writer that handles multi-doc streams, the apiVersion/kind discriminator, status omission, and the special types (IntOrString, Quantity, Duration, RawExtension).
  • A Roslyn analyzer pack with diagnostics KUB001–KUB099 covering required fields, deprecated apiVersions, best-practice warnings, and cross-resource validation.
  • A bridge from every K8s-emitting Ops.Dsl sub-DSL into typed Kubernetes.Dsl objects.

Reused from the existing FrenchExDev ecosystem: the BinaryWrapper infrastructure (recursive --help, Cobra parser, container runtime), Builder.SourceGenerator.Lib (the pure-function emitter library), the four-project layout pattern from GitLab.Ci.Yaml, the SchemaVersionMerger pattern, and YamlDotNet for parsing CRD YAML into the same JsonNode shape as the core OpenAPI dumps.

New code: an OpenApiV3SchemaEmitter, a CrdSchemaEmitter, the schemas downloader, the K8s YAML reader/writer, the analyzer pack, the type registry generator, the format dispatcher.

In numbers: ~35 lines of new SG dispatcher code, ~30 lines of deleted normalization code in the downloader, one new SG package reference (YamlDotNet), and a 600-class typed surface that doesn't exist anywhere in the .NET ecosystem today.

The next chapter sketches the architecture.


Next: Part 2: High-Level Architecture — Two Tracks, Eight Projects
