Kubernetes.Dsl: Schema-Driven Typed Manifests for .NET
A Kubernetes manifest should be a generated artifact, not a hand-written one. The Kubernetes OpenAPI spec is the source of truth: pull it through the same wrapper pipeline that produced GitLab.Ci.Yaml, get the same payoff (zero reflection, multi-version awareness, fluent builders, YAML round-trip), and let every Ops.Dsl sub-DSL emit into that typed surface instead of string-templating YAML. Manifests get checked into the repo, not pushed at runtime: Kubernetes.Dsl is a dev-side tool, not a cluster client.
Kubernetes.Dsl is a schema-driven code-generation library. At compile time, a Roslyn incremental source generator ingests checked-in OpenAPI v3 dumps (core K8s) and CRD YAML bundles (Argo Rollouts, Prometheus Operator, KEDA, cert-manager, Gatekeeper, Istio, Litmus) and emits ~600 typed C# models, fluent builders, and version metadata, all from a single [KubernetesBundle] attribute. Hand-written infrastructure handles K8s YAML serialization, multi-doc bundles, the typed kubectl client (via BinaryWrapper), and a Roslyn analyzer pack with diagnostics KUB001–KUB099.
The result: zero runtime reflection, full IntelliSense, compile-time safety, and multi-version awareness across both core K8s minors (1.27 → 1.31) and CRD bundle tag histories (e.g., argo-rollouts v1.5 → v1.7.2).
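Concretely, authoring a manifest could look like the following. This is a minimal sketch only: `V1Deployment` and `V1DeploymentBuilder` here are hand-written stand-ins that follow the naming conventions above, not the actual generated Kubernetes.Dsl surface.

```csharp
// Illustrative stubs standing in for the generated surface; the real
// generated types carry the full spec, [SinceVersion] metadata, and
// Result<T>-based validation.
var deployment = new V1DeploymentBuilder()
    .WithName("order-service")
    .WithReplicas(3)
    .WithImage("registry.example.com/order-service:1.4.2")
    .Build();

public sealed class V1Deployment
{
    public string ApiVersion => "apps/v1";
    public string Kind => "Deployment";
    public string? Name { get; set; }
    public int? Replicas { get; set; }
    public string? Image { get; set; }
}

public sealed class V1DeploymentBuilder
{
    private readonly V1Deployment _d = new();
    public V1DeploymentBuilder WithName(string name) { _d.Name = name; return this; }
    public V1DeploymentBuilder WithReplicas(int replicas) { _d.Replicas = replicas; return this; }
    public V1DeploymentBuilder WithImage(string image) { _d.Image = image; return this; }
    public V1Deployment Build() => _d; // real builder would return Result<V1Deployment>
}
```

The point of the fluent chain is that a typo like `WithReplcias` is a compile error, where the equivalent typo in a YAML string template would pass silently.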
Positioning vs KubernetesClient/csharp
Reader's likely first question: "Doesn't KubernetesClient/csharp already exist?" Yes — and it is not what Kubernetes.Dsl replaces. The two are complementary:
| | KubernetesClient | Kubernetes.Dsl |
|---|---|---|
| Type source | Hand-curated, generated once per release | Schema-driven, regenerated on every build, multi-version merged |
| Target audience | Runtime operators (controllers, operators, CI bots) | Dev-side manifest authors, source generators, Ops.Dsl bridges |
| Output | HTTP calls to apiserver | .g.cs POCOs + builders + .yaml files checked into git |
| Versioning | One client per K8s minor | [SinceVersion]/[UntilVersion] on every property across all minors |
| Builders | Object initializers | Fluent WithXxx(...) chains with Result<T> validation |
| CRDs | Hand-written wrappers per CRD | Same schema reader ingests CRDs uniformly |
| Compile-time analyzers | None | KUB001–KUB099 Roslyn diagnostics |
| Use it for | Talking to a live cluster | Generating the YAML you kubectl apply -f |
You use both. KubernetesClient reads cluster state. Kubernetes.Dsl writes the manifests you apply.
Two independent design-time tracks
There are two separate tracks, neither of which needs a running Kubernetes cluster:
Track A — Typed kubectl CLI (BinaryWrapper, no cluster, no schemas)
Track A is pure CLI introspection. The container hosts the kubectl binary at a pinned version. BinaryWrapper recursively probes --help to build the command tree. No apiserver. No cluster. No schemas.
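The scrape itself is conceptually simple. The following toy sketch (not the actual BinaryWrapper implementation) shows the core idea: parse the `Available Commands:` section of Cobra-style help output, then repeat the probe for each subcommand.

```csharp
// Toy sketch of the recursive --help scrape — not BinaryWrapper itself.
// Cobra-style help lists subcommands under "Available Commands:".
using System;
using System.Linq;

string help =
    "kubectl controls the Kubernetes cluster manager.\n" +
    "\n" +
    "Available Commands:\n" +
    "  apply       Apply a configuration to a resource\n" +
    "  get         Display one or many resources\n" +
    "  rollout     Manage the rollout of a resource\n";

var subcommands = help.Split('\n')
    .SkipWhile(l => !l.StartsWith("Available Commands:"))
    .Skip(1)
    .TakeWhile(l => l.StartsWith("  "))
    .Select(l => l.TrimStart().Split(' ')[0])
    .ToList();

// In the real pipeline, each entry is probed with
// "kubectl <subcommand> --help" and the walk recurses.
Console.WriteLine(string.Join(", ", subcommands)); // prints: apply, get, rollout
```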
Track B — Typed POCOs and builders (checked-in OpenAPI dumps + CRD YAML)
Track B has no cluster anywhere in the loop — not even a containerized one. Schemas are checked-in files in their native upstream format: core K8s OpenAPI as .json, CRD bundles as .yaml. A small refresh CLI (Kubernetes.Dsl.Schemas.Downloader) fetches new versions on demand; that's the only thing that ever talks to the network.
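Opting a project into generation might look like the fragment below. This is a hedged sketch of the `<AdditionalFiles>` wiring described in Part 2; the paths and glob patterns are illustrative assumptions, not the library's documented layout.

```xml
<!-- Illustrative csproj fragment: exact paths and item metadata are assumptions. -->
<ItemGroup>
  <!-- Core K8s OpenAPI v3 dumps, one directory per tracked minor -->
  <AdditionalFiles Include="schemas/core/v1.27/**/*.json" />
  <AdditionalFiles Include="schemas/core/v1.31/**/*.json" />
  <!-- CRD YAML bundles, e.g. the Argo Rollouts tag history -->
  <AdditionalFiles Include="schemas/crds/argo-rollouts/**/*.yaml" />
</ItemGroup>
```

A single [KubernetesBundle] attribute in the project then tells the source generator which API groups and versions to materialize from those files.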
*Diagram legend: red = new code, green = reused as-is from the FrenchExDev ecosystem.*
Part 1: The Problem — YAML Manifests, Drift, and the Untyped Cluster
Why hand-writing K8s YAML is a minefield — silent typos that pass admission, version drift across K8s minors, CRD sprawl, the cdk8s/Pulumi positioning gap, and why the official KubernetesClient/csharp is not what Kubernetes.Dsl replaces.
Part 2: High-Level Architecture — Two Tracks, Eight Projects
The four-project pattern adapted to K8s, the two independent tracks (BinaryWrapper kubectl + OpenAPI/CRD ingestion), the eight-project layout, the [KubernetesBundle] attribute, and how schemas opt in via <AdditionalFiles>.
Part 3: Schema Acquisition — Where Schemas Come From
The Kubernetes.Dsl.Schemas.Downloader CLI + library. Core K8s OpenAPI v3 from kubernetes/kubernetes. CRD YAML from upstream repos. SHA-256 pinning, license attribution, the _sources.json shape. Refresh workflow.
Part 4: SchemaInputReader — Parsing YAML and JSON Into One Tree
The format dispatcher. System.Text.Json for .json, YamlDotNet for .yaml/.yml, both producing JsonNode. Why the rest of the SG never knows the original format. The CRD envelope walker.
Part 5: Multi-Version Schema Merging — Across Both Core and CRDs
Reusing SchemaVersionMerger. Core K8s 1.27 → 1.31. CRD bundle tag histories (argo-rollouts/v1.5.0 → v1.7.2). Per-property [SinceVersion]/[UntilVersion]. Bundle-prefixed identifiers. The [StorageVersion] flag.
Part 6: Code Emission and Special Types
OpenApiV3SchemaEmitter + BuilderEmitter. Naming conventions (V1Pod, V1PodBuilder). Required fields → non-nullable. The Kubernetes vendor extensions (x-kubernetes-*). Special types: IntOrString, Quantity, Duration, RawExtension. Discriminated unions for oneOf.
Part 7: K8s YAML Serialization — Multi-Doc, Discriminator, Round-Trip
The hand-written KubernetesYamlReader/Writer. Multi-doc streams. apiVersion/kind discriminator dispatch via the generated type registry. Status omission on write. IntOrString/Quantity converters. The round-trip fidelity caveat.
Part 8: Incremental Generator Performance — Surviving 600 Types × 5 Versions
The elephant in the room. ~3000 type-versions vs GitLab's 330. API-group filtering at the attribute level. Per-type AddSource deduplication. UnifiedSchema caching keyed by SHA-256. The phased delivery slices (v0.1 → v0.5).
Part 9: CRDs as First-Class Citizens
The CrdSchemaEmitter. Multi-tag CRD ingestion. Argo Rollouts, Prometheus Operator, KEDA, cert-manager, Gatekeeper, Istio, Litmus. The seven supported CRD bundles. In-house CRDs in schemas/crds/local/. Same [SinceVersion]/[UntilVersion] model as core types.
Part 10: kubectl as a BinaryWrapper Target
Track A end-to-end. The recursive --help scrape. Cobra parser reuse. Container runtime. The generated KubectlClient. Server-side apply, field manager, strategic merge patches. The KubectlClientExtensions shim that bridges the two tracks. kubectl plugin support.
Part 11: Roslyn Analyzers — KUB001 through KUB099
The diagnostic catalog. oneOf violations. Missing required fields. Deprecated apiVersions. Best-practice warnings (no resource limits, no liveness probe, latest image tag). Cross-resource validation (Service selector matches no Pod template). CRD-specific rules. Layering vs Ops.Dsl analyzers.
Part 12: Contributors, Bundles, and Helm/Kustomize Interop
IKubernetesContributor for modular manifest assembly. KubernetesBundleBuilder. Multi-doc YAML emission. Helm chart layout (templates/). Kustomize base layout (base/). Why Kubernetes.Dsl coexists with Helm and Kustomize rather than replacing them.
Part 13: Ops.Deployment Bridge — From Attributes to Typed Manifests
The bridge pattern. [Deployment] + [DeploymentApp] + [DeploymentDependency] source attributes → generated OrderServiceOpsDeployment.g.cs that emits typed V1Deployment + V1Service + V1Alpha1Rollout objects. The two-stage SG pattern from Ddd.Entity.Dsl. Round-trip caveat.
Part 14: Composition Walkthrough — One Service, All 12 K8s-Emitting Ops Sub-DSLs
OrderServiceV3 decorated with attributes from all 12 K8s-emitting Ops sub-DSLs. ~28 generated manifests. The complete bridge surface verified against Ops.Dsl chapter bodies. Analyzer report at the end.
Part 15: Comparison and Vision
Fair comparison vs KubernetesClient/csharp, cdk8s, Pulumi.Kubernetes, KSail. Phased delivery (v0.1 → v0.5). The dev-side framing. The repo-as-build-artifact thesis. Why the compiler does not sleep, and the wiki does.
Mapping to the Ops.Dsl ecosystem (verified)
Kubernetes.Dsl is the typed cloud-tier output target for 12 of the 22 Ops sub-DSLs. Verified against the chapter bodies of Ops.Dsl Ecosystem:
| Ops sub-DSL | Source attributes | K8s manifests | Kubernetes.Dsl typed surface |
|---|---|---|---|
| Ops.Deployment | [DeploymentOrchestrator], [DeploymentApp], [DeploymentDependency], [DeploymentGate] | apps/v1 Deployment, v1 Service, argoproj.io/v1alpha1 Rollout | V1Deployment, V1Service, V1Alpha1Rollout |
| Ops.Migration | [MigrationStep], [ExeMigration], [MigrationValidation] | batch/v1 Job | V1Job |
| Ops.Observability | [HealthCheck], [Metric], [AlertRule], [Dashboard] | monitoring.coreos.com/v1 ServiceMonitor, PrometheusRule | V1ServiceMonitor (CRD), V1PrometheusRule (CRD) |
| Ops.Configuration | [ConfigTransform], [Secret], [EnvironmentMatrix] | v1 ConfigMap, external-secrets.io/v1beta1 ExternalSecret | V1ConfigMap, V1Beta1ExternalSecret (CRD) |
| Ops.Resilience | [CanaryStrategy], [CircuitBreaker], [RetryPolicy] | argoproj.io/v1alpha1 Rollout, AnalysisTemplate | V1Alpha1Rollout, V1Alpha1AnalysisTemplate |
| Ops.Chaos | [ChaosExperiment], [FaultInjection], [SteadyStateProbe] | litmuschaos.io/v1alpha1 ChaosEngine, ChaosSchedule | V1Alpha1ChaosEngine, V1Alpha1ChaosSchedule |
| Ops.Security | [SecurityPolicy], [RbacRule], [AuditPolicy] | v1 ServiceAccount, rbac.authorization.k8s.io/v1 Role, RoleBinding | V1ServiceAccount, V1Role, V1RoleBinding |
| Ops.Infrastructure | [ContainerSpec], [StorageSpec], [CertificateSpec], [DnsRecord] | apps/v1 Deployment, cert-manager.io/v1 Certificate, ClusterIssuer, networking.k8s.io/v1 Ingress | V1Deployment, V1Certificate (CRD), V1ClusterIssuer (CRD), V1Ingress |
| Ops.Networking | [IngressRule], [MtlsPolicy], [NetworkPolicy], [EgressRule] | NetworkPolicy, Istio PeerAuthentication, AuthorizationPolicy, Ingress | V1NetworkPolicy, V1Beta1PeerAuthentication (CRD), V1Beta1AuthorizationPolicy (CRD), V1Ingress |
| Ops.DataGovernance | [BackupPolicy], [RetentionPolicy], [GdprDataMap], [SeedData] | batch/v1 CronJob, Job | V1CronJob, V1Job |
| Ops.Compliance + SupplyChain | [ComplianceFramework], [ComplianceControl], [DataResidency] | templates.gatekeeper.sh/v1 ConstraintTemplate, Constraint | V1ConstraintTemplate (CRD), V1Beta1Constraint (CRD) |
| Ops.Capacity | [AutoscaleRule], [ThrottlePolicy], [ScaleToZero] | HPA, VPA, KEDA ScaledObject, ConfigMap | V2HorizontalPodAutoscaler, V1VerticalPodAutoscaler (CRD), V1Alpha1ScaledObject (CRD), V1ConfigMap |
The punchline: every Ops sub-DSL stops string-templating YAML and starts constructing typed Kubernetes.Dsl objects. One central serializer. One central CRD registry. One central versioning story. One central analyzer surface.
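The difference in failure mode is easy to show. A minimal sketch, assuming an illustrative `V1ServiceBuilder` stand-in for the generated surface: the string template compiles no matter what you typo, while the typed version does not.

```csharp
// BEFORE — string templating: the typo "tagetPort" compiles and ships
// to the cluster, where it is silently dropped or rejected much later.
var yaml =
    "apiVersion: v1\n" +
    "kind: Service\n" +
    "spec:\n" +
    "  ports:\n" +
    "  - port: 80\n" +
    "    tagetPort: 8080\n";

// AFTER — typed construction: the same typo (WithTagetPort) is a
// compile error, because no such method exists on the builder.
var port = new V1ServiceBuilder().WithPort(80).WithTargetPort(8080).Build();

// Illustrative stubs standing in for the generated V1Service surface.
public sealed record ServicePort(int Port, int TargetPort);

public sealed class V1ServiceBuilder
{
    private int _port, _targetPort;
    public V1ServiceBuilder WithPort(int port) { _port = port; return this; }
    public V1ServiceBuilder WithTargetPort(int targetPort) { _targetPort = targetPort; return this; }
    public ServicePort Build() => new(_port, _targetPort);
}
```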
How to read this series
| You are a... | Read in this order |
|---|---|
| Architect | Parts 1–2, 13–15 |
| Developer authoring manifests | Parts 1–2, 6–7, 9, 12 |
| SRE / Platform engineer | Parts 1, 7, 9–10, 12, 14 |
| Bridge author (Ops.Dsl → K8s) | Parts 5–6, 13–14 |
| Roslyn / SG enthusiast | Parts 4–6, 8, 11 |
Prerequisites
- Familiarity with C# attributes and source generators
- Basic understanding of Roslyn analyzers (see Contention over Convention)
- Understanding of the M3 meta-metamodel (see Building a Content Management Framework)
- The GitLab.Ci.Yaml series, which establishes the four-project pattern this series adapts
- The Ops.Dsl Ecosystem series, which defines the 22 Ops sub-DSLs that bridge into Kubernetes.Dsl
Related posts
- GitLab.Ci.Yaml: Schema-Driven Typed Pipelines for .NET — the schema-driven SG pattern this series reuses
- BinaryWrapper — the recursive `--help` introspection pattern that powers the typed `kubectl` client
- Ops DSL Ecosystem: Terraform for C# — the 22 Ops sub-DSLs that emit into Kubernetes.Dsl
- Auto-Documentation from a Typed System — introduced the first 5 Ops sub-DSLs
- Contention over Convention over Configuration over Code — the Attribute + SG + Analyzer meta-pattern
- Building a Content Management Framework — the M3/M2/M1 meta-metamodel