High-Level Architecture
Kubernetes.Dsl ships as eight projects organized around two independent tracks, plus a small set of shared runtime types. Both tracks share a contributor pattern, the YAML reader/writer, and the analyzer infrastructure, but their inputs are completely different and they could theoretically be shipped separately.
Two tracks, one library
Red = new code. Green = reused as-is from the FrenchExDev ecosystem.
The two tracks meet at exactly one place: a hand-written ~30-LOC shim (KubectlClientExtensions) that lets you call client.ApplyAsync(typedPod) where the typed pod comes from Track B and the client comes from Track A. The shim serializes the pod via the shared KubernetesYamlWriter, drops it to a temp file, and invokes the Track A typed Apply() builder. Part 10 walks through it.
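A minimal sketch of that shim's shape, with stand-in types so it compiles on its own. `KubectlClient`, `ApplyBuilder`, and the `Apply().WithFilename().RunAsync()` chain are assumptions about the Track A surface; the real shim is walked through in Part 10:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Stand-ins for the article's real types, so the shim shape is runnable here.
public interface IKubernetesObject { }
public sealed class PodStub : IKubernetesObject { }

public static class KubernetesYamlWriter
{
    // Stub: the real shared writer serializes the full typed object graph.
    public static string Write(IKubernetesObject o) => "apiVersion: v1\nkind: Pod\n";
}

public sealed class KubectlClient
{
    public string LastAppliedFile = ""; // test hook, not part of the real client
    public ApplyBuilder Apply() => new ApplyBuilder(this);
}

public sealed class ApplyBuilder
{
    private readonly KubectlClient _client;
    private string _filename = "";
    public ApplyBuilder(KubectlClient client) => _client = client;
    public ApplyBuilder WithFilename(string path) { _filename = path; return this; }
    public Task RunAsync() { _client.LastAppliedFile = _filename; return Task.CompletedTask; }
}

// The shim itself: serialize via the shared writer, drop to a temp file,
// invoke the Track A typed Apply() builder, clean up.
public static class KubectlClientExtensions
{
    public static async Task ApplyAsync(this KubectlClient client, IKubernetesObject resource)
    {
        string yaml = KubernetesYamlWriter.Write(resource);
        string path = Path.Combine(Path.GetTempPath(), $"{Guid.NewGuid():N}.yaml");
        await File.WriteAllTextAsync(path, yaml);
        try { await client.Apply().WithFilename(path).RunAsync(); }
        finally { File.Delete(path); } // never leave manifests in the temp dir
    }
}
```

The temp-file hop is the whole trick: Track B never learns about kubectl, and Track A never learns about typed POCOs.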
The eight projects
| # | Project | Output | Role |
|---|---|---|---|
| 1 | Kubernetes.Dsl.Attributes | netstandard2.0 lib | `[KubernetesBundle]`, `[KubernetesContributor]`, `[KubernetesResource]`, `[SinceVersion]`, `[UntilVersion]`, `[StorageVersion]` |
| 2 | Kubernetes.Dsl.Schemas.Downloader | netstandard2.0 lib + console front-end | Downloads OpenAPI dumps + CRD YAML to `schemas/`. Refresh-only. No build-time role. |
| 3 | Kubernetes.Dsl.Design | Console exe | Two subcommands: `fetch` (Track B refresh) and `introspect` (Track A kubectl tree capture in container) |
| 4 | Kubernetes.Dsl.SourceGenerator | Roslyn analyzer pack | Hosts SchemaInputReader, OpenApiV3SchemaEmitter, CrdSchemaEmitter. Reads `<AdditionalFiles>` from `schemas/`. Calls `BuilderEmitter.Emit()` per type. |
| 5 | Kubernetes.Dsl.Lib | NuGet | The package end users reference. Hand-written KubernetesYamlReader/Writer, IntOrString, Quantity, IKubernetesObject, IKubernetesContributor, KubernetesBundleBuilder. SG runs transitively. |
| 6 | Kubernetes.Dsl.Cli | NuGet | Track A output. Includes the BinaryWrapper-generated KubectlClient and the F3 hand-written shim (KubectlClientExtensions). |
| 7 | Kubernetes.Dsl.Analyzers | Roslyn analyzer pack (ships in Kubernetes.Dsl.Lib) | KUB001–KUB099 diagnostics |
| 8 | Kubernetes.Dsl.Tests | xUnit | Round-trip, golden-file, schema-pinning, analyzer tests |
This is the same four-project pattern used by GitLab.Ci.Yaml (Attributes, Design, SourceGenerator, Lib) extended with three K8s-specific projects (Schemas.Downloader, Cli, Analyzers) and the test project. Familiar shape, larger scope.
Three NuGet packages users see
| Package | Contains | Use case |
|---|---|---|
| Kubernetes.Dsl.Lib | POCOs + builders + analyzers + YAML I/O + contributor runtime | Author manifests in C#, write to YAML, check in |
| Kubernetes.Dsl.Cli | Typed KubectlClient + bridge shim | Apply manifests at runtime from .NET |
| Kubernetes.Dsl.Schemas | Pinned `schemas/**/*.json` + `schemas/**/*.yaml` content files | Pulled in transitively; users can override with their own folder |
The downloader CLI (Kubernetes.Dsl.Design) is not distributed as a NuGet package. It's run from source via `dotnet run --project Kubernetes.Dsl.Design --` when refreshing schemas.
The single user-facing knob: [KubernetesBundle]
End users opt in with one assembly attribute in any file in their consuming project:
```csharp
[assembly: KubernetesBundle(
    Groups = "core/v1, apps/v1, networking.k8s.io/v1",
    KubernetesVersion = "1.31",
    Crds = new[] { "argo-rollouts", "prometheus-operator", "keda" },
    SchemaRoot = "$(SolutionDir)schemas",
    TargetClusterCompatibility = new[] { "1.30", "1.31" })]
```

Attribute definition (in Kubernetes.Dsl.Attributes):
```csharp
[AttributeUsage(AttributeTargets.Assembly, AllowMultiple = false)]
public sealed class KubernetesBundleAttribute : Attribute
{
    public string Groups { get; set; } = "core/v1, apps/v1";
    public string KubernetesVersion { get; set; } = "1.31";
    public string[] Crds { get; set; } = Array.Empty<string>();
    public string SchemaRoot { get; set; } = "schemas";
    public string[] TargetClusterCompatibility { get; set; } = Array.Empty<string>();
}
```

The SG uses `ForAttributeWithMetadataName("Kubernetes.Dsl.Attributes.KubernetesBundleAttribute")` to register, then loads `<AdditionalFiles>` matching `{SchemaRoot}/k8s/{KubernetesVersion}/apis.{group}.json` for each declared group, and `{SchemaRoot}/crds/{name}/**/*.yaml` for each declared CRD bundle.
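The group-to-path mapping can be sketched as two pure functions. This is an illustration of the patterns quoted above, not the SG's actual code; note it glosses over the core group, which upstream names `api.v1.json` rather than `apis.core.v1.json`:

```csharp
using System;

// Sketch of the <AdditionalFiles> path patterns the SG matches, assuming
// $(SolutionDir) has already been resolved by MSBuild before we get here.
public static class SchemaPaths
{
    // One JSON dump per declared group: "apps/v1" -> "apis.apps.v1.json".
    // (Simplification: core/v1 actually maps to "api.v1.json" upstream.)
    public static string ForGroup(string schemaRoot, string k8sVersion, string group) =>
        $"{schemaRoot}/k8s/{k8sVersion}/apis.{group.Replace('/', '.')}.json";

    // One recursive YAML glob per declared CRD bundle.
    public static string ForCrd(string schemaRoot, string crdName) =>
        $"{schemaRoot}/crds/{crdName}/**/*.yaml";
}
```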
The TargetClusterCompatibility array is what powers KUB020 (deprecated apiVersion warnings). If you target ["1.30", "1.31"] and use a property whose [UntilVersion] is "1.25", the analyzer flags it.
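The check itself reduces to a numeric version comparison. A hypothetical helper to show the rule — the real analyzer walks Roslyn symbols and attribute data instead of plain strings:

```csharp
using System;
using System.Linq;

// Sketch of the KUB020-style rule: a property annotated [UntilVersion("1.25")]
// is flagged if any declared target cluster version is newer than that.
public static class CompatCheck
{
    public static bool IsFlagged(string untilVersion, string[] targetVersions) =>
        targetVersions.Any(t => Compare(t, untilVersion) > 0);

    // Compare "major.minor" strings numerically, not lexically ("1.9" < "1.10").
    private static int Compare(string a, string b)
    {
        var pa = a.Split('.').Select(int.Parse).ToArray();
        var pb = b.Split('.').Select(int.Parse).ToArray();
        int major = pa[0].CompareTo(pb[0]);
        return major != 0 ? major : pa[1].CompareTo(pb[1]);
    }
}
```

With targets `["1.30", "1.31"]` and an `[UntilVersion("1.25")]` property, `IsFlagged` returns true and the diagnostic fires.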
<AdditionalFiles> ownership
Schemas live at the solution root (schemas/), not under any single project. Each consuming .csproj opts in with three lines:
```xml
<ItemGroup>
  <AdditionalFiles Include="$(SolutionDir)schemas\**\*.json" />
  <AdditionalFiles Include="$(SolutionDir)schemas\**\*.yaml" />
  <AdditionalFiles Include="$(SolutionDir)schemas\**\*.yml" />
</ItemGroup>
```

This is documented in Part 4. Users with monorepos point at a different SchemaRoot; users with private schemas drop them in `schemas/crds/local/`.
Schemas-on-disk layout
Schemas live in the repo, in their native upstream format:
```
schemas/                                    (committed to git)
├── _sources.json                           (URL, tag, SHA-256, license, fetch date, format)
├── LICENSE-NOTICES                         (Apache-2.0 attributions)
├── k8s/
│   ├── 1.27/
│   │   ├── api.v1.json                     (upstream native: JSON)
│   │   ├── apis.apps.v1.json
│   │   ├── apis.networking.k8s.io.v1.json
│   │   └── ...
│   ├── 1.28/ ...
│   ├── 1.29/ ...
│   ├── 1.30/ ...
│   └── 1.31/ ...
└── crds/
    ├── argo-rollouts/
    │   ├── v1.5.0/rollout-crd.yaml         (upstream native: YAML, multi-tag)
    │   ├── v1.6.0/rollout-crd.yaml
    │   ├── v1.7.0/rollout-crd.yaml
    │   └── v1.7.2/rollout-crd.yaml
    ├── prometheus-operator/
    │   ├── v0.74.0/servicemonitor-crd.yaml
    │   └── v0.75.0/servicemonitor-crd.yaml
    ├── keda/v2.14.0/scaledobject-crd.yaml
    ├── cert-manager/v1.15.0/certificate-crd.yaml
    ├── gatekeeper/v3.16.0/constrainttemplate-crd.yaml
    ├── istio/v1.22.0/peerauthentication-crd.yaml
    ├── litmus/v3.10.0/chaosengine-crd.yaml
    └── local/                              (in-house CRDs, native YAML)
        └── acme-widget-crd.yaml
```

The `.json` extension means the file came from kubernetes/kubernetes' OpenAPI v3 dump. The `.yaml` extension means it came from a CRD bundle's `manifests/crds/`. The SG dispatches on extension. No format conversion happens at fetch time. Part 4 explains the dispatcher.
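That extension dispatch can be pictured as a small switch. A sketch only — the real SchemaInputReader also normalizes both branches into a single JsonNode shape, which is exactly why it needs the ~20 LOC the article cites:

```csharp
using System;
using System.IO;

// Sketch of extension-based dispatch: .json goes to the System.Text.Json
// branch, .yaml/.yml to the YamlDotNet branch. Here we only pick the branch.
public enum SchemaFormat { OpenApiJson, CrdYaml }

public static class SchemaInputReader
{
    public static SchemaFormat Classify(string path) =>
        Path.GetExtension(path).ToLowerInvariant() switch
        {
            ".json" => SchemaFormat.OpenApiJson,
            ".yaml" or ".yml" => SchemaFormat.CrdYaml,
            var ext => throw new NotSupportedException($"Unknown schema format: {ext}")
        };
}
```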
Generated output layout
```
obj/Generated/Kubernetes.Dsl/
├── Models/
│   ├── Core.V1.Pod.g.cs
│   ├── Core.V1.Service.g.cs
│   ├── Apps.V1.Deployment.g.cs
│   └── ...
├── Builders/
│   ├── Core.V1.PodBuilder.g.cs
│   └── ...
├── Crds/
│   ├── ArgoProj.V1Alpha1.Rollout.g.cs
│   └── ...
└── _Manifest.g.cs                      (registry of every generated type)
```

One file per type. Predictable. Diffable. `_Manifest.g.cs` is what the analyzers and the type registry consume.
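An illustrative shape for what the generated registry boils down to — a lookup from (apiVersion, kind) to CLR type that the YAML reader's discriminator dispatch consumes. `Pod` and `Deployment` here are placeholders for the generated POCOs, not the generated file's actual contents:

```csharp
using System;
using System.Collections.Generic;

// Placeholders for generated model types.
public sealed class Pod { }
public sealed class Deployment { }

// Illustrative registry shape: the YAML reader reads apiVersion + kind from
// an incoming document and resolves the CLR type to deserialize into.
public static class KubernetesTypeRegistry
{
    private static readonly Dictionary<(string ApiVersion, string Kind), Type> Map = new()
    {
        [("v1", "Pod")] = typeof(Pod),
        [("apps/v1", "Deployment")] = typeof(Deployment),
    };

    public static Type Resolve(string apiVersion, string kind) =>
        Map.TryGetValue((apiVersion, kind), out var t)
            ? t
            : throw new KeyNotFoundException($"No registered type for {apiVersion}/{kind}");
}
```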
What gets reused, what's new
The honest accounting:
| Building block | Source | Role |
|---|---|---|
| BinaryWrapper.Design.Lib (HelpScraper, CobraHelpParser, container runtime) | BinaryWrapper | Track A: probe `kubectl --help` recursively, build typed client |
| BinaryWrapper.SourceGenerator | BinaryWrapper | Generate the typed KubectlClient from the captured command tree |
| Builder.SourceGenerator.Lib (`BuilderEmitter.Emit(...)`) | Builder.SourceGenerator.Lib | Pure function. Called from OpenApiV3SchemaEmitter to emit fluent builders. |
| Four-project layout pattern | GitLab.Ci.Yaml | Project structure only (Attributes, Design, SourceGenerator, Lib). Contents differ. |
| SchemaVersionMerger pattern | GitLab.Ci.Yaml.SourceGenerator | Merge multi-version schemas into one model with per-property `[SinceVersion]`/`[UntilVersion]`. Adapted to operate on JsonNode. |
| YamlDotNet | NuGet (already centrally pinned at 16.3.0) | YAML parsing in the SG, runtime YAML I/O |
| New component | Why it can't be reused |
|---|---|
| SchemaInputReader | Format dispatcher. ~20 LOC. New because no existing SG in the FrenchExDev ecosystem reads YAML. |
| OpenApiV3SchemaEmitter | OpenAPI v3 has `discriminator`, `nullable: true`, allOf-as-inheritance, and K8s vendor extensions (`x-kubernetes-*`). GitLab.Ci.Yaml's SchemaReader parses GitLab JSON Schema with different `$ref` rules. Different beast. |
| CrdSchemaEmitter | CRDs ship a CustomResourceDefinition envelope with the actual schema buried at `spec.versions[*].schema.openAPIV3Schema`. Walks the envelope, then delegates the schema to the same OpenApiV3SchemaEmitter shape. |
| KubernetesYamlReader/Writer | K8s YAML has multi-doc streams, apiVersion/kind discriminators, status omission, IntOrString, Quantity. Different enough from GitLab's flat root that it needs its own implementation. |
| Kubernetes.Dsl.Analyzers | KUB001–KUB099 are new diagnostics. The analyzer scaffolding pattern is reused from the Contention over Convention series. |
| Type registry generator | KubernetesTypeRegistry.g.cs powers the YAML reader's discriminator dispatch. Built from `x-kubernetes-group-version-kind` extensions. |
| Format dispatcher | SchemaInputReader (above) — ~20 LOC, dispatches `.json` to System.Text.Json and `.yaml` to YamlDotNet. |
The honest summary: the only genuinely new pipeline components are the OpenAPI v3 emitter, the CRD envelope walker, the K8s YAML reader/writer, the analyzer pack, the type registry generator, and the format dispatcher. Everything else is reused or follows an established pattern.
How a build proceeds
Zero network. Zero containers. Zero apiserver. The schemas are local files. The SG is a pure function from JsonNode to string. The whole pipeline runs inside the Roslyn host. Refreshing schemas is a separate, opt-in action driven by the downloader CLI.
What's deferred to later parts
- Part 3 explains the schemas downloader and where each schema comes from upstream.
- Part 4 explains the SchemaInputReader dispatcher and why YAML and JSON converge to one shape.
- Part 5 explains version merging across both core K8s minors and CRD bundle tags.
- Part 6 explains the emitter, the special types, and the discriminated unions.
- Part 7 explains the YAML writer.
- Part 8 explains how the SG survives 600 types × 5 versions.
- Parts 9–12 are the K8s-specific surface (CRDs, kubectl wrapper, analyzers, contributors).
- Parts 13–14 are the Ops.Dsl bridges and the composition walkthrough.
- Part 15 is the comparison and the vision.
Previous: Part 1: The Problem — YAML Manifests, Drift, and the Untyped Cluster

Next: Part 3: Schema Acquisition — Where Schemas Come From