# CRDs as First-Class Citizens
Most of what runs in modern Kubernetes is CRDs. Argo Rollouts. Prometheus Operator. KEDA. cert-manager. Gatekeeper. Istio. Litmus. External Secrets. Crossplane. Knative. Each one ships its own CustomResourceDefinition YAML with an openAPIV3Schema block, its own version cadence, its own deprecation policy. The official KubernetesClient/csharp does not type any of these — users hand-write wrappers per CRD or work with JsonElement blobs.
Kubernetes.Dsl treats CRDs the same way it treats core types. Same emitter shape (CrdSchemaEmitter is a thin wrapper around OpenApiV3SchemaEmitter). Same [SinceVersion]/[UntilVersion] annotations. Same fluent builders. Same Roslyn analyzers. The CRD sprawl becomes 150 more typed classes in the same namespace tree, indistinguishable in feel from core V1Pod.
This chapter walks through the seven supported CRD bundles, the multi-tag versioning story, the in-house CRD path, and the CRD-specific analyzer rules.
## The seven supported CRD bundles
| Bundle | Group | Sample types | Why it matters |
|---|---|---|---|
| Argo Rollouts | argoproj.io/v1alpha1 | Rollout, AnalysisTemplate, Experiment, ClusterAnalysisTemplate | Canary deployments, progressive delivery |
| Prometheus Operator | monitoring.coreos.com/v1 | ServiceMonitor, PodMonitor, PrometheusRule, Alertmanager, Prometheus | Operator-native metrics + alert rules |
| KEDA | keda.sh/v1alpha1 | ScaledObject, ScaledJob, TriggerAuthentication, ClusterTriggerAuthentication | Event-driven autoscaling |
| cert-manager | cert-manager.io/v1 | Certificate, Issuer, ClusterIssuer, CertificateRequest | TLS certificate provisioning |
| Gatekeeper | templates.gatekeeper.sh/v1 + constraints.gatekeeper.sh/v1beta1 | ConstraintTemplate, Constraint (dynamically named) | OPA admission policies |
| Istio (security) | security.istio.io/v1beta1 | PeerAuthentication, AuthorizationPolicy, RequestAuthentication | Service mesh mTLS + authz |
| Litmus Chaos | litmuschaos.io/v1alpha1 | ChaosEngine, ChaosSchedule, ChaosExperiment, ChaosResult | Chaos engineering |
These seven cover the Cloud-tier output targets of every K8s-emitting Ops.Dsl sub-DSL (verified in Part 14). Adding an eighth bundle is one CLI command (fetch --crd), one row in _sources.json, and zero code changes to the SG.
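What that one new row in _sources.json might look like, sketched for a hypothetical Crossplane bundle. The kind and format fields follow the local-CRD entries shown later in this chapter; the url and sha256 fields are assumptions based on the URL and SHA-256 pinning this chapter describes for upstream sources, with placeholder values rather than real URLs or digests:

```json
{
  "schemas/crds/crossplane/v1.16.0/crossplane-crds.yaml": {
    "url": "<upstream release asset URL>",
    "sha256": "<pinned digest of the downloaded YAML>",
    "kind": "crd-yaml",
    "format": "yaml"
  }
}
```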
## How CrdSchemaEmitter differs from OpenApiV3SchemaEmitter
Not by much. The bulk of the work happens in CrdEnvelopeWalker (Part 4), which extracts the spec.versions[*].schema.openAPIV3Schema block from the CRD wrapper and yields one CrdSchemaSlice per served: true version. After that, CrdSchemaEmitter delegates to the same emission pipeline that OpenApiV3SchemaEmitter uses for core types.
```csharp
// Kubernetes.Dsl.SourceGenerator/Emit/CrdSchemaEmitter.cs
public static class CrdSchemaEmitter
{
    public static void Emit(
        SourceProductionContext spc,
        UnifiedSchema crdUnified,
        KubernetesBundleConfig config)
    {
        foreach (var type in crdUnified.Types)
        {
            // Same POCO emission as core types
            var pocoSource = OpenApiV3SchemaEmitter.EmitPoco(type, isCrd: true);
            spc.AddSource($"Crds/{type.FullPath}.g.cs",
                SourceText.From(pocoSource, Encoding.UTF8));

            // Same builder emission via Builder.SourceGenerator.Lib
            var builderModel = BuilderHelper.CreateModel(type);
            var builderSource = BuilderEmitter.Emit(builderModel);
            spc.AddSource($"Crds/{type.FullPath}Builder.g.cs",
                SourceText.From(builderSource, Encoding.UTF8));
        }
    }
}
```

The isCrd: true flag tells EmitPoco to add the [CustomResourceDefinition] attribute and the [StorageVersion] flag (when applicable). Everything else is identical.
## A complete CRD example: V1Alpha1Rollout
Source: schemas/crds/argo-rollouts/v1.7.2/rollout-crd.yaml (the upstream CRD bundle).
```csharp
// <auto-generated/> Source: argoproj.io/v1alpha1 Rollout CRD (argo-rollouts/v1.7.2)
namespace Kubernetes.Dsl.Crds.ArgoProj.V1Alpha1;

[KubernetesResource(ApiVersion = "argoproj.io/v1alpha1", Kind = "Rollout")]
[CustomResourceDefinition("rollouts.argoproj.io")]
[StorageVersion]
[SinceVersion("argo-rollouts/v1.0.0")]
public sealed partial class V1Alpha1Rollout : IKubernetesObject<V1ObjectMeta>
{
    public string ApiVersion { get; set; } = "argoproj.io/v1alpha1";
    public string Kind { get; set; } = "Rollout";
    public V1ObjectMeta Metadata { get; set; } = new();
    public RolloutSpec Spec { get; set; } = new();

    [YamlMember(SerializeAs = SerializeAs.OmitOnWrite)]
    public RolloutStatus? Status { get; set; }
}

public sealed partial class RolloutSpec
{
    [SinceVersion("argo-rollouts/v1.0.0")]
    public int? Replicas { get; set; }

    [SinceVersion("argo-rollouts/v1.0.0")]
    public RolloutStrategy? Strategy { get; set; }

    [SinceVersion("argo-rollouts/v1.0.0")]
    public V1PodTemplateSpec? Template { get; set; }

    [SinceVersion("argo-rollouts/v1.0.0")]
    public V1LabelSelector? Selector { get; set; }

    [SinceVersion("argo-rollouts/v1.7.0")]
    public int? ProgressDeadlineSeconds { get; set; }

    [SinceVersion("argo-rollouts/v1.5.0")]
    [UntilVersion("argo-rollouts/v1.6.999")]
    [Deprecated("Renamed to ProgressDeadlineSeconds in argo-rollouts/v1.7.0")]
    public int? ProgressDeadlineAbortSeconds { get; set; }
}

public sealed partial class RolloutStrategy
{
    // oneOf: canary | blueGreen
    public CanaryStrategy? Canary { get; set; }
    public BlueGreenStrategy? BlueGreen { get; set; }
}
```

The V1PodTemplateSpec reference points to the core namespace (Kubernetes.Dsl.Api.Core.V1.V1PodTemplateSpec). CRDs that embed core types (Rollout embeds a PodTemplateSpec, ScaledObject embeds a ScaleTargetRef, etc.) reference the core types directly. There's only one V1PodTemplateSpec in the whole compilation.
## The CRD builder
```csharp
// <auto-generated/> via Builder.SourceGenerator.Lib
namespace Kubernetes.Dsl.Crds.ArgoProj.V1Alpha1;

public sealed partial class V1Alpha1RolloutBuilder : AbstractBuilder<V1Alpha1Rollout>
{
    public V1Alpha1RolloutBuilder WithMetadata(Action<V1ObjectMetaBuilder> configure) { /* ... */ }
    public V1Alpha1RolloutBuilder WithSpec(Action<RolloutSpecBuilder> configure) { /* ... */ }
    protected override Result<V1Alpha1Rollout> BuildCore() { /* ... */ }
}

public sealed partial class RolloutSpecBuilder : AbstractBuilder<RolloutSpec>
{
    public RolloutSpecBuilder WithReplicas(int replicas) { /* ... */ }
    public RolloutSpecBuilder WithSelector(Action<V1LabelSelectorBuilder> configure) { /* ... */ }
    public RolloutSpecBuilder WithTemplate(Action<V1PodTemplateSpecBuilder> configure) { /* ... */ }
    public RolloutSpecBuilder WithStrategy(Action<RolloutStrategyBuilder> configure) { /* ... */ }
    public RolloutSpecBuilder WithProgressDeadlineSeconds(int seconds) { /* ... */ }
}
```

Same builder shape as core types. Same Result&lt;T&gt; validation. Same WithXxx() fluent API. Indistinguishable from V1DeploymentBuilder at the call site.
## Multi-tag CRD ingestion
CRD bundles version independently of K8s itself. Argo Rollouts releases on its own cadence (~quarterly), and a single repo may need to support multiple Rollout versions across staging/prod. The downloader fetches multiple tags into the same bundle directory:
```bash
# Fetch four tags of argo-rollouts at once
dotnet run --project Kubernetes.Dsl.Design -- fetch --crd argo-rollouts@v1.5.0,v1.6.0,v1.7.0,v1.7.2
```

```
schemas/crds/argo-rollouts/
├── v1.5.0/rollout-crd.yaml
├── v1.6.0/rollout-crd.yaml
├── v1.7.0/rollout-crd.yaml
└── v1.7.2/rollout-crd.yaml
```

SchemaVersionMerger.MergeCrds (Part 5) walks all four tags, groups by (group, kind, version), and emits one merged C# type per (group, kind, version) tuple with per-property [SinceVersion]/[UntilVersion] annotations.
The TargetClusterCompatibility array on [KubernetesBundle] accepts CRD-prefixed identifiers:
```csharp
[assembly: KubernetesBundle(
    Crds = new[] { "argo-rollouts" },
    TargetClusterCompatibility = new[]
    {
        "1.30", "1.31",
        "argo-rollouts/v1.6.0", // legacy cluster
        "argo-rollouts/v1.7.2"  // production cluster
    })]
```

Code that uses RolloutSpec.ProgressDeadlineSeconds (since v1.7.0) would trigger KUB020 because the legacy cluster runs v1.6.0. The user either:
- Conditionally avoids the property (with a [KubernetesBundle.IfTargetSupports("argo-rollouts/v1.7.0")] guard, a future feature),
- Uses the deprecated ProgressDeadlineAbortSeconds (which generates a different KUB020 warning for the prod cluster), or
- Fixes the legacy cluster.
Either way the trade-off is visible at compile time. No silent breakage.
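As an illustration, the flagged call site might look like this (the exact diagnostic wording is an assumption; the rule code and versions come from the bundle above):

```csharp
spec.WithProgressDeadlineSeconds(300);
// warning KUB020: 'RolloutSpec.ProgressDeadlineSeconds' requires
// argo-rollouts/v1.7.0, but target 'argo-rollouts/v1.6.0' does not support it.
```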
## In-house CRDs
Real teams have private CRDs. The path is the same as upstream bundles: drop the YAML in schemas/crds/local/ (or anywhere), add a row to _sources.json, the SG ingests it.
```bash
# Copy a private CRD into schemas/crds/local/
dotnet run --project Kubernetes.Dsl.Design -- fetch --crd-file ./acme-widget-crd.yaml
```

```
schemas/crds/local/
└── acme-widget-crd.yaml
```

```json
{
  "schemas/crds/local/acme-widget-crd.yaml": {
    "local": true,
    "kind": "crd-yaml",
    "format": "yaml",
    "served": ["v1"],
    "storage": "v1"
  }
}
```

The local: true flag bypasses URL and SHA-256 enforcement (it's your file, not an upstream pin). The SG ingests it on the next build and emits V1AcmeWidget.g.cs next to the upstream bundles.
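Assuming the generated type mirrors the upstream pattern, the call site for the in-house CRD might look like this (all builder method names here are assumptions, following the V1Alpha1RolloutBuilder shape shown earlier):

```csharp
// Hypothetical: generated from acme-widget-crd.yaml on the next build.
var widget = new V1AcmeWidgetBuilder()
    .WithMetadata(m => m.WithName("widget-a"))
    .WithSpec(s => { /* typed properties from the CRD's openAPIV3Schema */ })
    .Build(); // Result<V1AcmeWidget>, with the same analyzers attached
```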
Multi-version in-house CRDs work the same way:
```
schemas/crds/local/acme-widget/
├── v1alpha1/widget-crd.yaml
├── v1beta1/widget-crd.yaml
└── v1/widget-crd.yaml
```

Same [SinceVersion]/[UntilVersion] annotations as upstream bundles. The merger doesn't care that the bundle is "local" vs upstream — it just walks the directory.
## The seven CRD-specific analyzer rules (KUB080–KUB099)
| Code | Severity | Title | Triggers when |
|---|---|---|---|
| KUB080 | Warning | Argo Rollout has no analysis template | A Rollout with strategy.canary has no analysis block (no automated success criteria) |
| KUB081 | Warning | KEDA ScaledObject targets a Deployment not declared in this assembly | Cross-resource: ScaledObject.spec.scaleTargetRef.name doesn't match any V1Deployment.metadata.name |
| KUB082 | Warning | CRD type is not the storage version | Constructing a served-but-not-storage variant for a write that needs to persist |
| KUB083 | Warning | Prometheus ServiceMonitor selector matches no Service | Cross-resource: ServiceMonitor.spec.selector doesn't match any V1Service.metadata.labels |
| KUB084 | Warning | cert-manager Certificate references missing Issuer | Cross-resource: Certificate.spec.issuerRef.name doesn't resolve in this compilation |
| KUB085 | Info | Istio PeerAuthentication mTLS mode is PERMISSIVE | Best practice: prefer STRICT for service-to-service traffic |
| KUB086 | Warning | Gatekeeper Constraint references undefined ConstraintTemplate | Cross-resource: Constraint.kind doesn't match any ConstraintTemplate.spec.crd.spec.names.kind in the same compilation |
These follow the same KUB060 cross-resource pattern as core analyzers (Part 11). They use RegisterCompilationStartAction to accumulate state across the whole compilation, then RegisterCompilationEndAction to do the cross-check.
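A condensed sketch of that pattern for a KUB084-style check. The registration APIs are standard Roslyn; the Kub084Rule descriptor, the accumulation logic, and the matched operation shapes are assumptions, not the project's actual analyzer:

```csharp
public override void Initialize(AnalysisContext context)
{
    context.EnableConcurrentExecution();
    context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
    context.RegisterCompilationStartAction(start =>
    {
        var issuers = new ConcurrentBag<string>();
        var certRefs = new ConcurrentBag<(string IssuerName, Location Location)>();

        // Accumulate Issuer names and Certificate issuerRef names from
        // object-creation sites across the whole compilation (matching elided).
        start.RegisterOperationAction(op => { /* ... */ }, OperationKind.ObjectCreation);

        // Cross-check once the whole compilation has been seen.
        start.RegisterCompilationEndAction(end =>
        {
            var known = new HashSet<string>(issuers);
            foreach (var (name, location) in certRefs)
                if (!known.Contains(name))
                    end.ReportDiagnostic(Diagnostic.Create(Kub084Rule, location, name));
        });
    });
}
```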
## Why we didn't write a generic "any CRD" wrapper
Two approaches were considered:
1. Generic CustomResource&lt;TSpec, TStatus&gt; — one wrapper class; users provide spec/status types as generics. Easy to ship, but loses the analyzer story (no per-CRD diagnostics) and the version-aware annotations.
2. Per-CRD typed wrappers — one fully-typed class per CRD per served version, with [SinceVersion]/[UntilVersion] and CRD-specific diagnostics.
Option 2 is more code (and more .g.cs files) but it's what users actually need. The point of typing CRDs is to get the same compile-time guarantees as typing core types. A generic wrapper that hides the spec behind JsonElement is no better than the official client's untyped CustomResource class.
The schema-driven SG makes option 2 cheap because the cost is one downloader command per bundle and zero code changes to the emitter. Adding an eighth bundle (fetch --crd crossplane@v1.16.0) costs ~2 minutes of work, mostly waiting for the download. The 150 generated CRD types appear on the next build.
## What CRD support does not do
- Does not implement conversion webhooks. The API server handles version conversion at runtime; Kubernetes.Dsl is dev-side and doesn't see the conversion. Users author against the version that matches their cluster, and the analyzer enforces the choice.
- Does not generate the CRD itself. The apiextensions.k8s.io/v1 CustomResourceDefinition for your in-house CRD is hand-written (or generated by a separate tool like kubebuilder). Kubernetes.Dsl ingests the CRD YAML — it doesn't author it.
- Does not generate operators or controllers. That's KubernetesClient/csharp's job, plus KubeOps or similar libraries. Kubernetes.Dsl writes manifests; controllers reconcile them.
The division of labor is clean: Kubernetes.Dsl handles the author side of every type (core and CRD) with full version awareness. Other libraries handle the runtime side. They share zero code and they should.
Previous: Part 8: Incremental Generator Performance — Surviving 600 Types × 5 Versions Next: Part 10: kubectl as a BinaryWrapper Target