Roslyn Analyzers

The CMF philosophy is Attribute + Source Generator + Analyzer + NuGet. Per the Contention over Convention series, every typed system in this ecosystem ships analyzers, not just generators. Kubernetes.Dsl's analyzer pack — KUB001 through KUB099 — operates on user code that constructs Kubernetes.Dsl types and on generated bridge code from Ops.Dsl.

This chapter walks through the diagnostic catalog, the cross-resource validation pattern (KUB060-KUB069), and how the KUB* codes layer with the Ops.Dsl OPS* codes.

The diagnostic ID range

Reserved up front so chapters can cite real codes:

Range Category
KUB001-KUB019 Required-field / shape errors
KUB020-KUB039 Deprecation warnings (deprecated apiVersions, removed-in-version)
KUB040-KUB059 Best-practice warnings (no resource limits, no liveness probe, latest image tag)
KUB060-KUB079 Cross-resource validation (Service selector matches no Pod template, etc.)
KUB080-KUB099 CRD-specific (Argo, Istio, Prometheus Operator, KEDA, ...)

KUB001-KUB019: Required-field / shape errors

Code Severity Title Triggers when
KUB001 Error oneOf violation: multiple variants set More than one of EmptyDir/ConfigMap/Secret/... is set on the same V1Volume
KUB002 Error Required field missing V1Pod.Metadata.Name is null when Build() is called or POCO is initialized inline
KUB003 Error Required field missing on collection element A required field is null inside an item of a collection (e.g., V1Container.Name inside Pod.Spec.Containers)
KUB004 Error Empty required collection A required collection is empty (e.g., V1PodSpec.Containers with zero items)
KUB005 Error Invalid label value A label value violates the K8s label-value rules (63 characters max; empty, or alphanumeric at both ends with alphanumerics, -, _, and . allowed in between: (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?)
KUB006 Error Invalid name (DNS-1123 subdomain) A metadata.name violates the DNS-1123 subdomain rules
KUB007 Error Quantity has invalid format Quantity.Parse("not-a-quantity") throws at runtime — the analyzer flags it at compile time
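To make the first band concrete, here is a sketch of user code that would trip two of these at once. The POCO property names are assumed from earlier chapters, not quoted from the generated API:

```csharp
// Hypothetical Kubernetes.Dsl usage; property names assumed, not verbatim API.
var pod = new V1Pod
{
    // KUB002: Metadata.Name is required but never set.
    Metadata = new V1ObjectMeta { Labels = new() { ["app"] = "web" } },
    Spec = new V1PodSpec
    {
        Containers = { new V1Container { Name = "web", Image = "web:1.0" } },
        Volumes =
        {
            new V1Volume
            {
                Name = "scratch",
                // KUB001: two oneOf variants set on the same V1Volume.
                EmptyDir  = new V1EmptyDirVolumeSource(),
                ConfigMap = new V1ConfigMapVolumeSource { Name = "cfg" },
            },
        },
    },
};
```

Both diagnostics surface in the IDE at the object initializer, before any YAML exists.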

KUB020-KUB039: Deprecation

Code Severity Title Triggers when
KUB020 Warning Deprecated apiVersion A property's [UntilVersion] is older than at least one target in TargetClusterCompatibility
KUB021 Warning Removed apiVersion A property's [UntilVersion] is older than ALL targets in TargetClusterCompatibility
KUB022 Warning Deprecated CRD bundle property Same as KUB020 but for CRD-prefixed versions (argo-rollouts/v1.7.0)
KUB023 Info Property added in newer version A property's [SinceVersion] is newer than the OLDEST target in TargetClusterCompatibility
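The version attributes this band reads live on the generated POCO properties. A hedged sketch of what a generated property might look like; the type and property names here are invented for illustration, only the attribute shapes follow the table above:

```csharp
// Hypothetical generated POCO; names invented, attribute usage as in the table.
public sealed partial class V1ExampleSpec
{
    [SinceVersion("1.29")]                   // KUB023 (Info) if the oldest target predates 1.29
    public string? NewField { get; set; }

    [UntilVersion("argo-rollouts/v1.6.999")] // KUB022 (Warning) once a target reaches argo-rollouts/v1.7.x
    public string? LegacyStrategy { get; set; }
}
```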

KUB040-KUB059: Best practices

Code Severity Title Triggers when
KUB040 Warning Container has no resource limits A V1Container has null Resources.Limits
KUB041 Warning Container has no liveness probe A V1Container has null LivenessProbe
KUB042 Warning Image tag is latest or absent V1Container.Image ends with :latest or has no tag
KUB043 Info Missing recommended labels No app.kubernetes.io/name, app.kubernetes.io/version, app.kubernetes.io/managed-by
KUB044 Warning runAsRoot or no securityContext V1Container.SecurityContext.RunAsNonRoot is null or false
KUB045 Warning privileged container V1Container.SecurityContext.Privileged is true
KUB046 Warning hostNetwork enabled V1PodSpec.HostNetwork is true
KUB047 Warning hostPID enabled V1PodSpec.HostPID is true
KUB048 Warning Service has no selector V1Service.Spec.Selector is null or empty (the service selects no pods)
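A single container can be written to satisfy most of the band at once. A builder sketch: WithName and WithImage appear later in this chapter, but WithResourceLimits and WithSecurityContext are assumed analogues, not confirmed API:

```csharp
// Hedged sketch; method names beyond WithName/WithImage are assumptions.
var app = new V1ContainerBuilder()
    .WithName("web")
    .WithImage("registry.example.com/web:1.4.2")       // pinned tag, avoids KUB042
    .WithResourceLimits(cpu: "500m", memory: "256Mi")  // avoids KUB040
    .WithSecurityContext(runAsNonRoot: true)           // avoids KUB044
    .Build().Value;
```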

KUB060-KUB079: Cross-resource validation

Code Severity Title Triggers when
KUB060 Warning Service selector matches no Pod template Cross-resource: V1Service.Spec.Selector doesn't match any V1PodTemplateSpec.Metadata.Labels in this compilation
KUB061 Warning ConfigMap reference not satisfied V1Container.EnvFrom.ConfigMapRef names a ConfigMap not declared in this assembly
KUB062 Warning Secret reference not satisfied V1Container.EnvFrom.SecretRef names a Secret not declared in this assembly
KUB063 Warning PVC reference not satisfied V1PersistentVolumeClaimVolumeSource.ClaimName doesn't match any V1PersistentVolumeClaim in this assembly
KUB064 Warning NetworkPolicy podSelector matches no Pod template Cross-resource: V1NetworkPolicy.Spec.PodSelector matches no V1PodTemplateSpec
KUB065 Warning Ingress backend references missing Service V1Ingress.Spec.Rules[].Http.Paths[].Backend.Service.Name doesn't match any V1Service in this assembly
KUB066 Warning RoleBinding subject references missing ServiceAccount V1RoleBinding.Subjects[].Name doesn't match any V1ServiceAccount in this assembly

KUB080-KUB099: CRD-specific

(See Part 9 for the catalog. KUB080-KUB086 cover Argo Rollouts, KEDA, CRD storage version, ServiceMonitor, cert-manager Issuer, Istio mTLS, and Gatekeeper.)

Cross-resource analyzer mechanics — how KUB060 works

KUB060 ("Service selector matches no Pod template in this assembly") needs whole-compilation visibility. The standard Roslyn pattern is RegisterCompilationStartAction + a thread-safe accumulator + RegisterCompilationEndAction:

using System.Collections.Concurrent;
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class KUB060ServiceSelectorMismatch : DiagnosticAnalyzer
{
    public static readonly DiagnosticDescriptor Rule = new(
        id: "KUB060",
        title: "Service selector matches no Pod template",
        messageFormat: "V1Service '{0}' has selector {{{1}}} but no V1PodTemplateSpec in this compilation matches it",
        category: "Kubernetes.Dsl.CrossResource",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.Analyze);
        context.EnableConcurrentExecution();

        context.RegisterCompilationStartAction(compStart =>
        {
            // Collected across the whole compilation:
            var podTemplates = new ConcurrentBag<PodTemplateInfo>();
            var services     = new ConcurrentBag<ServiceInfo>();

            compStart.RegisterSyntaxNodeAction(ctx =>
            {
                if (TryExtractPodTemplate(ctx, out var pt))    podTemplates.Add(pt);
                if (TryExtractServiceSelector(ctx, out var s)) services.Add(s);
            }, SyntaxKind.ObjectCreationExpression, SyntaxKind.InvocationExpression);

            compStart.RegisterCompilationEndAction(end =>
            {
                foreach (var svc in services)
                {
                    if (!podTemplates.Any(pt => pt.Labels.IsSupersetOf(svc.Selector)))
                    {
                        end.ReportDiagnostic(Diagnostic.Create(
                            Rule,
                            svc.Location,
                            svc.Name,
                            FormatSelector(svc.Selector)));
                    }
                }
            });
        });
    }

    private static bool TryExtractServiceSelector(SyntaxNodeAnalysisContext ctx, out ServiceInfo info)
    {
        // Walks ObjectCreationExpressionSyntax for `new V1Service { Spec = ... }` constructions,
        // and InvocationExpressionSyntax for `.WithSelector(...)` builder chains,
        // extracts the (apiVersion, kind, name, selector) tuple, returns true if the node
        // is a V1Service construction.
        // ~80 lines of syntax walking; reusable helper for KUB060-KUB066.
        info = default!;
        return false; // implementation elided
    }

    private static bool TryExtractPodTemplate(SyntaxNodeAnalysisContext ctx, out PodTemplateInfo info)
    {
        // Same shape, walks for V1PodTemplateSpec constructions inside Deployment/StatefulSet/DaemonSet/etc.
        info = default!;
        return false; // implementation elided
    }
}

Three things to notice:

  1. RegisterCompilationStartAction opens a per-compilation scope. The podTemplates and services bags live inside the scope and are GC'd after the compilation finishes.
  2. RegisterSyntaxNodeAction runs in parallel across the syntax tree (EnableConcurrentExecution). The ConcurrentBag is the safe accumulator.
  3. RegisterCompilationEndAction does the cross-check. This runs once after every node has been visited. The cross-check is O(services × podTemplates), which is fine because both collections are small (~dozens, not thousands, in any single project).

The same pattern is used by Microsoft.CodeAnalysis.NetAnalyzers for cross-symbol rules. It's not exotic — but it's worth one paragraph here so readers don't think the cross-resource claims are vapor.

KUB061-KUB066 use the same pattern with different accumulators (ConfigMaps, Secrets, PVCs, Ingress backends, RoleBinding subjects).
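The IsSupersetOf call in the compilation-end pass encodes the actual Kubernetes matching rule: a service selects a pod exactly when every selector pair appears, with the same value, in the pod template's labels. A minimal self-contained sketch of that semantics:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SelectorMatch
{
    // A selector matches a label set iff every (key, value) pair in the
    // selector is present, with an equal value, in the labels map.
    public static bool Matches(IReadOnlyDictionary<string, string> selector,
                               IReadOnlyDictionary<string, string> labels)
        => selector.All(kv => labels.TryGetValue(kv.Key, out var v) && v == kv.Value);

    static void Main()
    {
        var labels = new Dictionary<string, string> { ["app"] = "web", ["tier"] = "frontend" };

        Console.WriteLine(Matches(new Dictionary<string, string> { ["app"] = "web" }, labels)); // True
        Console.WriteLine(Matches(new Dictionary<string, string> { ["app"] = "api" }, labels)); // False
    }
}
```

Note the direction: the pod's labels must be a superset of the selector, never the reverse; extra labels on the pod are irrelevant.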

A simpler analyzer: KUB040

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class KUB040ContainerHasNoResourceLimits : DiagnosticAnalyzer
{
    public static readonly DiagnosticDescriptor Rule = new(
        id: "KUB040",
        title: "Container has no resource limits",
        messageFormat: "V1Container '{0}' has no Resources.Limits — add CPU/memory limits or accept namespace defaults",
        category: "Kubernetes.Dsl.BestPractices",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.Analyze);
        context.EnableConcurrentExecution();

        // No CompilationStartAction needed — this is local to a single V1Container construction.
        context.RegisterSyntaxNodeAction(AnalyzeContainer, SyntaxKind.ObjectCreationExpression);
    }

    private void AnalyzeContainer(SyntaxNodeAnalysisContext ctx)
    {
        var creation = (ObjectCreationExpressionSyntax)ctx.Node;
        if (!IsV1Container(creation, ctx.SemanticModel)) return;

        var initializer = creation.Initializer;
        if (initializer is null) return;

        var name = ExtractName(initializer);
        var limits = ExtractResourcesLimits(initializer);

        if (limits is null)
        {
            ctx.ReportDiagnostic(Diagnostic.Create(
                Rule, creation.GetLocation(), name ?? "<unnamed>"));
        }
    }
}

Single-node analyzers like this one are the bread and butter of the pack. KUB040-KUB048 all follow this shape: walk an object creation, check a property, report.
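The helpers elided in the KUB040 listing do plain syntax walking. One plausible shape for ExtractResourcesLimits follows; the Roslyn types are real, but this body is a sketch of the elided helper, not its actual implementation:

```csharp
// Sketch: find `Resources = new V1ResourceRequirements { Limits = <expr> }`
// inside a V1Container object initializer; null means KUB040 should fire.
private static ExpressionSyntax? ExtractResourcesLimits(InitializerExpressionSyntax initializer)
{
    foreach (var expr in initializer.Expressions)
    {
        // Looking for a `Resources = ...` assignment in the initializer.
        if (expr is not AssignmentExpressionSyntax
            { Left: IdentifierNameSyntax { Identifier.Text: "Resources" } } resources)
            continue;

        // Then a `Limits = ...` assignment inside the nested object initializer.
        if (resources.Right is ObjectCreationExpressionSyntax { Initializer: { } inner })
            foreach (var innerExpr in inner.Expressions)
                if (innerExpr is AssignmentExpressionSyntax
                    { Left: IdentifierNameSyntax { Identifier.Text: "Limits" } } limits)
                    return limits.Right;
    }
    return null;
}
```

This version misses Resources assigned from a local or a helper method; a production version would chase those through the semantic model, which is why the real helpers run longer.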

Layering vs Ops.Dsl analyzers

[Diagram: Ops.Dsl analyzers police intent, Kubernetes.Dsl analyzers police manifest shape — two packs, two layers, no overlap.]

Layering rule: Ops.Dsl analyzers operate on intent (does the deployment graph make sense?). Kubernetes.Dsl analyzers operate on manifest shape (does this YAML, once written, conform to K8s?). They never overlap.

A practical example: OPS001 (Deployment ordering cycles) catches a logical error in the user's [DeploymentDependency] declarations. The Ops.Dsl bridge generator (Part 13) then produces a V1Deployment with metadata.annotations reflecting the dependency order. If the bridge produces a malformed V1Deployment (say, missing Spec.Template), KUB002 flags the bridge's generated code, surfacing the bug to the bridge author rather than to the Ops.Dsl user. Two analyzer packs, two different problems, no overlap.

How analyzers consume [KubernetesBundle].TargetClusterCompatibility

KUB020 (deprecated apiVersion) needs to know the user's target clusters. It reads them from the assembly attribute:

private static IReadOnlyList<VersionId> GetTargets(Compilation compilation)
{
    var attr = compilation.Assembly.GetAttributes()
        .FirstOrDefault(a => a.AttributeClass?.ToDisplayString()
            == "Kubernetes.Dsl.Attributes.KubernetesBundleAttribute");
    if (attr is null) return Array.Empty<VersionId>();

    var targetsArg = attr.NamedArguments
        .FirstOrDefault(na => na.Key == "TargetClusterCompatibility");

    // Guard: attribute present but property unset, or value not an array;
    // calling .Values on default(TypedConstant) would throw.
    if (targetsArg.Key is null || targetsArg.Value.Kind != TypedConstantKind.Array)
        return Array.Empty<VersionId>();

    return targetsArg.Value.Values
        .Select(c => VersionId.Parse((string)c.Value!))
        .ToList();
}

The analyzer caches this per-compilation in a CompilationStartAction so it's not re-parsed for every diagnostic.
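The caching is nothing more than a closure over the start-action scope, the same shape KUB060 used above. Hedged sketch; CheckVersionAttributes is a stand-in name for the elided per-node check:

```csharp
// Sketch: parse TargetClusterCompatibility once per compilation, then share it
// with every node action through the closure instead of re-reading the attribute.
context.RegisterCompilationStartAction(compStart =>
{
    IReadOnlyList<VersionId> targets = GetTargets(compStart.Compilation);

    compStart.RegisterSyntaxNodeAction(
        ctx => CheckVersionAttributes(ctx, targets), // CheckVersionAttributes: hypothetical helper
        SyntaxKind.SimpleMemberAccessExpression,
        SyntaxKind.ObjectCreationExpression);
});
```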

When the analyzer encounters a property with [SinceVersion("1.29")] and the targets include "1.27", it reports KUB023. When the property has [UntilVersion("argo-rollouts/v1.6.999")] and the targets include "argo-rollouts/v1.7.2", it reports KUB022. When the bundle prefixes don't match (e.g., the user targets "keda/v2.14.0" but the property is from argo-rollouts), the analyzer skips the comparison entirely.
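That prefix-gating rule is worth pinning down. A self-contained sketch follows; this VersionId is a stand-in for the library's type, reduced to a bundle prefix plus a System.Version:

```csharp
using System;

// Stand-in for the chapter's VersionId: optional CRD bundle prefix + semantic version.
readonly record struct VersionId(string? Prefix, Version Number)
{
    // Parses "1.27" or "argo-rollouts/v1.7.2" into (prefix, version).
    public static VersionId Parse(string s)
    {
        var slash  = s.IndexOf('/');
        var prefix = slash < 0 ? null : s[..slash];
        var number = (slash < 0 ? s : s[(slash + 1)..]).TrimStart('v');
        return new VersionId(prefix, Version.Parse(number));
    }

    // Only versions from the same bundle (same prefix, or both core) compare.
    public bool IsComparableTo(VersionId other) => Prefix == other.Prefix;
}

class Demo
{
    static void Main()
    {
        var until  = VersionId.Parse("argo-rollouts/v1.6.999");
        var target = VersionId.Parse("argo-rollouts/v1.7.2");
        var keda   = VersionId.Parse("keda/v2.14.0");

        Console.WriteLine(until.IsComparableTo(target) && until.Number < target.Number); // True: KUB022 fires
        Console.WriteLine(until.IsComparableTo(keda)); // False: different bundle, comparison skipped
    }
}
```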

Suppressing diagnostics

Standard Roslyn suppression mechanisms work:

#pragma warning disable KUB040 // Container has no resource limits
var sidecar = new V1ContainerBuilder()
    .WithName("init")
    .WithImage("busybox:1.36")
    .Build().Value;
#pragma warning restore KUB040

Or via .editorconfig:

[*.cs]
dotnet_diagnostic.KUB040.severity = none      # disable globally
dotnet_diagnostic.KUB041.severity = error     # promote liveness probe to error
dotnet_diagnostic.KUB042.severity = silent    # silence latest tag warning in this project

CRD-specific rules can be promoted or demoted independently:

dotnet_diagnostic.KUB081.severity = error     # KEDA target mismatch is an error in our infra
dotnet_diagnostic.KUB085.severity = silent    # Istio PERMISSIVE mTLS warning is too noisy

This is critical for adoption. Teams have different tolerances for "best practice" warnings, and the analyzer pack doesn't impose its opinions — it just provides the diagnostics and lets the team configure severity.

What the analyzer pack does not do

  • Does not enforce policies that require runtime cluster state (e.g., "this PodDisruptionBudget would block a node drain"). That needs the live cluster.
  • Does not run OPA Rego or Conftest. Those are admission-controller tools and run server-side.
  • Does not check security CVEs in container images. That's Trivy/Snyk/Grype's job.
  • Does not verify Helm value substitutions. Helm's own --strict mode catches some of those, but Kubernetes.Dsl is the post-substitution stage.

The pack is exactly what its name says: a set of Roslyn analyzers that operate on C# code that constructs Kubernetes.Dsl types. Everything that needs the cluster, the registry, the runtime, or another tool's domain is out of scope.


Previous: Part 10: kubectl as a BinaryWrapper Target Next: Part 12: Contributors, Bundles, and Helm/Kustomize Interop
