
Contributors, Bundles, and Helm/Kustomize Interop

Real Kubernetes deployments are not single objects — they're bundles. A typical microservice ships with a Deployment, a Service, a HorizontalPodAutoscaler, an Ingress, a NetworkPolicy, a ConfigMap, an ExternalSecret, a ServiceAccount, a Role, a RoleBinding, and maybe a ServiceMonitor. Eleven resources, all related, all needing to be authored together.

Kubernetes.Dsl uses the contributor pattern (borrowed from IGitLabCiContributor in GitLab.Ci.Yaml) to compose bundles from independent contributor classes. Each contributor adds its slice of the bundle. The runtime collects them, runs the analyzer pack across the merged bundle, and emits a multi-doc YAML stream — or, if the team uses Helm or Kustomize, a directory of separate YAML files in the right shape.

The contributor interface

// Kubernetes.Dsl.Lib/Runtime/IKubernetesContributor.cs (hand-written)
namespace Kubernetes.Dsl.Runtime;

public interface IKubernetesContributor
{
    void Contribute(KubernetesBundleBuilder bundle);
}

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public sealed class KubernetesContributorAttribute : Attribute
{
    public string Name { get; set; } = string.Empty;
    public int Order { get; set; } = 0;
}

A contributor is just a class with a Contribute(builder) method. The optional attribute is for ordering and metadata. Implementations are pure data assembly — no side effects, no I/O, no cluster calls.
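The attribute's metadata can be read back with plain reflection at composition time. A minimal self-contained sketch — the interface and attribute are reproduced from above (with Contribute simplified to take object, since the builder hasn't been introduced yet), and EchoContributor is a hypothetical stand-in:

```csharp
using System;
using System.Reflection;

// Reproduced from the library definitions above (Contribute simplified).
public interface IKubernetesContributor { void Contribute(object bundle); }

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public sealed class KubernetesContributorAttribute : Attribute
{
    public string Name { get; set; } = string.Empty;
    public int Order { get; set; } = 0;
}

// Hypothetical contributor used only to demonstrate the attribute.
[KubernetesContributor(Name = "echo", Order = 42)]
public sealed class EchoContributor : IKubernetesContributor
{
    public void Contribute(object bundle) { /* pure data assembly */ }
}

public static class Program
{
    public static void Main()
    {
        var attr = typeof(EchoContributor)
            .GetCustomAttribute<KubernetesContributorAttribute>();
        Console.WriteLine($"{attr!.Name} {attr.Order}"); // echo 42
    }
}
```

The runtime uses nothing more exotic than this to order and label contributors.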

The bundle builder

// Kubernetes.Dsl.Lib/Runtime/KubernetesBundleBuilder.cs (hand-written)
public sealed class KubernetesBundleBuilder
{
    private readonly List<IKubernetesObject> _objects = new();
    public string DefaultNamespace { get; set; } = "default";

    public KubernetesBundleBuilder Add(IKubernetesObject obj)
    {
        _objects.Add(obj);
        return this;
    }

    public KubernetesBundleBuilder AddRange(IEnumerable<IKubernetesObject> objects)
    {
        _objects.AddRange(objects);
        return this;
    }

    public KubernetesBundle Build() => new(_objects.AsReadOnly());
}

public sealed record KubernetesBundle(IReadOnlyList<IKubernetesObject> Objects)
{
    public IEnumerable<T> Of<T>() where T : IKubernetesObject => Objects.OfType<T>();
}

Three operations: Add, AddRange, Build. The output is a KubernetesBundle — a flat list of typed objects that the writer can serialize as multi-doc YAML, the Helm emitter can write into a templates/ directory, or the Kustomize emitter can write into a base/ directory.
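The Of&lt;T&gt;() filter makes post-assembly queries over the flat list cheap. A self-contained sketch, using stand-in marker types (DummyDeployment/DummyService are placeholders, not the real generated builders):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IKubernetesObject { }

// Stand-ins for generated resource types.
public sealed record DummyDeployment(string Name) : IKubernetesObject;
public sealed record DummyService(string Name) : IKubernetesObject;

// Mirrors the hand-written bundle record above.
public sealed record KubernetesBundle(IReadOnlyList<IKubernetesObject> Objects)
{
    public IEnumerable<T> Of<T>() where T : IKubernetesObject => Objects.OfType<T>();
}

public static class Program
{
    public static void Main()
    {
        var bundle = new KubernetesBundle(new IKubernetesObject[]
        {
            new DummyDeployment("order-api"),
            new DummyService("order-api"),
        });

        // Typed query over the flat object list.
        Console.WriteLine(bundle.Of<DummyDeployment>().Single().Name); // order-api
    }
}
```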

A complete contributor

// User code: OrderApiContributor.cs
using Kubernetes.Dsl.Api.Apps.V1;
using Kubernetes.Dsl.Api.Core.V1;
using Kubernetes.Dsl.Api.Networking.V1;
using Kubernetes.Dsl.Runtime;

[KubernetesContributor(Name = "order-api", Order = 100)]
public sealed class OrderApiContributor : IKubernetesContributor
{
    public void Contribute(KubernetesBundleBuilder bundle)
    {
        bundle.Add(new V1DeploymentBuilder()
            .WithMetadata(m => m
                .WithName("order-api")
                .WithNamespace("orders")
                .WithLabel("app.kubernetes.io/name", "order-api")
                .WithLabel("app.kubernetes.io/version", "1.4.2"))
            .WithSpec(s => s
                .WithReplicas(3)
                .WithSelector(sel => sel.MatchLabels(("app", "order-api")))
                .WithTemplate(t => t
                    .WithMetadata(m => m.WithLabel("app", "order-api"))
                    .WithSpec(ps => ps.WithContainer(c => c
                        .WithName("api")
                        .WithImage("ghcr.io/acme/order-api:1.4.2")
                        .WithPort(8080)
                        .WithResources(r => r
                            .WithRequests(Quantity.Cpu("100m"), Quantity.Memory("128Mi"))
                            .WithLimits  (Quantity.Cpu("500m"), Quantity.Memory("512Mi")))
                        .WithLivenessProbe(lp => lp
                            .WithHttpGet(h => h.WithPath("/healthz").WithPort(8080)))))))
            .Build().Value);

        bundle.Add(new V1ServiceBuilder()
            .WithMetadata(m => m
                .WithName("order-api")
                .WithNamespace("orders"))
            .WithSpec(s => s
                .WithSelector(("app", "order-api"))
                .WithPort(80, IntOrString.From(8080))
                .WithType(ServiceType.ClusterIP))
            .Build().Value);

        bundle.Add(new V1IngressBuilder()
            .WithMetadata(m => m
                .WithName("order-api")
                .WithNamespace("orders"))
            .WithSpec(s => s.WithRule(r => r
                .WithHost("orders.acme.com")
                .WithHttpPath("/api/v1/orders", "order-api", IntOrString.From(80))))
            .Build().Value);
    }
}

Three resources, ~40 lines, all type-checked, all version-pinned, all analyzer-validated. KUB040 (no resource limits), KUB041 (no liveness probe), KUB060 (selector mismatch) all run against this code at compile time. None fire because the code is correct.

Composition: many contributors → one bundle

// Program.cs
var contributors = new IKubernetesContributor[]
{
    new OrderApiContributor(),
    new OrderApiAutoscalingContributor(),   // adds the V2 HPA
    new OrderApiObservabilityContributor(), // adds the ServiceMonitor
    new OrderApiNetworkPolicyContributor(), // adds the NetworkPolicy
    new OrderApiCertificateContributor(),   // adds the cert-manager Certificate
};

var builder = new KubernetesBundleBuilder { DefaultNamespace = "orders" };
// requires: using System.Linq; using System.Reflection;
foreach (var c in contributors.OrderBy(GetOrder))
    c.Contribute(builder);

var bundle = builder.Build();

// Reads Order from the optional [KubernetesContributor] attribute (0 if absent).
static int GetOrder(IKubernetesContributor c) =>
    c.GetType().GetCustomAttribute<KubernetesContributorAttribute>()?.Order ?? 0;

Each contributor adds its own slice. The order is configurable (via the Order property on the [KubernetesContributor] attribute, or a user-provided sort), which matters for Kustomize emission, where resource order is preserved in kustomization.yaml.

The bundle is now a flat list of seven typed objects. What you do with it depends on your team's deployment tooling.

Output 1: Multi-doc YAML

var yaml = KubernetesYamlWriter.WriteAll(bundle.Objects);
File.WriteAllText("manifests/orders.yaml", yaml);

Produces a single manifests/orders.yaml with --- separators between resources. Apply with kubectl apply -f manifests/orders.yaml. This is the simplest workflow and what most small teams will use.
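The multi-doc format itself is just serialized documents joined by --- separators. An illustrative sketch of that joining step (JoinDocs is hypothetical, not the real KubernetesYamlWriter; serialization of each object is assumed done):

```csharp
using System;

public static class Program
{
    // Joins pre-serialized YAML documents into one multi-doc stream.
    public static string JoinDocs(params string[] docs) =>
        string.Join("\n---\n", docs) + "\n";

    public static void Main()
    {
        var stream = JoinDocs(
            "kind: Deployment\nmetadata:\n  name: order-api",
            "kind: Service\nmetadata:\n  name: order-api");
        Console.Write(stream);
    }
}
```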

Output 2: Helm chart templates/

KubernetesYamlWriter.WriteHelmChart(bundle, "charts/order-api/templates");

Produces:

charts/order-api/
└── templates/
    ├── deployment-order-api.yaml
    ├── service-order-api.yaml
    ├── ingress-order-api.yaml
    ├── horizontalpodautoscaler-order-api.yaml
    ├── servicemonitor-order-api.yaml
    ├── networkpolicy-order-api.yaml
    └── certificate-order-api.yaml

One YAML file per resource, named by kind-name. The user is expected to write the surrounding Chart.yaml and values.yaml themselves — or use a contributor that generates them as a side artifact.
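The kind-name filename convention can be derived mechanically from each object's kind and metadata.name. A hypothetical sketch of that derivation (FileNameFor is illustrative, not the library's API):

```csharp
using System;

public static class Program
{
    // Lowercased kind plus metadata.name, e.g. "deployment-order-api.yaml".
    public static string FileNameFor(string kind, string name) =>
        $"{kind.ToLowerInvariant()}-{name}.yaml";

    public static void Main()
    {
        Console.WriteLine(FileNameFor("Deployment", "order-api"));
        Console.WriteLine(FileNameFor("HorizontalPodAutoscaler", "order-api"));
    }
}
```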

Templating placeholders are not generated. If you want image: {{ .Values.image }} instead of a pinned image reference, write the placeholder as a literal string in the contributor; Helm substitutes it at deploy time:

.WithImage("{{ .Values.image }}")

The C# compiler doesn't know it's a Helm placeholder, but the YAML writer emits the string verbatim and helm install substitutes it. This is the only place Kubernetes.Dsl knowingly produces "wrong" YAML — it's an explicit interop seam.

Output 3: Kustomize base/

KubernetesYamlWriter.WriteKustomizeBase(bundle, "k8s/orders/base");

Produces:

k8s/orders/base/
├── deployment-order-api.yaml
├── service-order-api.yaml
├── ingress-order-api.yaml
├── ...
└── kustomization.yaml

# k8s/orders/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment-order-api.yaml
  - service-order-api.yaml
  - ingress-order-api.yaml
  - horizontalpodautoscaler-order-api.yaml
  - servicemonitor-order-api.yaml
  - networkpolicy-order-api.yaml
  - certificate-order-api.yaml

Drop this into your Kustomize tree and overlay it with environment-specific patches in k8s/orders/overlays/staging/, k8s/orders/overlays/production/. Kubernetes.Dsl owns the base; Kustomize owns the overlays.
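Generating the kustomization.yaml body is a straightforward fold over the emitted filenames, preserving order. An illustrative sketch (Kustomization is a hypothetical helper; filenames are assumed already computed):

```csharp
using System;
using System.Text;

public static class Program
{
    // Builds a kustomization.yaml body, preserving resource order.
    public static string Kustomization(params string[] files)
    {
        var sb = new StringBuilder();
        sb.Append("apiVersion: kustomize.config.k8s.io/v1beta1\n");
        sb.Append("kind: Kustomization\n");
        sb.Append("resources:\n");
        foreach (var f in files)
            sb.Append($"  - {f}\n");
        return sb.ToString();
    }

    public static void Main()
    {
        Console.Write(Kustomization(
            "deployment-order-api.yaml",
            "service-order-api.yaml"));
    }
}
```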

Why Kubernetes.Dsl coexists with Helm and Kustomize

Direction                           | Supported? | How
Kubernetes.Dsl → Helm chart         | Yes        | KubernetesYamlWriter.WriteHelmChart(bundle, "templates/")
Kubernetes.Dsl → Kustomize base     | Yes        | KubernetesYamlWriter.WriteKustomizeBase(bundle, "base/")
Helm chart → Kubernetes.Dsl         | No         | Helm templates aren't valid YAML until rendered; run helm template first, then ingest with KubernetesYamlReader.
Kustomize overlay → Kubernetes.Dsl  | No         | Same reasoning; run kustomize build first.
Raw YAML → Kubernetes.Dsl           | Yes        | KubernetesYamlReader.ReadAll("bundle.yaml") returns IReadOnlyList<IKubernetesObject>

The thesis is that Kubernetes.Dsl is the typed authoring layer, and Helm/Kustomize are the deployment-time transformation layers. They're not competing for the same job.

Diagram

Kubernetes.Dsl owns everything to the left of the YAML files; Helm and Kustomize own everything to the right. Typed authoring stops where deployment-time templating begins.

DI registration

// Program.cs
var services = new ServiceCollection();

services.AddKubernetesDsl(opts =>
{
    opts.DefaultNamespace = "orders";
    opts.ContributorAssemblies = new[] { typeof(OrderApiContributor).Assembly };
});

var sp = services.BuildServiceProvider();
var bundle = sp.GetRequiredService<KubernetesBundleBuilder>();

foreach (var c in sp.GetServices<IKubernetesContributor>())
    c.Contribute(bundle);

var built = bundle.Build();

AddKubernetesDsl registers all IKubernetesContributor implementations from the configured assemblies, plus the bundle builder, plus the YAML writer. Standard ASP.NET Core / generic host patterns. No surprises.
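Under the hood, contributor discovery is plain assembly scanning. A minimal self-contained sketch of that scan — no DI container, Contribute simplified to take object, and AContributor/BContributor are hypothetical stand-ins:

```csharp
using System;
using System.Linq;
using System.Reflection;

public interface IKubernetesContributor { void Contribute(object bundle); }

// Hypothetical contributors that the scan should find.
public sealed class AContributor : IKubernetesContributor
{ public void Contribute(object bundle) { } }

public sealed class BContributor : IKubernetesContributor
{ public void Contribute(object bundle) { } }

public static class Program
{
    // Finds and instantiates every concrete IKubernetesContributor in an assembly.
    public static IKubernetesContributor[] Discover(Assembly asm) =>
        asm.GetTypes()
           .Where(t => !t.IsAbstract && typeof(IKubernetesContributor).IsAssignableFrom(t))
           .Select(t => (IKubernetesContributor)Activator.CreateInstance(t)!)
           .ToArray();

    public static void Main()
    {
        var found = Discover(typeof(Program).Assembly);
        Console.WriteLine(found.Length); // 2
    }
}
```

In the real registration path the discovered types would be added to the ServiceCollection rather than instantiated directly, but the scan is the same shape.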

Why a contributor pattern instead of a god-class

A single class that constructs the entire bundle works for small services but breaks down at ~10 resources. Contributors decompose the bundle by concern, not by resource:

  • One contributor for the application's runtime (Deployment, Service, ConfigMap)
  • One for observability (ServiceMonitor, PrometheusRule)
  • One for ingress (Ingress, Certificate)
  • One for security (NetworkPolicy, ServiceAccount, Role, RoleBinding)
  • One for autoscaling (HPA, VPA, KEDA ScaledObject)

Each contributor is independently testable, independently versioned (in separate files), and independently composable. A team can disable observability for a dev environment by skipping the observability contributor; they can swap a NetworkPolicy contributor for a different security model without touching the runtime contributor.
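Independent testability falls out directly: instantiate one contributor, hand it a fresh builder, assert on the result. A self-contained sketch with stand-in types (StubNetworkPolicy and NetworkPolicyContributor are illustrative; the real generated builders are elided, and the builder is reduced to its Add/Build core):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IKubernetesObject { }
public sealed record StubNetworkPolicy(string Name) : IKubernetesObject;

// Reduced to the Add/Build core of the hand-written builder.
public sealed class KubernetesBundleBuilder
{
    private readonly List<IKubernetesObject> _objects = new();
    public KubernetesBundleBuilder Add(IKubernetesObject o) { _objects.Add(o); return this; }
    public IReadOnlyList<IKubernetesObject> Build() => _objects.AsReadOnly();
}

public sealed class NetworkPolicyContributor
{
    public void Contribute(KubernetesBundleBuilder bundle) =>
        bundle.Add(new StubNetworkPolicy("order-api-deny-all"));
}

public static class Program
{
    public static void Main()
    {
        // Arrange, act, assert — no cluster, no I/O, no other contributors.
        var builder = new KubernetesBundleBuilder();
        new NetworkPolicyContributor().Contribute(builder);
        Console.WriteLine(builder.Build().OfType<StubNetworkPolicy>().Single().Name);
    }
}
```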

This is the same pattern that GitLab.Ci.Yaml's IGitLabCiContributor uses for pipeline composition. The pattern works at any scale.

What contributors do not do

  • Do not run at runtime in the cluster. They run at build time on a developer's machine or CI runner. The cluster only sees the resulting YAML.
  • Do not have side effects. No file I/O, no network calls, no logger calls. Pure data assembly.
  • Do not depend on each other. A contributor that needs another contributor's output should declare its own slice and let the bundle builder merge them. The bundle is the integration point.
  • Do not call analyzers. Analyzers run as Roslyn diagnostics on the C# source code that constructs the typed objects. The bundle is downstream of analyzer time.

The pattern is assemble, then validate, then emit. Contributors assemble. Analyzers validate (at compile time, not at runtime — by the time the contributor runs, the analyzer has already passed). The writer emits.


Previous: Part 11: Roslyn Analyzers — KUB001 through KUB099 Next: Part 13: Ops.Deployment Bridge — From Attributes to Typed Manifests
