kubectl as a BinaryWrapper Target
Track A is the typed kubectl client. It's completely independent of Track B (the schema-driven POCOs and builders), wraps the kubectl binary at a pinned version, and exposes a fluent C# API for apply, get, diff, explain, patch, delete. There is no apiserver in the loop at design time. There is no Kubernetes Go client. The whole thing is generated from kubectl --help recursively, the same way BinaryWrapper already wraps docker, packer, vagrant, and podman.
This chapter explains the recursive --help scrape, the Cobra parser reuse, the container runtime, the generated client surface, and the ~30-LOC bridge shim that lets you call client.ApplyAsync(typedPod) where the typed pod comes from Track B.
Why a wrapper instead of a Go-client port
Three options:
- Port the Kubernetes Go client. The official client is ~250k lines of Go, with code generation pipelines that depend on the Kubernetes build system. Porting it to .NET is a multi-year project and the result still has to be regenerated for every K8s minor.
- Hand-write HTTP calls against the apiserver. Doable, but this loses the version-aware behavior of kubectl (which auto-negotiates API versions, handles strategic merge patches, knows how to authenticate against every cloud provider's auth shim, supports kubeconfig contexts, etc.).
- Wrap kubectl. This reuses the binary that's already pinned in the user's CI image and on every developer's machine. It inherits authentication, kubeconfig contexts, context switching, exec plugins, every cloud provider's auth shim, all of it. It costs nothing to maintain. The wrapper just turns `kubectl --help` output into typed C# methods.
Track A picks option 3 because the existing BinaryWrapper infrastructure (recursive --help scrape, Cobra parser, container runtime) already shipped for Docker, Packer, and friends. Wrapping kubectl is one config entry — no new code.
What BinaryWrapper already does (verified)
| Concern | Source | Notes |
|---|---|---|
| Recursive `--help` scrape, depth 10 | `BinaryWrapper.Design.Lib/Code.cs` `HelpScraper.ScrapeNodeAsync`, lines 1734–1851 | Walks `kubectl <subcommand> --help` recursively to build the command tree |
| Cobra parser | `Code.cs` `CobraHelpParser`, lines 852–1091 | Parses Cobra's standard `--help` format. Already handles Docker, Podman, Packer, Vagrant. kubectl is Cobra. |
| Multi-level subcommands | Vagrant/Packer use 2–3 levels via the recursive scraper | `kubectl create deployment` and `kubectl config set-context` are both depth 2 |
| Global flags (`--namespace`, `--context`) | Cobra parser handles the `Flags:` and `Global Flags:` sections | Both go onto the generated client constructor |
| Boolean vs value flags | `OptionValueKind` enum: `Flag`, `Single`, `Multiple` | Built into the Cobra parser |
| Repeatable flags (`-l a=b -l c=d`) | `OptionValueKind.Multiple` | `stringArray`/`stringSlice` Cobra types |
| Positional/variadic arguments | `ArgumentDefinition.IsVariadic` | Built into the Cobra parser |
| Container runtime | `ProcessRunnerContainerRuntime`, lines 1863–1938 | Supports docker and podman; runs the CLI inside a container, so no host kubectl is needed |
| Generated client surface | See `Packer/obj/Generated/.../PackerClient.g.cs` for the shape | Fluent builder per command, version guards |
Conclusion: Track A is buildable today with zero changes to BinaryWrapper. The one open gap (dynamic resource plurals — kubectl get pods where pods is not in --help) is solved by Track B emitting KubectlResourceCatalog.g.cs from the x-kubernetes-group-version-kind extensions in the OpenAPI dump. The two tracks reinforce each other.
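The resource-catalog idea can be sketched briefly. The following Python fragment is purely illustrative (the real emitter is a C# source generator, and `build_resource_catalog` is a made-up name); it shows how plural resource names fall out of pairing each operation's `x-kubernetes-group-version-kind` extension with the URL path it lives on:

```python
def build_resource_catalog(openapi_paths: dict) -> dict:
    """Map Kind -> plural resource name by pairing each operation's
    x-kubernetes-group-version-kind extension with its URL path."""
    catalog = {}
    for path, operations in openapi_paths.items():
        # Drop templated segments like {namespace}; the last literal
        # segment of a collection path is the plural resource name.
        segments = [s for s in path.strip("/").split("/") if not s.startswith("{")]
        if not segments:
            continue
        plural = segments[-1]  # /api/v1/namespaces/{namespace}/pods -> "pods"
        for op in operations.values():
            gvk = op.get("x-kubernetes-group-version-kind")
            if gvk:
                catalog.setdefault(gvk["kind"], plural)
    return catalog

# Tiny illustrative slice of an OpenAPI dump
paths = {
    "/api/v1/namespaces/{namespace}/pods": {
        "get": {"x-kubernetes-group-version-kind": {"group": "", "version": "v1", "kind": "Pod"}},
    },
    "/apis/apps/v1/namespaces/{namespace}/deployments": {
        "get": {"x-kubernetes-group-version-kind": {"group": "apps", "version": "v1", "kind": "Deployment"}},
    },
}
print(build_resource_catalog(paths))  # {'Pod': 'pods', 'Deployment': 'deployments'}
```

The emitted `KubectlResourceCatalog.g.cs` would carry the same mapping as C# constants, so `kubectl get <plural>` calls can be typed without the plural ever appearing in `--help`.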
The introspection sequence
The introspection runs once when the K8s minor changes. The captured commandtree.json is checked into the repo (alongside schemas/), and the SG regenerates the typed client from it on every build. Zero network at build time. Zero apiserver. Zero kubectl running.
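The scrape itself is easiest to picture as a depth-limited recursion over help output. Below is a minimal Python sketch of the idea, with a stubbed help lookup standing in for shelling out to the binary; `scrape_node`, `run_help`, and the fake help text are all illustrative, not the BinaryWrapper API:

```python
import re

def scrape_node(run_help, command, depth=0, max_depth=10):
    """Build a command tree by recursively reading `<command> --help`.
    `run_help` maps a command tuple to its help text (the real tool
    shells out to the pinned binary inside a container)."""
    help_text = run_help(command)
    node = {"command": list(command), "subcommands": []}
    in_section = False
    for line in help_text.splitlines():
        # Cobra lists subcommands as indented "name  description" lines
        # under an "Available Commands:" header.
        if line.strip() == "Available Commands:":
            in_section = True
            continue
        if in_section:
            m = re.match(r"^\s{2}(\S+)\s+\S", line)
            if not m:
                in_section = False
                continue
            if depth < max_depth:
                node["subcommands"].append(
                    scrape_node(run_help, command + (m.group(1),), depth + 1, max_depth))
    return node

# Stubbed help output for a two-level tree
fake_help = {
    ("kubectl",): "Usage:\n  kubectl [command]\n\nAvailable Commands:\n  apply    Apply a configuration\n  create   Create a resource\n",
    ("kubectl", "apply"): "Usage:\n  kubectl apply -f FILENAME\n",
    ("kubectl", "create"): "Usage:\n  kubectl create [command]\n\nAvailable Commands:\n  deployment  Create a deployment\n",
    ("kubectl", "create", "deployment"): "Usage:\n  kubectl create deployment NAME\n",
}
tree = scrape_node(fake_help.get, ("kubectl",))
print([sub["command"] for sub in tree["subcommands"]])
# [['kubectl', 'apply'], ['kubectl', 'create']]
```

Serializing `tree` is, in spirit, the `commandtree.json` that gets checked in; the real scraper additionally captures flags, arguments, and descriptions per node.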
The generated KubectlClient
```csharp
// <auto-generated/> via BinaryWrapper.SourceGenerator from kubectl 1.31 introspection
namespace Kubernetes.Dsl.Cli;

public sealed partial class KubectlClient : IKubernetesClient
{
    private readonly IProcessRunner _runner;
    private readonly KubectlClientOptions _options;

    public KubectlClient(IProcessRunner runner, KubectlClientOptions options)
    {
        _runner = runner;
        _options = options;
    }

    public Task<Result<ApplyOutput>> Apply(Action<KubectlApplyCommandBuilder> configure)
    {
        var builder = new KubectlApplyCommandBuilder(_options);
        configure(builder);
        return builder.RunAsync(_runner);
    }

    public Task<Result<DiffOutput>> Diff(Action<KubectlDiffCommandBuilder> configure)
    {
        var builder = new KubectlDiffCommandBuilder(_options);
        configure(builder);
        return builder.RunAsync(_runner);
    }

    public Task<Result<GetOutput>> Get(Action<KubectlGetCommandBuilder> configure) { /* ... */ }
    public Task<Result<ExplainOutput>> Explain(Action<KubectlExplainCommandBuilder> configure) { /* ... */ }
    public Task<Result<PatchOutput>> Patch(Action<KubectlPatchCommandBuilder> configure) { /* ... */ }
    public Task<Result<DeleteOutput>> Delete(Action<KubectlDeleteCommandBuilder> configure) { /* ... */ }

    // ~40 more commands captured from kubectl --help
}
```

Each command builder mirrors the flags and arguments from `kubectl <command> --help`:
```csharp
public sealed partial class KubectlApplyCommandBuilder
{
    public KubectlApplyCommandBuilder WithFilename(string path) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithKustomize(string dir) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithRecursive(bool recursive = true) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithServerSide(bool ssa = true) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithFieldManager(string manager) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithForce(bool force = true) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithDryRun(string mode) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithNamespace(string ns) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithContext(string context) { /* ... */ return this; }
    public KubectlApplyCommandBuilder WithKubeconfig(string path) { /* ... */ return this; }

    public Task<Result<ApplyOutput>> RunAsync(IProcessRunner runner) { /* ... */ }
}
```

Every flag is captured from `kubectl apply --help`. WithFilename, WithRecursive, WithServerSide, etc. come from the `Flags:` section; WithNamespace, WithContext, and WithKubeconfig come from the `Global Flags:` section. The Cobra parser doesn't care which command it is looking at — it just walks the help text.
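For orientation, the help layout the parser consumes looks roughly like this. This is an abbreviated, illustrative fragment in Cobra's default format, not verbatim kubectl output:

```text
Usage:
  kubectl apply (-f FILENAME | -k DIRECTORY) [options]

Flags:
  -f, --filename strings       The files that contain the configurations to apply
  -R, --recursive              Process the directory used in -f, --filename recursively
      --server-side            If true, apply runs in the server instead of the client
      --field-manager string   Name of the manager used to track field ownership

Global Flags:
  -n, --namespace string       If present, the namespace scope for this CLI request
      --context string         The name of the kubeconfig context to use
```

Type tokens like `string` after a flag name are how the parser distinguishes value flags from boolean flags, and `stringArray`/`stringSlice` tokens mark a flag as repeatable (`OptionValueKind.Multiple`).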
Server-side apply by default
Modern Kubernetes manages object ownership through server-side apply with a field manager. This is what lets multiple controllers (yours, Helm's, Flux's, etc.) coexist on the same object without stomping each other's fields. Track A defaults to SSA:
```csharp
public sealed record KubectlClientOptions(
    string FieldManager = "kubernetes-dsl",
    bool ServerSideApply = true,
    bool Force = false,
    string? DryRun = null);
```

The generated KubectlApplyCommandBuilder uses these defaults unless the caller overrides them.
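For intuition about why the field manager matters: server-side apply records each manager's ownership in the object's `metadata.managedFields`. A trimmed, illustrative fragment of what the apiserver stores after an apply with the defaults above (shape only, not verbatim output):

```yaml
metadata:
  managedFields:
    - manager: kubernetes-dsl        # from KubectlClientOptions.FieldManager
      operation: Apply
      apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:containers:
            k:{"name":"api"}:
              f:image: {}            # this manager owns the image of container "api"
```

When another manager (Helm, Flux, a controller) applies the same object, the apiserver merges by field ownership instead of overwriting whole sections.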
```csharp
var client = new KubectlClient(new ProcessRunner(), new KubectlClientOptions());
var result = await client.Apply(b => b
    .WithFilename("manifests/order-api.yaml")
    .WithRecursive(true));
// Equivalent to: kubectl apply -f manifests/order-api.yaml --recursive --server-side --field-manager=kubernetes-dsl
```

Strategic-merge vs JSON-merge vs SSA distinctions are handled by kubectl itself; the wrapper doesn't reimplement them. Users who want JSON merge call `client.Patch(b => b.WithType(PatchType.JsonMerge))`.
The bridge shim — where Track A meets Track B
Track A produces a typed kubectl client. Track B produces typed IKubernetesObject POCOs. The handshake between them is one hand-written ~30-LOC shim:
```csharp
// Hand-written. Lives in Kubernetes.Dsl.Cli/KubectlClientExtensions.cs.
// This is the ONLY place the two tracks meet at runtime.
namespace Kubernetes.Dsl.Cli;

public static class KubectlClientExtensions
{
    public static async Task<Result<ApplyOutput>> ApplyAsync(
        this KubectlClient client,       // Track A: BinaryWrapper-generated
        IKubernetesObject obj,           // Track B: OpenApiV3SchemaEmitter-generated
        ApplyOptions? opts = null,
        CancellationToken ct = default)
    {
        var yaml = KubernetesYamlWriter.Write(obj);  // shared serializer (Part 7)
        using var temp = TempFile.WriteAll(yaml);
        return await client.Apply(b => b             // Track A's generated builder
            .WithFilename(temp.Path)
            .WithServerSide(opts?.ServerSideApply ?? true)
            .WithFieldManager(opts?.FieldManager ?? "kubernetes-dsl"));
    }

    public static async Task<Result<T>> GetAsync<T>(
        this KubectlClient client,
        string @namespace,
        string name,
        CancellationToken ct = default)
        where T : IKubernetesObject
    {
        var meta = typeof(T).GetCustomAttribute<KubernetesResourceAttribute>()
            ?? throw new InvalidOperationException($"{typeof(T).Name} has no KubernetesResource attribute");
        var result = await client.Get(b => b
            .WithResourceType(meta.Kind.ToLowerInvariant())
            .WithName(name)
            .WithNamespace(@namespace)
            .WithOutput("yaml"));
        if (result.IsFailure) return Result.Fail<T>(result.Error);
        return Result.Ok(KubernetesYamlReader.Read<T>(result.Value.Stdout));
    }

    public static async Task<Result<DiffOutput>> DiffAsync(
        this KubectlClient client,
        IKubernetesObject obj,
        CancellationToken ct = default)
    {
        var yaml = KubernetesYamlWriter.Write(obj);
        using var temp = TempFile.WriteAll(yaml);
        return await client.Diff(b => b.WithFilename(temp.Path));
    }

    public static async Task<Result<DeleteOutput>> DeleteAsync(
        this KubectlClient client,
        IKubernetesObject obj,
        CancellationToken ct = default)
    {
        var yaml = KubernetesYamlWriter.Write(obj);
        using var temp = TempFile.WriteAll(yaml);
        return await client.Delete(b => b.WithFilename(temp.Path));
    }
}
```

That's the entire shim. ~30 LOC of logic. No source generation. No Roslyn. No magic. It calls Track A's generated builders and Track B's shared serializer.
Usage:
```csharp
var pod = new V1PodBuilder()                    // Track B
    .WithMetadata(m => m.WithName("order-api").WithNamespace("orders"))
    .WithSpec(s => s.WithContainer(c => c.WithName("api").WithImage("ghcr.io/acme/order-api:1.4.2")))
    .Build().Value;

var client = new KubectlClient(new ProcessRunner(), new KubectlClientOptions());

var apply = await client.ApplyAsync(pod);       // Bridge shim
if (apply.IsFailure) Console.WriteLine($"Apply failed: {apply.Error}");

var fetched = await client.GetAsync<V1Pod>("orders", "order-api");
Console.WriteLine($"Phase: {fetched.Value.Status?.Phase}");
```

kubectl plugin support
kubectl-argo-rollouts, kubectl-istioctl, kubectl-cert-manager, kubectl-neat — all are separate binaries with their own --help trees. Track A wraps each one with the same BinaryWrapper recipe. No code changes.
```shell
# Introspect kubectl-argo-rollouts the same way
dotnet run --project Kubernetes.Dsl.Design -- introspect --binary kubectl-argo-rollouts --container kubectl-argo-rollouts:1.7.2
```

The SG generates a KubectlArgoRolloutsClient.g.cs next to KubectlClient.g.cs. Same fluent builder shape. Same flags-to-methods mapping. Same Result<T> error handling.
```csharp
var rolloutsClient = new KubectlArgoRolloutsClient(new ProcessRunner());
await rolloutsClient.Promote(b => b.WithName("order-api").WithNamespace("orders"));
await rolloutsClient.Status(b => b.WithName("order-api").WithWatch(true));
```

This is one of the strongest arguments for the wrapper approach: every kubectl plugin gets typed for free, with no per-plugin code in Kubernetes.Dsl. Adding a new plugin is one introspection command.
What Track A does not do
- Does not parse kubectl output formats. `kubectl get -o yaml` returns YAML; the bridge shim hands it to `KubernetesYamlReader`. Other output formats (`-o json`, `-o jsonpath`, `-o custom-columns`) are returned as raw strings.
- Does not implement port-forward, exec, or attach. These are interactive streams; wrapping them via `--help` introspection is the wrong tool. Users who need them call into KubernetesClient/csharp for those specific operations.
- Does not implement the watch protocol. Same reason. `kubectl get --watch` works (the wrapper streams stdout), but the long-running connection management belongs in the runtime client library, not in Kubernetes.Dsl.
Why this is a wrapper, not a port
The honest framing: Track A is kubectl with a typed face. Every operation that Track A exposes is something kubectl already does. The wrapper adds type safety, flag autocomplete, and the ability to construct manifests in C# instead of YAML strings. It does not add new capability.
This is exactly the right scope for the dev-side framing. A Kubernetes.Dsl user is authoring manifests, checking them into git, and applying them with kubectl apply. Track A is the typed shortcut for the apply step. It doesn't need to be more than that.
For the runtime side — operators, controllers, reconcilers — KubernetesClient/csharp exists and is excellent. Track A and KubernetesClient/csharp solve different problems and should be used in parallel by different parts of a codebase.
Previous: Part 9: CRDs as First-Class Citizens Next: Part 11: Roslyn Analyzers — KUB001 through KUB099