
Part 15: Talking to the Docker CLI

"Every Docker SDK eventually drifts from the Docker CLI. The CLI is the source of truth. Wrap it."


Why

HomeLab needs to talk to Docker. Specifically, it needs to:

  1. Run Docker Compose stacks against a remote Docker daemon (the one inside a Vagrant VM)
  2. Inspect container state (running, healthy, exited)
  3. Pull images, push images, list images
  4. Create networks, volumes, secrets
  5. Execute one-off commands inside containers

The .NET ecosystem has a library for this: Docker.DotNet. It is a wrapper around the Docker REST API. It is well-maintained. It is also wrong for HomeLab, for one specific reason: the Docker CLI is the source of truth, not the Docker REST API. Every Docker release ships CLI changes; the API catches up later, sometimes quarters later, and sometimes inconsistently. Docker Compose v2 is a CLI plugin, not an API surface. docker buildx is a CLI plugin, not an API surface. docker context (which we use for routing to remote VMs) is a CLI concept, not an API concept. If you wrap the API, you eventually have to drop down to Process.Start("docker", ...) for half of what you actually want to do, and the abstraction leaks.

The thesis of this part is: wrap the CLI directly, with a typed source-generated wrapper, and never look back. The CLI surface changes slowly, the help text is parseable, exit codes are well-defined, and stdout is documented. Wrapping the CLI gives HomeLab a contract that exactly matches what docker does, including future flags we have not seen yet.

The library that does this is FrenchExDev.Net.Docker, built on the [BinaryWrapper] source generator from FrenchExDev.Net.BinaryWrapper. We saw the wrapper pattern in Part 11; this part is the deep dive on the Docker-specific wrapping.


The shape

[BinaryWrapper("docker", HelpCommand = "--help", VersionCommand = "version --format json")]
public partial class DockerClient : IDockerClient
{
    [Command("ps")]
    public partial Task<Result<DockerPsOutput>> PsAsync(
        [Flag("--all", Aliases = "-a")] bool all = false,
        [Flag("--filter")] IReadOnlyList<string>? filter = null,
        [Flag("--format")] string? format = null,
        CancellationToken ct = default);

    [Command("pull")]
    public partial Task<Result<DockerPullOutput>> PullAsync(
        [PositionalArgument] string image,
        [Flag("--platform")] string? platform = null,
        CancellationToken ct = default);

    [Command("run")]
    public partial Task<Result<DockerRunOutput>> RunAsync(
        [PositionalArgument(Position = 0)] string image,
        [PositionalArgument(Position = 1, IsList = true)] IReadOnlyList<string>? command = null,
        [Flag("--name")] string? name = null,
        [Flag("-d", IsBoolean = true)] bool detach = false,
        [Flag("--rm")] bool remove = false,
        [Flag("-e", IsKeyValue = true)] IReadOnlyDictionary<string, string>? env = null,
        [Flag("-v", IsList = true)] IReadOnlyList<string>? volume = null,
        [Flag("-p", IsList = true)] IReadOnlyList<string>? port = null,
        [Flag("--network")] string? network = null,
        [Flag("--restart")] string? restart = null,
        CancellationToken ct = default);

    [Command("exec")]
    public partial Task<Result<DockerExecOutput>> ExecAsync(
        [PositionalArgument(Position = 0)] string container,
        [PositionalArgument(Position = 1, IsList = true)] IReadOnlyList<string> command,
        [Flag("-i", IsBoolean = true)] bool interactive = false,
        [Flag("-t", IsBoolean = true)] bool tty = false,
        CancellationToken ct = default);

    [Command("inspect")]
    public partial Task<Result<DockerInspectOutput>> InspectAsync(
        [PositionalArgument] string nameOrId,
        [Flag("--format")] string? format = null,
        CancellationToken ct = default);

    [Command("network", SubCommand = "create")]
    public partial Task<Result<DockerNetworkCreateOutput>> NetworkCreateAsync(
        [PositionalArgument] string name,
        [Flag("--driver")] string? driver = null,
        [Flag("--subnet")] string? subnet = null,
        CancellationToken ct = default);

    [Command("volume", SubCommand = "ls")]
    public partial Task<Result<DockerVolumeListOutput>> VolumeListAsync(
        [Flag("--format")] string? format = null,
        CancellationToken ct = default);

    [Command("version")]
    public partial Task<Result<DockerVersionOutput>> VersionAsync(
        [Flag("--format")] string format = "json",
        CancellationToken ct = default);
}

That's a fragment. The full DockerClient covers about 80 commands and sub-commands. The user does not write any of the implementations — the [BinaryWrapper] source generator emits them at compile time.
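Call sites then read like ordinary async C#. A minimal sketch of the consumption pattern, using a stand-in Result<T> and a stubbed VersionAsync (the real Result<T> from Part 11 and the generated client have more to them; everything here is illustrative):

```csharp
using System;
using System.Threading.Tasks;

var result = await VersionAsync();
Console.WriteLine(result.IsSuccess
    ? $"docker {result.Value!.Version} (API {result.Value.ApiVersion})"
    : $"docker unavailable: {result.Error}");
// docker 26.1.4 (API 1.45)

// Stand-in for DockerClient.VersionAsync(); the real call shells out to docker.
static Task<Result<DockerVersionInfo>> VersionAsync() =>
    Task.FromResult(Result<DockerVersionInfo>.Success(new("26.1.4", "1.45")));

public sealed record DockerVersionInfo(string Version, string ApiVersion);

// Minimal stand-in for the Result<T> pattern used throughout HomeLab.
public sealed record Result<T>(bool IsSuccess, T? Value, string? Error)
{
    public static Result<T> Success(T value) => new(true, value, null);
    public static Result<T> Failure(string error) => new(false, default, error);
}
```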

The generator does five things for each [Command]-decorated method:

  1. Validates the help text. At source-generation time, the generator runs docker <command> --help and checks that every [Flag] declared in the partial method actually exists in the help output. If a flag is missing or renamed, the build fails with a diagnostic. This is the first defence against drift: a Docker upgrade that removes a flag breaks compilation, not runtime.
  2. Generates the argument-list assembly. Each [Flag] becomes a piece of code that conditionally appends to a List<string> of arguments. IsBoolean flags emit only when true. IsKeyValue flags emit KEY=VALUE pairs. IsList flags emit one occurrence per element.
  3. Generates the process invocation. The generator emits a call to a shared BinaryRunner helper that handles ProcessStartInfo, stdout/stderr capture, exit-code handling, cancellation, and timeouts.
  4. Generates the result parser. Each command has a corresponding *Output record (e.g. DockerPsOutput). The generator parses the command's stdout based on the requested --format. For JSON outputs, it deserialises into the typed record. For text outputs, it uses a small parser that the wrapper author provides as a partial method.
  5. Wraps the call in Result<T>. Exit code 0 → Result.Success(parsed). Non-zero → Result.Failure<T>(exitCode, stderr).
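Step 2 is plain C# once generated. The following is an illustrative reconstruction of the argument-assembly logic the generator might emit for PsAsync, not the actual generated code:

```csharp
using System;
using System.Collections.Generic;

var args = PsArgs.Build(all: true, filter: new[] { "status=running", "name=gitlab" });
Console.WriteLine(string.Join(" ", args));
// ps --all --filter status=running --filter name=gitlab

// Hypothetical shape of the generated argument assembly for PsAsync.
static class PsArgs
{
    public static List<string> Build(
        bool all = false,
        IReadOnlyList<string>? filter = null,
        string? format = null)
    {
        var args = new List<string> { "ps" };
        if (all) args.Add("--all");                 // boolean flag: emit only when true
        if (filter is not null)
            foreach (var f in filter)               // list flag: one occurrence per element
            {
                args.Add("--filter");
                args.Add(f);
            }
        if (format is not null)
        {
            args.Add("--format");
            args.Add(format);
        }
        return args;
    }
}
```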

The wiring

DockerClient is [Injectable(ServiceLifetime.Singleton)]. It is registered in the composition root automatically. It is consumed by:

  • IDockerComposeClient (the next part) — for compose operations
  • The Apply stage of the pipeline — for one-off docker run and docker exec
  • The Verify stage of the pipeline — for docker inspect-based health checks
  • The IComposeFileContributor plugins (some of them) — for inspecting an already-running container before deciding what to add

Routing to a remote Docker daemon

The Vagrant VM that hosts a HomeLab service exposes Docker on tcp://VM_IP:2375. The HomeLab CLI is on the host. To run docker compose up against the VM's Docker, we set DOCKER_HOST for the duration of the call:

[Injectable(ServiceLifetime.Scoped)]   // ← scoped: one per pipeline run, with the right env
public sealed class RemoteDockerClient : IDockerClient
{
    private readonly DockerClient _inner;
    private readonly Uri _dockerHost;

    public RemoteDockerClient(DockerClient inner, IRemoteDockerEndpointResolver resolver, HomeLabContext ctx)
    {
        _inner = inner;
        _dockerHost = resolver.Resolve(ctx);  // tcp://192.168.56.10:2375
    }

    public Task<Result<DockerPsOutput>> PsAsync(...) => _inner.PsAsync(...).WithEnv("DOCKER_HOST", _dockerHost.ToString());
    // ... etc
}

The WithEnv extension is part of the BinaryRunner: it sets an environment variable for the next invocation only. The host's Docker is not affected. Tests use a LocalDockerClient that does not set DOCKER_HOST and instead talks to a Docker-in-Docker test fixture.
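Under the hood, a per-invocation override like WithEnv comes down to setting the variable on the child's ProcessStartInfo rather than on the parent process. A sketch of the mechanism (the WithEnv name is the library's; this code is an assumption about its shape, using the standard .NET ProcessStartInfo.Environment dictionary):

```csharp
using System;
using System.Diagnostics;

// The override lives on the child's start info only; the parent process
// environment (and therefore the host's own Docker) is untouched.
var psi = new ProcessStartInfo("docker") { RedirectStandardOutput = true };
psi.Environment["DOCKER_HOST"] = "tcp://192.168.56.10:2375";

Console.WriteLine(psi.Environment["DOCKER_HOST"]);  // tcp://192.168.56.10:2375
Console.WriteLine(Environment.GetEnvironmentVariable("DOCKER_HOST") ?? "(unset in parent)");
```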

For an HA topology with multiple Docker hosts across VMs, the resolver returns a list of endpoints and the Apply stage iterates over the stacks: each compose stack has a target VM, and the resolver supplies the endpoint for that target.

TLS for the remote Docker socket

tcp://VM:2375 is plain text. For production-like setups, HomeLab supports tcp://VM:2376 with mTLS, using certificates generated by the same Tls library that issues the wildcard cert. The wiring is two extra calls: WithEnv("DOCKER_TLS_VERIFY", "1") and WithEnv("DOCKER_CERT_PATH", "/path/to/certs"). The certs live in the IFileSystem, sandboxed to the lab's working directory. We return to this in Part 28.


The test

public sealed class DockerClientTests
{
    [Fact]
    public async Task ps_with_all_flag_emits_correct_arguments()
    {
        var runner = new RecordingBinaryRunner();
        var client = new DockerClient(runner);

        await client.PsAsync(all: true);

        runner.LastCommand.Should().Be("docker");
        runner.LastArgs.Should().Equal("ps", "--all");
    }

    [Fact]
    public async Task ps_with_filter_emits_one_filter_per_value()
    {
        var runner = new RecordingBinaryRunner();
        var client = new DockerClient(runner);

        await client.PsAsync(filter: new[] { "status=running", "name=gitlab" });

        runner.LastArgs.Should().Equal("ps", "--filter", "status=running", "--filter", "name=gitlab");
    }

    [Fact]
    public async Task pull_returns_failure_on_nonzero_exit()
    {
        var runner = new ScriptedBinaryRunner();
        runner.Script(exitCode: 1, stderr: "Error response from daemon: pull access denied");
        var client = new DockerClient(runner);

        var result = await client.PullAsync("private/image");

        result.IsFailure.Should().BeTrue();
        result.Errors.Should().Contain(e => e.Contains("pull access denied"));
    }

    [Fact]
    public async Task version_parses_json_output_into_typed_record()
    {
        var runner = new ScriptedBinaryRunner();
        runner.Script(exitCode: 0, stdout: """
            {"Client":{"Version":"26.1.4","ApiVersion":"1.45"},"Server":{"Version":"26.1.4","ApiVersion":"1.45"}}
            """);
        var client = new DockerClient(runner);

        var result = await client.VersionAsync();

        result.IsSuccess.Should().BeTrue();
        result.Value.Client.Version.Should().Be("26.1.4");
        result.Value.Server.ApiVersion.Should().Be("1.45");
    }
}

Four tests, all millisecond-scale, all using a fake IBinaryRunner. No real docker invocation. The architecture tests separately ensure that no class outside the wrapper directly spawns a docker process.
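The RecordingBinaryRunner itself is a few lines of fake. A sketch, assuming an IBinaryRunner shape inferred from the tests above (the real interface in FrenchExDev.Net.BinaryWrapper may differ):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

var runner = new RecordingBinaryRunner();
await runner.RunAsync("docker", new[] { "ps", "--all" });
Console.WriteLine($"{runner.LastCommand} {string.Join(" ", runner.LastArgs!)}");
// docker ps --all

// Assumed shape of the runner abstraction.
public interface IBinaryRunner
{
    Task<(int ExitCode, string StdOut, string StdErr)> RunAsync(
        string command, IReadOnlyList<string> args, CancellationToken ct = default);
}

// Records the last invocation instead of spawning a process.
public sealed class RecordingBinaryRunner : IBinaryRunner
{
    public string? LastCommand { get; private set; }
    public IReadOnlyList<string>? LastArgs { get; private set; }

    public Task<(int ExitCode, string StdOut, string StdErr)> RunAsync(
        string command, IReadOnlyList<string> args, CancellationToken ct = default)
    {
        LastCommand = command;
        LastArgs = args;
        return Task.FromResult((0, "", ""));  // pretend success, empty output
    }
}
```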


What this gives you that bash doesn't

A bash script calling Docker is docker run with strings interpolated into it. Every flag is a string. Every typo is a runtime error. Every exit code is $?. Every stdout parse is awk or jq. Every retry is a hand-rolled until loop. Every test is "I ran it and it worked".

A typed source-generated Docker wrapper gives you, for the same surface area:

  • Compile-time validation that every flag exists in the installed docker --help
  • Typed parameters with bool for boolean flags, IReadOnlyDictionary<string,string> for -e KEY=VALUE, IReadOnlyList<string> for repeatable flags
  • Typed return values parsed from JSON output into records
  • Result<T> for exit codes instead of if [[ $? -ne 0 ]]
  • Cancellation tokens that actually kill the child process
  • Per-invocation environment overrides for DOCKER_HOST and DOCKER_TLS_VERIFY
  • Test fakes via IBinaryRunner so unit tests run in milliseconds without spawning processes

The bargain pays for itself the first time Docker ships a breaking CLI change: the build fails at compile time on the affected wrapper method, the upgrade is one diff in one wrapper file, and the rest of HomeLab does not change.

