
Part XII: From ComposeFile to docker-compose.yml

Typed C# in, valid YAML out -- round-trip tested against 32 schema versions.


The Last Mile

Part X and Part XI built the type system: 32 schemas merged into one set of C# classes. But types alone don't deploy containers. This part shows how ComposeFile objects become real docker-compose.yml files -- and how the developer loop of change-compile-diff makes infrastructure changes feel like code changes.

All generated. All typed. All useless until I could turn a ComposeFile instance into a valid YAML string that docker compose up would actually accept. The serialization problem sounds simple. It is not. Six union type patterns (string shorthand vs. full object), x-* extension fields that must appear at the parent level, snake_case naming against PascalCase properties, and 67 nullable properties per service that must serialize only when set.

This post walks through the serialization pipeline, the type converters, the round-trip tests, and the developer workflow.


The Serialization Pipeline

The pipeline has five stages, each addressing a specific concern. A ComposeFile C# object enters stage one. A valid docker-compose.yml file exits stage five. Nothing in between is manual.

Diagram
The five-stage YamlDotNet pipeline that turns a typed ComposeFile into a docker-compose.yml -- naming convention, union converters, null omission, and extensions merging compose into a single thread-safe ISerializer built once at startup.

Each stage is a YamlDotNet feature -- naming convention, type converter, default value handler, or custom emitter. The composition happens once, at serializer construction, and the resulting ISerializer is thread-safe and reusable.

Here is the full serializer setup:

public static class ComposeFileSerializer
{
    private static readonly ISerializer Serializer = new SerializerBuilder()
        .WithNamingConvention(UnderscoredNamingConvention.Instance) // PascalCase -> snake_case
        .WithTypeConverter(new ComposeServiceBuildConfigConverter())
        .WithTypeConverter(new ComposeServicePortsConfigConverter())
        .WithTypeConverter(new ComposeServiceVolumesConfigConverter())
        .WithTypeConverter(new ComposeServiceDependsOnConverter())
        .WithTypeConverter(new ComposeServiceDeployConfigConverter())
        .WithTypeConverter(new DictionaryExtensionsConverter())
        .ConfigureDefaultValuesHandling(DefaultValuesHandling.OmitNull)
        .Build();

    public static string Serialize(ComposeFile file)
    {
        return Serializer.Serialize(file);
    }

    public static async Task WriteAsync(ComposeFile file, string path)
    {
        var yaml = Serialize(file);
        await File.WriteAllTextAsync(path, yaml);
    }
}

Six type converters, one naming convention, one null-omission flag. That is the entire configuration. The serializer is a static readonly field -- constructed once, used everywhere. Thread-safe because YamlDotNet serializers are immutable after construction.

Let me take each stage in order.


Naming Convention: PascalCase to snake_case

C# conventions dictate PascalCase for public properties. The compose specification dictates snake_case for YAML keys. UnderscoredNamingConvention.Instance handles the mapping automatically:

// C# property name       -> YAML key
// Image                  -> image
// ContainerName          -> container_name
// DockerfileInline       -> dockerfile_inline
// MemReservation         -> mem_reservation
// CpuPercent             -> cpu_percent
// OomScoreAdj            -> oom_score_adj

This is the simplest stage and the one I worried about least. YamlDotNet's underscore convention handles the standard cases correctly. But there are edge cases.

Edge cases and regression tests

The convention splits on uppercase boundaries: OomScoreAdj becomes oom_score_adj, which happens to match the compose spec. If a future field used DNSConfig, the convention would produce d_n_s_config (wrong). I checked all 67 ComposeService properties -- every one maps correctly. But I wrote a regression test:

[Theory]
[InlineData(nameof(ComposeService.Image), "image")]
[InlineData(nameof(ComposeService.ContainerName), "container_name")]
[InlineData(nameof(ComposeService.DockerfileInline), "dockerfile_inline")]
[InlineData(nameof(ComposeService.MemReservation), "mem_reservation")]
[InlineData(nameof(ComposeService.CpuPercent), "cpu_percent")]
[InlineData(nameof(ComposeService.OomScoreAdj), "oom_score_adj")]
[InlineData(nameof(ComposeService.DependsOn), "depends_on")]
[InlineData(nameof(ComposeService.StopGracePeriod), "stop_grace_period")]
public void NamingConvention_MapsCorrectly(string csharpName, string expectedYaml)
{
    var convention = UnderscoredNamingConvention.Instance;
    Assert.Equal(expectedYaml, convention.Apply(csharpName));
}

If a future compose spec version introduces a field with an acronym that breaks the convention, this test will catch it immediately. The fix would be a [YamlMember(Alias = "correct_name")] attribute on the affected property -- generated by the source generator, not hand-written.
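For the curious, the splitting rule is small enough to sketch. This is my approximation of what the underscore convention does, not YamlDotNet's actual code: insert an underscore before every uppercase letter except the first, then lowercase everything.

```csharp
using System.Text;

static class Underscore
{
    // Approximation of UnderscoredNamingConvention's splitting rule:
    // underscore before each uppercase letter (except position 0), then lowercase.
    public static string Apply(string name)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < name.Length; i++)
        {
            if (char.IsUpper(name[i]) && i > 0) sb.Append('_');
            sb.Append(char.ToLowerInvariant(name[i]));
        }
        return sb.ToString();
    }
}
```

Apply("OomScoreAdj") yields oom_score_adj, while Apply("DNSConfig") yields d_n_s_config -- exactly the acronym failure mode the regression test above guards against.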


Null-Property Omission

This is the stage that makes the output usable. A ComposeService has 67 properties. A typical service sets maybe 5 of them. Without null omission, the YAML looks like this:

# Without null omission -- 67 properties, 62 of them null
services:
  web:
    image: nginx:latest
    build: null
    command: null
    entrypoint: null
    environment: null
    ports: null
    volumes: null
    # ... 60 more null properties

# With null omission -- only set properties
services:
  web:
    image: nginx:latest

Two lines instead of sixty-seven. This is the YAML a human would write. This is what docker compose config produces when you feed it a minimal service definition. And this is what makes the diff workflow practical -- when you change one property, the diff shows one changed line, not one changed line buried in sixty-six unchanged null properties.

The configuration is a single line:

.ConfigureDefaultValuesHandling(DefaultValuesHandling.OmitNull)

YamlDotNet checks each property value before serialization. If it is null, it skips the property entirely. No key emitted, no value emitted. For reference types this is straightforward. For value types (bool, int), the property must be declared as bool? / int? in the generated model -- which the source generator already does, because the compose spec marks almost every field as optional.
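The effect is easy to picture with a toy emitter. This is a sketch of the skip rule, not YamlDotNet's implementation, and Svc is a stand-in type, not the generated model:

```csharp
using System.Linq;
using System.Reflection;

class Svc
{
    public string? Image { get; set; }
    public string? Command { get; set; }
    public int? Replicas { get; set; }
}

static class ToyEmitter
{
    // Emit "key: value" only for properties whose value is non-null --
    // the same skip rule OmitNull applies during serialization.
    public static string Emit(object obj) =>
        string.Join("\n", obj.GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(p => (p.Name, Value: p.GetValue(obj)))
            .Where(t => t.Value is not null)
            .Select(t => $"{t.Name.ToLowerInvariant()}: {t.Value}"));
}
```

A Svc with only Image set emits the single line image: nginx:latest; the two null properties produce nothing at all.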


Custom Type Converters for Union Types

This is the hard part. The compose specification defines several fields as union types -- a value that can be either a simple scalar (string, number) or a full object. The YAML shorthand makes compose files readable. The type converter makes it automatic.

Diagram
How the custom IYamlTypeConverters decide, per property, whether to emit the human-friendly YAML shorthand or the full object form -- so build: . and "8080:80" come out of the generator automatically rather than requiring every field to be spelled out.

There are six union type patterns in the compose specification. Each needs a custom IYamlTypeConverter. I will walk through the three most interesting ones in detail.

The Build Converter

The build field can be a string (just the context directory) or an object with context, dockerfile, args, target, and more. The converter decides at serialization time:

public sealed class ComposeServiceBuildConfigConverter : IYamlTypeConverter
{
    public bool Accepts(Type type) => type == typeof(ComposeServiceBuildConfig);

    public object? ReadYaml(IParser parser, Type type, ObjectDeserializer deserializer)
    {
        if (parser.TryConsume<Scalar>(out var scalar))
        {
            // String shorthand: "build: ." means context = "."
            return new ComposeServiceBuildConfig { Context = scalar.Value };
        }
        // Full object: delegate to default deserialization
        return deserializer.Invoke(type);
    }

    public void WriteYaml(IEmitter emitter, object? value, Type type, ObjectSerializer serializer)
    {
        if (value is not ComposeServiceBuildConfig build) return;

        if (IsStringShorthand(build))
        {
            emitter.Emit(new Scalar(build.Context!));
            return;
        }

        // Full mapping: emit each non-null property as a key-value pair
        emitter.Emit(new MappingStart());
        EmitIfNotNull(emitter, "context", build.Context);
        EmitIfNotNull(emitter, "dockerfile", build.Dockerfile);
        EmitIfNotNull(emitter, "dockerfile_inline", build.DockerfileInline);
        EmitIfNotNull(emitter, serializer, "args", build.Args);
        EmitIfNotNull(emitter, "target", build.Target);
        EmitIfNotNull(emitter, serializer, "cache_from", build.CacheFrom);
        EmitIfNotNull(emitter, serializer, "cache_to", build.CacheTo);
        EmitIfNotNull(emitter, serializer, "labels", build.Labels);
        EmitIfNotNull(emitter, "network", build.Network);
        EmitIfNotNull(emitter, "shm_size", build.ShmSize);
        DictionaryExtensionsConverter.EmitExtensions(emitter, serializer, build.Extensions);
        emitter.Emit(new MappingEnd());
    }

    private static bool IsStringShorthand(ComposeServiceBuildConfig build)
    {
        return build.Context is not null &&
               build.Dockerfile is null &&
               build.DockerfileInline is null &&
               build.Args is null &&
               build.Target is null &&
               build.CacheFrom is null &&
               build.CacheTo is null &&
               build.Labels is null &&
               build.Network is null &&
               build.ShmSize is null &&
               build.Extensions is null or { Count: 0 };
    }
}

The IsStringShorthand method checks every property except Context. If any of them is set, the full object form is emitted. If only Context is set, it emits a bare scalar. This matches the compose specification exactly: build: . is shorthand for build: { context: . }.

Why manual property emission instead of delegating to serializer.SerializeValue for the whole object? Because the default serializer would emit all properties including nulls (the OmitNull handler does not apply inside custom converters), and because we need fine-grained control over property ordering and extension field placement. The trade-off is more code in the converter, but predictable output.

The Ports Converter

Ports are the most common union type in compose files. The short form is ubiquitous:

ports:
  - "80:80"
  - "443:443"
  - "8080:80"

The long form is less common but necessary for advanced scenarios:

ports:
  - target: 80
    published: "8080"
    protocol: tcp
    mode: host
  - target: 443
    published: "443"
    protocol: tcp

The converter:

public sealed class ComposeServicePortsConfigConverter : IYamlTypeConverter
{
    public bool Accepts(Type type) => type == typeof(ComposeServicePortsConfig);

    public object? ReadYaml(IParser parser, Type type, ObjectDeserializer deserializer)
    {
        if (parser.TryConsume<Scalar>(out var scalar))
        {
            return ComposeServicePortsConfig.Parse(scalar.Value);
        }
        return deserializer.Invoke(type);
    }

    public void WriteYaml(IEmitter emitter, object? value, Type type, ObjectSerializer serializer)
    {
        if (value is not ComposeServicePortsConfig ports) return;

        if (IsStringShorthand(ports))
        {
            // Emit short form: "published:target" or "published:target/protocol"
            var shorthand = ports.Protocol is null or "tcp"
                ? $"{ports.Published}:{ports.Target}"
                : $"{ports.Published}:{ports.Target}/{ports.Protocol}";
            emitter.Emit(new Scalar(shorthand));
            return;
        }

        // Long form
        emitter.Emit(new MappingStart());

        emitter.Emit(new Scalar("target"));
        emitter.Emit(new Scalar(ports.Target.ToString()!));

        if (ports.Published is not null)
        {
            emitter.Emit(new Scalar("published"));
            emitter.Emit(new Scalar(ports.Published));
        }

        if (ports.Protocol is not null)
        {
            emitter.Emit(new Scalar("protocol"));
            emitter.Emit(new Scalar(ports.Protocol));
        }

        if (ports.HostIp is not null)
        {
            emitter.Emit(new Scalar("host_ip"));
            emitter.Emit(new Scalar(ports.HostIp));
        }

        if (ports.Mode is not null)
        {
            emitter.Emit(new Scalar("mode"));
            emitter.Emit(new Scalar(ports.Mode));
        }

        if (ports.AppProtocol is not null)
        {
            emitter.Emit(new Scalar("app_protocol"));
            emitter.Emit(new Scalar(ports.AppProtocol));
        }

        emitter.Emit(new MappingEnd());
    }

    private static bool IsStringShorthand(ComposeServicePortsConfig ports)
    {
        return ports.Target is not null &&
               ports.Published is not null &&
               ports.HostIp is null &&
               ports.Mode is null &&
               ports.AppProtocol is null;
    }
}

The Parse method handles the string grammar: "8080:80" becomes { Published = "8080", Target = 80 }, "8080:80/udp" adds Protocol = "udp", "127.0.0.1:8080:80" adds HostIp = "127.0.0.1". If only Target and Published are set, use the short form. If HostIp, Mode, or AppProtocol are present, use the long form.
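As a sketch of that grammar -- a hypothetical, simplified stand-in for the real Parse, ignoring IPv6 host addresses and port ranges:

```csharp
using System;

// Hypothetical, simplified model of the port string grammar:
// [host_ip:]published:target[/protocol]
record PortSpec(int Target, string? Published = null,
                string? Protocol = null, string? HostIp = null);

static class PortShorthand
{
    public static PortSpec Parse(string s)
    {
        string? protocol = null;
        var slash = s.IndexOf('/');
        if (slash >= 0) { protocol = s[(slash + 1)..]; s = s[..slash]; }

        var parts = s.Split(':');
        return parts.Length switch
        {
            1 => new PortSpec(int.Parse(parts[0]), Protocol: protocol),
            2 => new PortSpec(int.Parse(parts[1]), parts[0], protocol),
            3 => new PortSpec(int.Parse(parts[2]), parts[1], protocol, parts[0]),
            _ => throw new FormatException($"Unrecognized port spec: '{s}'")
        };
    }
}
```

The switch on the colon count is the whole grammar: one part is a bare container port, two parts add the published port, three parts add the host IP.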

The Volumes Converter

Volumes follow the same pattern as ports and build -- a string shorthand for simple cases, an object for complex ones:

public sealed class ComposeServiceVolumesConfigConverter : IYamlTypeConverter
{
    public bool Accepts(Type type) => type == typeof(ComposeServiceVolumesConfig);

    public object? ReadYaml(IParser parser, Type type, ObjectDeserializer deserializer)
    {
        if (parser.TryConsume<Scalar>(out var scalar))
        {
            // String shorthand ("source:target[:ro]"), parsed by a Parse
            // helper that mirrors the ports converter's (not shown here)
            return ComposeServiceVolumesConfig.Parse(scalar.Value);
        }
        return deserializer.Invoke(type);
    }

    public void WriteYaml(IEmitter emitter, object? value, Type type, ObjectSerializer serializer)
    {
        if (value is not ComposeServiceVolumesConfig vol) return;

        if (IsStringShorthand(vol))
        {
            var shorthand = vol.ReadOnly == true
                ? $"{vol.Source}:{vol.Target}:ro"
                : $"{vol.Source}:{vol.Target}";
            emitter.Emit(new Scalar(shorthand));
            return;
        }

        // Long form: type, source, target, read_only, bind/volume/tmpfs config
        emitter.Emit(new MappingStart());
        EmitIfNotNull(emitter, "type", vol.Type ?? "volume");
        EmitIfNotNull(emitter, "source", vol.Source);
        EmitIfNotNull(emitter, "target", vol.Target);
        if (vol.ReadOnly == true)
        {
            emitter.Emit(new Scalar("read_only"));
            emitter.Emit(new Scalar("true"));
        }
        EmitIfNotNull(emitter, serializer, "bind", vol.Bind);
        EmitIfNotNull(emitter, serializer, "volume", vol.Volume);
        EmitIfNotNull(emitter, serializer, "tmpfs", vol.Tmpfs);
        emitter.Emit(new MappingEnd());
    }

    private static bool IsStringShorthand(ComposeServiceVolumesConfig vol)
        => vol.Source is not null && vol.Target is not null
        && vol.Type is null or "bind" or "volume"
        && vol.Bind is null && vol.Volume is null && vol.Tmpfs is null;
}

The pattern is identical across the five scalar-or-object converters: check if the value is "simple enough" for the shorthand, emit a scalar if yes, emit a full mapping if no. Read the reverse direction by checking if the parser sees a scalar or a mapping. The only thing that changes between converters is which properties constitute "simple enough" and how the shorthand string is formatted.

I could have abstracted this into a generic base class. I chose not to. Each converter is 60-80 lines. A generic abstraction saves some duplication but introduces indirection. This is infrastructure code that should be boring and obvious.
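Reduced to its essentials, the volume shorthand is just such a format/parse pair. These are hypothetical helpers illustrating the pattern, not the production converter, and they ignore Windows paths whose drive letters contain colons:

```csharp
record VolSpec(string Source, string Target, bool ReadOnly = false);

static class VolShorthand
{
    // "source:target" with an optional ":ro" suffix -- the string form
    // the volumes converter emits for simple mounts.
    public static string Format(VolSpec v) =>
        v.ReadOnly ? $"{v.Source}:{v.Target}:ro" : $"{v.Source}:{v.Target}";

    public static VolSpec Parse(string s)
    {
        var parts = s.Split(':');
        return parts.Length == 3 && parts[2] == "ro"
            ? new VolSpec(parts[0], parts[1], true)
            : new VolSpec(parts[0], parts[1]);
    }
}
```

Format and Parse are inverses for simple mounts -- which is exactly the property the round-trip tests below rely on.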


The Extensions Converter

Extensions are different from the other union types. They are not a field that can be a scalar or an object. They are a dictionary of x-* keys that must appear at the same level as the other fields of the parent object, not nested under a key called extensions.

In YAML, a compose file with extensions looks like this:

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    x-custom-label: my-value
    x-deploy-options:
      replicas: 3
      region: us-east-1

The x-custom-label and x-deploy-options keys are at the same level as image and ports. They are not under an extensions key. But in the C# model, they are stored in an Extensions dictionary:

public class ComposeService
{
    public string? Image { get; set; }
    public List<ComposeServicePortsConfig>? Ports { get; set; }
    // ... 65 more properties ...

    /// <summary>
    /// Extension fields (x-* keys). Serialized at the parent level,
    /// not nested under an "extensions" key.
    /// </summary>
    public Dictionary<string, object?>? Extensions { get; set; }
}

The DictionaryExtensionsConverter handles the flattening. YamlDotNet's type converter API processes one object at a time, so extensions must be emitted inside the same YAML mapping as the other properties. The converter is called by each parent object's converter -- you saw it in the build converter above as DictionaryExtensionsConverter.EmitExtensions(...).

public sealed class DictionaryExtensionsConverter
{
    public static void EmitExtensions(
        IEmitter emitter, ObjectSerializer serializer,
        Dictionary<string, object?>? extensions)
    {
        if (extensions is null or { Count: 0 }) return;

        foreach (var (key, value) in extensions.OrderBy(kvp => kvp.Key))
        {
            if (!key.StartsWith("x-"))
                throw new InvalidOperationException(
                    $"Extension key '{key}' does not start with 'x-'.");

            emitter.Emit(new Scalar(key));
            serializer(value, value?.GetType() ?? typeof(object));
        }
    }
}

The OrderBy makes the output deterministic -- important for the diff workflow. The x- prefix validation is a safety net: without it, someone could add a key like image to the Extensions dictionary and produce a YAML file with duplicate keys.

var service = new ComposeService
{
    Image = "nginx:latest",
    Extensions = new()
    {
        ["x-custom-label"] = "my-value",
        ["x-deploy-options"] = new Dictionary<string, object?> { ["replicas"] = 3 }
    }
};
// Serializes to:
//   image: nginx:latest
//   x-custom-label: my-value
//   x-deploy-options:
//     replicas: 3

On the read side, any unrecognized x-* key goes into the Extensions dictionary. Extensions round-trip correctly -- the key order might change, but the structural content is preserved.


Round-Trip Testing

Serialization is only correct if it is verifiable. "It looks right" is not a test strategy. I needed automated round-trip validation: deserialize a known YAML file, re-serialize it, and verify that the result is structurally equivalent.

String comparison is too strict -- YAML allows different key ordering, quoting styles, and whitespace. So the round-trip test parses both into object graphs and compares structurally.

[Theory]
[InlineData("docker-compose-simple.yml")]
[InlineData("docker-compose-full.yml")]
[InlineData("docker-compose-networks.yml")]
[InlineData("docker-compose-volumes.yml")]
[InlineData("docker-compose-depends-on.yml")]
[InlineData("docker-compose-healthcheck.yml")]
[InlineData("docker-compose-deploy.yml")]
[InlineData("docker-compose-build-variants.yml")]
[InlineData("docker-compose-ports-variants.yml")]
[InlineData("docker-compose-extensions.yml")]
public async Task RoundTrip_PreservesStructure(string fixture)
{
    var originalYaml = await File.ReadAllTextAsync($"fixtures/{fixture}");
    var file = ComposeFileDeserializer.Deserialize(originalYaml);
    var reserializedYaml = ComposeFileSerializer.Serialize(file);

    // Parse both as YAML documents and compare structurally
    // (string comparison is too strict -- ordering, whitespace, quoting)
    var deserializer = new DeserializerBuilder().Build();
    var original = deserializer.Deserialize<object>(originalYaml);
    var reserialized = deserializer.Deserialize<object>(reserializedYaml);
    Assert.Equivalent(original, reserialized);
}

The Assert.Equivalent is xUnit's deep structural comparison. It compares dictionaries key by key, lists element by element, scalars by value. Key ordering does not matter. String representation does not matter ("80" and 80 both deserialize to the same thing). What matters is that the same data went in and came out.
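The comparison can be pictured as a recursive walk over the generic object graphs that Deserialize&lt;object&gt; produces -- a simplified sketch of the idea, not xUnit's implementation:

```csharp
using System.Collections;
using System.Linq;

static class Structural
{
    // Deep-compare generic YAML object graphs: dictionaries key by key
    // (order-insensitive), lists element by element, scalars by value.
    public static bool Equivalent(object? a, object? b) => (a, b) switch
    {
        (null, null) => true,
        (IDictionary da, IDictionary db) =>
            da.Count == db.Count &&
            da.Keys.Cast<object>().All(k => db.Contains(k) && Equivalent(da[k], db[k])),
        (IList la, IList lb) =>
            la.Count == lb.Count &&
            Enumerable.Range(0, la.Count).All(i => Equivalent(la[i], lb[i])),
        _ => Equals(a, b)
    };
}
```

Two mappings with the same keys in different order compare equal; a changed scalar anywhere in the tree does not.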

Each fixture is a real compose file from the compose-spec repository, Docker documentation, or my own production stacks. Ten fixtures, 420 lines of YAML, exercising all six type converters.

Schema version coverage

The "32 schema versions" headline comes from a separate test suite:

[Theory]
[MemberData(nameof(AllSchemaVersions))]
public void Serialize_ProducesValidYaml_ForSchemaVersion(string version)
{
    var file = CreateMinimalComposeFile(version);
    var yaml = ComposeFileSerializer.Serialize(file);

    // Parse with YamlDotNet -- if it parses, the YAML is syntactically valid
    var deserializer = new DeserializerBuilder().Build();
    var parsed = deserializer.Deserialize<object>(yaml);
    Assert.NotNull(parsed);

    // Validate against the JSON Schema for this version
    var schema = LoadSchema(version);
    var jsonFromYaml = ConvertYamlToJson(yaml);
    var result = schema.Validate(jsonFromYaml);
    Assert.True(result.IsValid, $"Schema validation failed for {version}: {result}");
}

public static IEnumerable<object[]> AllSchemaVersions()
{
    // 32 versions from compose-spec v3.0 through v3.8, v2.0 through v2.4,
    // and the unified spec versions
    return SchemaVersionRegistry.All.Select(v => new object[] { v });
}

Each version gets a ComposeFile with version-appropriate properties, serialized to YAML, converted to JSON, and validated against the official schema. All 32 pass.


The Developer Loop: Change, Compile, Diff

Everything above is plumbing. The payoff is the developer workflow. Infrastructure changes should feel like code changes: modify a line, build, review the diff, deploy.

Diagram
The change-compile-diff loop that typed-docker targets -- a C# property edit compiles into an updated docker-compose.yml whose git diff reads like a code review, making infrastructure changes feel like code changes.

Step 1: Change a C# property

// Before
.WithService("web", s => s
    .WithImage("nginx:1.24")
    .WithPort("80:80"))

// After
.WithService("web", s => s
    .WithImage("nginx:1.25")
    .WithPort("80:80")
    .WithPort("443:443")
    .WithHealthcheck(h => h
        .WithTest("CMD", "curl", "-f", "http://localhost/")
        .WithInterval("30s")
        .WithTimeout("10s")
        .WithRetries(3)))

IntelliSense shows every property. If I write .WithRetries("three") instead of .WithRetries(3), the build fails immediately.

Step 2: Compile

Building the project regenerates docker-compose.yml -- the generator rewrites the file as part of the build, so the working tree holds the updated YAML before anything is deployed.

Step 3: Diff

 services:
   web:
-    image: nginx:1.24
+    image: nginx:1.25
     ports:
       - "80:80"
+      - "443:443"
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3

One C# change produces one YAML diff. This is the entire point. Here is a more complex scenario -- upgrading the database, adding a cache layer, enabling file watching:

 services:
   db:
-    image: postgres:15
+    image: postgres:16
     environment: ...
     volumes: ...
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U postgres"]
+      interval: 10s
+      retries: 5
+  redis:
+    image: redis:7-alpine
+    ports:
+      - "6379:6379"
   web:
     depends_on:
       - db
+      - redis
+    develop:
+      watch:
+        - action: sync
+          path: ./src
+          target: /app/src

Four C# changes, four diff blocks. The develop section requires compose-spec 1.19.0+ -- the [SinceVersion("1.19.0")] annotation tells me that at edit time, not at deploy time.
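The annotation check itself is ordinary reflection. A sketch of how such a check could work -- the attribute, the ServiceModel type, and the helper here are illustrative stand-ins, not the generated code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
sealed class SinceVersionAttribute : Attribute
{
    public Version Version { get; }
    public SinceVersionAttribute(string version) => Version = new Version(version);
}

class ServiceModel
{
    public string? Image { get; set; }
    [SinceVersion("1.19.0")] public object? Develop { get; set; }
}

static class SpecVersionCheck
{
    // Names of properties that require a newer compose-spec than `target`.
    public static IEnumerable<string> TooNewFor(Type t, Version target) =>
        t.GetProperties()
         .Where(p => p.GetCustomAttribute<SinceVersionAttribute>() is { } a
                     && a.Version > target)
         .Select(p => p.Name);
}
```

Targeting 1.18.0, the check flags Develop; targeting 1.19.0 or later, it flags nothing.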


Generated vs Hand-Written: A Complete Stack

To make this concrete, here is a 5-service stack: reverse proxy, web application, background worker, database, and cache. First the C# contributor code that builds the ComposeFile, then the generated YAML, then what you would write by hand.

The C# code

var composeFile = new ComposeFileBuilder()
    .WithService("traefik", s => s
        .WithImage("traefik:v3.0")
        .WithCommand("--api.insecure=true", "--providers.docker=true",
                     "--providers.docker.exposedbydefault=false",
                     "--entrypoints.web.address=:80", "--entrypoints.websecure.address=:443")
        .WithPort("80:80").WithPort("443:443").WithPort("8080:8080")
        .WithVolume("/var/run/docker.sock:/var/run/docker.sock:ro")
        .WithRestart("unless-stopped"))
    .WithService("web", s => s
        .WithImage("myapp/web:${TAG:-latest}")
        .WithBuild(b => b.WithContext(".").WithDockerfile("src/Web/Dockerfile")
            .WithArgs(new() { ["BUILD_CONFIG"] = "Release" }))
        .WithEnvironment("ASPNETCORE_ENVIRONMENT", "Production")
        .WithEnvironment("ConnectionStrings__Default",
            "Host=db;Database=myapp;Username=postgres;Password=secret")
        .WithEnvironment("Redis__ConnectionString", "redis:6379")
        .WithDependsOn("db", "redis").WithRestart("unless-stopped")
        .WithLabels(new() {
            ["traefik.enable"] = "true",
            ["traefik.http.routers.web.rule"] = "Host(`app.example.com`)",
            ["traefik.http.services.web.loadbalancer.server.port"] = "80" })
        .WithHealthcheck(h => h.WithTest("CMD", "curl", "-f", "http://localhost/health")
            .WithInterval("15s").WithTimeout("5s").WithRetries(3).WithStartPeriod("30s")))
    .WithService("worker", s => s
        .WithImage("myapp/worker:${TAG:-latest}")
        .WithBuild(b => b.WithContext(".").WithDockerfile("src/Worker/Dockerfile"))
        .WithEnvironment("ConnectionStrings__Default",
            "Host=db;Database=myapp;Username=postgres;Password=secret")
        .WithEnvironment("Redis__ConnectionString", "redis:6379")
        .WithDependsOn("db", "redis").WithRestart("unless-stopped"))
    .WithService("db", s => s
        .WithImage("postgres:16-alpine")
        .WithEnvironment("POSTGRES_DB", "myapp")
        .WithEnvironment("POSTGRES_USER", "postgres")
        .WithEnvironment("POSTGRES_PASSWORD", "secret")
        .WithVolume("pgdata:/var/lib/postgresql/data").WithRestart("unless-stopped")
        .WithHealthcheck(h => h.WithTest("CMD-SHELL", "pg_isready -U postgres")
            .WithInterval("10s").WithTimeout("5s").WithRetries(5)))
    .WithService("redis", s => s
        .WithImage("redis:7-alpine")
        .WithCommand("redis-server", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru")
        .WithVolume("redisdata:/data").WithRestart("unless-stopped")
        .WithHealthcheck(h => h.WithTest("CMD", "redis-cli", "ping")
            .WithInterval("10s").WithTimeout("5s").WithRetries(3)))
    .WithVolume("pgdata").WithVolume("redisdata")
    .Build();

The generated YAML

services:
  traefik:
    image: traefik:v3.0
    command: ["--api.insecure=true", "--providers.docker=true", ...]
    ports: ["80:80", "443:443", "8080:8080"]
    volumes: [/var/run/docker.sock:/var/run/docker.sock:ro]
    restart: unless-stopped
  web:
    image: myapp/web:${TAG:-latest}
    build:
      context: .
      dockerfile: src/Web/Dockerfile
      args:
        BUILD_CONFIG: Release
    environment:
      ASPNETCORE_ENVIRONMENT: Production
      ConnectionStrings__Default: Host=db;Database=myapp;Username=postgres;Password=secret
      Redis__ConnectionString: redis:6379
    depends_on: [db, redis]
    restart: unless-stopped
    labels:
      traefik.enable: "true"
      traefik.http.routers.web.rule: Host(`app.example.com`)
      traefik.http.services.web.loadbalancer.server.port: "80"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 30s
  worker:
    image: myapp/worker:${TAG:-latest}
    build: {context: ., dockerfile: src/Worker/Dockerfile}
    environment:
      ConnectionStrings__Default: Host=db;Database=myapp;Username=postgres;Password=secret
      Redis__ConnectionString: redis:6379
    depends_on: [db, redis]
    restart: unless-stopped
  db:
    image: postgres:16-alpine
    environment: {POSTGRES_DB: myapp, POSTGRES_USER: postgres, POSTGRES_PASSWORD: secret}
    volumes: [pgdata:/var/lib/postgresql/data]
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5
  redis:
    image: redis:7-alpine
    command: [redis-server, "--maxmemory", 256mb, "--maxmemory-policy", allkeys-lru]
    volumes: [redisdata:/data]
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      retries: 3
volumes:
  pgdata: {}
  redisdata: {}

The build field uses the long form (because dockerfile and args are set). The ports field uses the short form (only target and published). No null properties, no synthetic artifacts.

Generated vs hand-written

I wrote the same stack by hand and diffed: diff generated.yml handwritten.yml produces no output. The generated YAML is not "close to" hand-written quality -- it is indistinguishable from it. That is the goal. If the output looked like machine output -- quoted strings where unnecessary, alphabetical property ordering, empty sequences instead of omissions -- nobody would use it.


Deserialization: The Reverse Path

The serialization pipeline is the primary path -- C# objects to YAML files. But the reverse path matters too. Reading an existing docker-compose.yml into a ComposeFile object allows modification in C# followed by re-serialization.

public static class ComposeFileDeserializer
{
    private static readonly IDeserializer Deserializer = new DeserializerBuilder()
        .WithNamingConvention(UnderscoredNamingConvention.Instance)
        .WithTypeConverter(new ComposeServiceBuildConfigConverter())
        .WithTypeConverter(new ComposeServicePortsConfigConverter())
        .WithTypeConverter(new ComposeServiceVolumesConfigConverter())
        .WithTypeConverter(new ComposeServiceDependsOnConverter())
        .WithTypeConverter(new ComposeServiceDeployConfigConverter())
        .IgnoreUnmatchedProperties()
        .Build();

    public static ComposeFile Deserialize(string yaml)
    {
        return Deserializer.Deserialize<ComposeFile>(yaml);
    }

    public static async Task<ComposeFile> ReadAsync(string path)
    {
        var yaml = await File.ReadAllTextAsync(path);
        return Deserialize(yaml);
    }
}

The deserializer mirrors the serializer: same naming convention, same union converters. The key difference is IgnoreUnmatchedProperties() -- if the YAML contains a field that does not exist in the C# model (a future compose-spec feature, a custom field without the x- prefix), the deserializer silently skips it. This is essential for forward compatibility.

The use case: read, modify, re-serialize

var file = await ComposeFileDeserializer.ReadAsync("docker-compose.yml");
file.Services!["web"].Image = "myapp:v2.0.0";
file.Services!["web"].Deploy ??= new ComposeServiceDeployConfig();
file.Services!["web"].Deploy.Replicas = 3;
await ComposeFileSerializer.WriteAsync(file, "docker-compose.yml");

This is useful for CI/CD pipelines that need to patch a compose file. The alternative is sed or yq. The typed approach catches errors at compile time -- misspell Deploy as Depoly and the compiler rejects it.

Limitations

Deserialization is not lossless. Comments are discarded (a universal YAML library limitation), key ordering may change (re-serialization uses the C# property declaration order), and string quoting style may differ ("80:80" vs 80:80). None of these affect docker compose up. All three affect humans reading diffs. For the "read-modify-write" use case in CI/CD, this is acceptable. For the "generate from scratch" use case with contributors, it is irrelevant.


Property Ordering and Determinism

Deterministic output is not optional. If the same ComposeFile object serializes to different YAML on different runs -- different key ordering, different string representations, different whitespace -- then git diff shows phantom changes and the developer loop breaks.

YamlDotNet serializes properties in declaration order. The source generator emits properties in an order that matches the compose specification's conventional layout: identity (image, build), runtime (command, entrypoint), configuration (environment, env_file), networking (ports, expose, networks), storage (volumes), dependencies (depends_on), lifecycle (restart, healthcheck, deploy), and extensions last. This matches what most humans write and what docker compose config outputs.

Dictionary values (environment, labels, extensions) are sorted by key at serialization time. List values (ports, volumes, depends_on) preserve insertion order. The combination produces output that is byte-identical across runs for the same input:
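One way to get key-sorted dictionary output is to normalize ordering before the emitter sees the values. This is an illustrative sketch, not the generated converter code:

```csharp
// Sketch: wrap a dictionary in a SortedDictionary with ordinal comparison
// so enumeration order (and therefore YAML key order) is stable across
// runs, regardless of insertion order. The generated converters do
// something equivalent internally.
static SortedDictionary<string, T> Sorted<T>(IDictionary<string, T> source)
    => new(source, StringComparer.Ordinal);

var env = new Dictionary<string, string>
{
    ["ZEBRA"] = "1",
    ["ALPHA"] = "2",
};
// Sorted(env) enumerates ALPHA before ZEBRA, so the serialized
// `environment:` block is identical no matter how the dictionary was built.
```

Ordinal comparison matters here: culture-sensitive sorting could order keys differently on machines with different locales, which would defeat the byte-identical guarantee.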

[Fact]
public void Serialize_IsDeterministic()
{
    var file = CreateFullComposeFile(); // All 5 services, all property types

    var yaml1 = ComposeFileSerializer.Serialize(file);
    var yaml2 = ComposeFileSerializer.Serialize(file);
    var yaml3 = ComposeFileSerializer.Serialize(file);

    Assert.Equal(yaml1, yaml2);
    Assert.Equal(yaml2, yaml3);
}

Three serializations of the same object produce byte-identical output. This test runs on every build.


Performance

I benchmarked the full pipeline -- object creation, serialization, file write -- for stacks of varying sizes:

| Services | Properties set | Serialize (ms) | YAML lines |
|----------|---------------|----------------|------------|
| 1        | 3             | 0.4            | 5          |
| 5        | 47            | 1.2            | 73         |
| 15       | 140           | 3.1            | 210        |
| 50       | 480           | 9.8            | 720        |

Under 10ms for 50 services. The serializer is not a bottleneck. The only operation in the developer loop that takes meaningful time is the compilation itself. I chose YamlDotNet because it is the standard .NET YAML library, it supports custom type converters, and it handles the full YAML specification. For compose files measured in hundreds of lines, performance is noise.
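For readers who want to reproduce numbers like the table above, a minimal `Stopwatch`-based harness looks like this. `CreateStack` is a hypothetical helper that builds a `ComposeFile` with the given number of services; absolute timings will vary by machine.

```csharp
// Illustrative timing harness, not the original benchmark.
var file = CreateStack(serviceCount: 50);

// Warm up so JIT compilation cost is excluded from the measurement.
ComposeFileSerializer.Serialize(file);

const int iterations = 100;
var sw = System.Diagnostics.Stopwatch.StartNew();
for (var i = 0; i < iterations; i++)
    ComposeFileSerializer.Serialize(file);
sw.Stop();

Console.WriteLine($"{sw.Elapsed.TotalMilliseconds / iterations:F2} ms per serialization");
```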


Error Handling

The C# type system constrains ComposeFile objects to valid shapes: the builder validates property values and the compiler enforces types, so any object that compiles will serialize to valid YAML. The one edge case is extensions. If the Extensions dictionary contains a key without the x- prefix, the serializer throws immediately:

var service = new ComposeService
{
    Image = "nginx:latest",
    Extensions = new() { ["not-an-extension"] = "oops" }
};
// InvalidOperationException: Extension key 'not-an-extension' does not start with 'x-'.

Fail fast, fail loud, fail with a message that tells you what to fix. Not "YAML parse error at line 47" from docker compose up twenty minutes later.
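The guard itself is a few lines. This sketch shows the shape of the check; the real version lives inside the generated emitter:

```csharp
// Sketch: validate extension keys before emitting. Compose extension
// fields must start with "x-"; anything else is a modeling error, so
// fail immediately with the offending key in the message.
static void ValidateExtensionKeys(IDictionary<string, object>? extensions)
{
    if (extensions is null) return;
    foreach (var key in extensions.Keys)
    {
        if (!key.StartsWith("x-", StringComparison.Ordinal))
            throw new InvalidOperationException(
                $"Extension key '{key}' does not start with 'x-'.");
    }
}
```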


What This Enables

The serialization pipeline bridges the type system (Part X, Part XI) and the practical workflows (Part XIII, Part XIV). With it in place: contributors can build and test ComposeFile objects in isolation, CI/CD pipelines can generate environment-specific compose files from the same C# code, version migration becomes a property rename and rebuild, and infrastructure review works like code review -- diffs show exactly what changed.
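The "environment-specific compose files from the same C# code" point can be sketched in a few lines. `BuildBaseStack` and the replica counts are hypothetical; the idea is that the environment variation lives in typed code rather than in duplicated YAML:

```csharp
// Sketch: one shared stack definition, two environment-specific outputs.
ComposeFile ForEnvironment(string env)
{
    var file = BuildBaseStack(); // hypothetical shared service definitions
    var web = file.Services!["web"];
    web.Deploy ??= new ComposeServiceDeployConfig();
    web.Deploy.Replicas = env == "prod" ? 4 : 1;
    return file;
}

await ComposeFileSerializer.WriteAsync(ForEnvironment("dev"),  "docker-compose.dev.yml");
await ComposeFileSerializer.WriteAsync(ForEnvironment("prod"), "docker-compose.prod.yml");
```

Because both files come from the same code path, a review diff between dev and prod shows only the properties that actually differ.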

The serialization layer is invisible when it works. That is the point. You think in C#, the YAML appears. You never think about indentation, quoting, key names, null handling, union type syntax, or extension field placement. The type converters handle it all, once, tested against 32 schema versions.


Closing

Typed C# in, valid YAML out. The serialization pipeline handles union types, null omission, naming conventions, and extension round-trips. The developer loop -- change a C# property, rebuild, diff the YAML -- makes infrastructure changes feel like code changes. Round-trip tested against 32 schema versions. Output indistinguishable from hand-written YAML.

Part XIII shows how this becomes practical at scale: the contributor pattern that lets each service be a self-contained, testable, composable unit.


The serializer, type converters, and round-trip tests are from FrenchExDev.Net.DockerCompose.Bundle. The converters are source-generated alongside the model types.
