The serializer surface
`TraefikSerializer` is the entire runtime API. It has four logical sections:

- The throwing YAML API (`Deserialize`, `DeserializeStatic`, `DeserializeDynamic`, `Serialize`)
- JSON I/O via `System.Text.Json` (`SerializeJson`, `DeserializeJson`)
- The schema-validating `Try*` API, which runs `JsonSchema.Net` against the embedded schema before returning a typed POCO
- Async file I/O with atomic rename, the API the Traefik file-provider use case actually wants
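At the call site, the four surfaces look roughly like this (a sketch built only from the method names above; `yaml`, `path`, and `ct` are assumed locals, and this won't compile without the library):

```csharp
var cfg    = TraefikSerializer.DeserializeDynamic(yaml);         // throwing YAML API
var json   = TraefikSerializer.SerializeJson(cfg);               // JSON I/O
var result = TraefikSerializer.TryDeserializeDynamic(yaml);      // schema-validating Try*
await TraefikSerializer.WriteDynamicToFileAsync(path, cfg, ct);  // async atomic file write
```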
The class is just over 350 lines, mostly because each surface has a static and a dynamic variant. The interesting design choices are concentrated in the configuration block at the top:
```csharp
public static class TraefikSerializer
{
    private static readonly IDeserializer Deserializer = new DeserializerBuilder()
        .WithNamingConvention(CamelCaseNamingConvention.Instance)
        .IgnoreUnmatchedProperties()
        .Build();

    private static readonly ISerializer Serializer = new SerializerBuilder()
        .WithNamingConvention(CamelCaseNamingConvention.Instance)
        .ConfigureDefaultValuesHandling(DefaultValuesHandling.OmitNull)
        .Build();

    private static readonly JsonSerializerOptions JsonOptions = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
        WriteIndented = false,
    };

    private static readonly Lazy<JsonSchema?> StaticSchema = new(
        () => LoadEmbeddedSchema("traefik-v3-static.json"));

    private static readonly Lazy<JsonSchema?> DynamicSchema = new(
        () => LoadEmbeddedSchema("traefik-v3-file-provider.json"));

    // …
}
```

Three trade-offs encoded here:
- `CamelCaseNamingConvention` for both YAML and JSON, because Traefik's wire format is camelCase (`entryPoints`, `passHostHeader`, `loadBalancer`). The C# side stays PascalCase per IDE conventions; the convention bridges the two without per-property `[YamlMember]` attributes.
- `IgnoreUnmatchedProperties()` on the deserializer. A future Traefik version that adds new keys must not crash deserialization in old consumers. The schema-validating `Try*` API closes the gap by validating the YAML against the embedded schema, catching typos as schema violations before the typed deserializer ever runs.
- `OmitNull` on serialization. A Traefik config with explicit `null` everywhere is unusable; `OmitNull` produces minimal output and matches what Traefik's documentation examples look like.
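The naming-convention bridge is easy to see in isolation. A minimal demo (the `Demo` type here is illustrative, not one of the library's models; requires the YamlDotNet package):

```csharp
using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

var serializer = new SerializerBuilder()
    .WithNamingConvention(CamelCaseNamingConvention.Instance)
    .ConfigureDefaultValuesHandling(DefaultValuesHandling.OmitNull)
    .Build();

var yaml = serializer.Serialize(new Demo
{
    EntryPoints = new List<string> { "websecure" },
    PassHostHeader = true,
});

// PascalCase properties come out as camelCase keys; the unset
// CertResolver is omitted entirely rather than emitted as null.
Console.Write(yaml);

sealed class Demo
{
    public List<string>? EntryPoints { get; set; }
    public bool? PassHostHeader { get; set; }
    public string? CertResolver { get; set; }
}
```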
The flat-union problem (and YamlDotNet's behavior)
Recall from Part 5 that `TraefikHttpMiddleware` is a flat class with 25 nullable properties. YamlDotNet handles this naturally on the deserialize side: a YAML key like `basicAuth:` matches the `BasicAuth` property by camelCase, the matching nested structure is built, and every other property stays null. On the serialize side, `OmitNull` drops every unset branch. The round trip preserves the shape without any custom converter.

This is the entire reason the flat-union pattern is acceptable. A discriminated union represented as a tagged record (`abstract record TraefikHttpMiddleware { record AddPrefix(…) : TraefikHttpMiddleware; … }`) would need a custom converter in both YamlDotNet and System.Text.Json, and the converter would have to know about every concrete type. The flat shape is one tiny C# language compromise (no compile-time exclusivity) traded for zero serialization machinery.
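To make the shape concrete, here is a minimal sketch of the flat-union pattern with two of the ~25 branches (the option types are illustrative stubs, not the real library classes):

```csharp
// Illustrative stubs standing in for the real nested option classes.
public sealed class BasicAuthOptions { public List<string>? Users { get; set; } }
public sealed class StripPrefixOptions { public List<string>? Prefixes { get; set; } }

public sealed class TraefikHttpMiddleware
{
    // Exactly one branch is expected to be non-null per middleware;
    // the exclusivity is a convention, not something the compiler enforces.
    public BasicAuthOptions? BasicAuth { get; set; }
    public StripPrefixOptions? StripPrefix { get; set; }
    // … 23 more nullable branches in the real class.
}
```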
Schema-validating deserialization
The throwing API does no schema validation; it's there for the cases where you trust your input. The interesting path is `TryDeserializeStatic` / `TryDeserializeDynamic`, which run the embedded schema against the original YAML before doing the typed projection:
```csharp
private static Result<T> TryDeserializeWithSchema<T>(string yaml, JsonSchema? schema)
    where T : notnull
{
    if (schema is null)
    {
        return Result<T>.Failure(new ValidationResult(
            "Embedded JSON schema could not be loaded."));
    }

    // Step 1: parse the YAML into a JsonNode tree that preserves the
    // *original* shape — including keys the typed deserializer would
    // silently drop because of IgnoreUnmatchedProperties. YamlToJson
    // honours YAML 1.2 core scalar resolution (true/false → bool,
    // 42 → int, 3.14 → float, etc.) so JsonSchema.Net sees real types.
    JsonNode? node;
    try
    {
        node = YamlToJson.Parse(yaml);
    }
    catch (Exception ex)
    {
        return Result<T>.Failure(new ValidationResult(
            $"YAML parse failure: {ex.Message}"));
    }

    if (node is null)
        return Result<T>.Failure(new ValidationResult("YAML document is empty."));

    // Step 2: schema-validate the original YAML shape. This catches
    // typo'd keys (`additionalProperties: false` in the schema) AND
    // type errors (string where bool expected, etc.).
    try
    {
        using var doc = JsonDocument.Parse(node.ToJsonString());
        var evaluation = schema.Evaluate(doc.RootElement, new EvaluationOptions
        {
            OutputFormat = OutputFormat.List,
        });
        if (!evaluation.IsValid)
            return Result<T>.Failure(BuildValidationResult(evaluation));
    }
    catch (Exception ex)
    {
        return Result<T>.Failure(new ValidationResult(
            $"Schema validation failed: {ex.Message}"));
    }

    // Step 3: only after the schema is happy, deserialize into the
    // typed POCO. The schema has already vetted the shape; this is
    // just the type projection.
    try
    {
        var typed = Deserializer.Deserialize<T>(yaml);
        return Result<T>.Success(typed);
    }
    catch (Exception ex)
    {
        return Result<T>.Failure(new ValidationResult(
            $"Typed deserialization failed after schema validation: {ex.Message}"));
    }
}
```

The three-step shape is the load-bearing detail:
- Parse the YAML to a `JsonNode` tree (via the in-house `YamlToJson` helper) so `JsonSchema.Net` can `Evaluate` it. This is also where YAML 1.2 scalar resolution happens: `true` / `42` / `3.14` get the right JSON type, not all-strings, so the schema's type checks fire correctly.
- Schema-evaluate the original node tree, not the typed POCO. This is the only way to catch typo'd keys: the typed deserializer would silently drop them (`IgnoreUnmatchedProperties`), but the schema's `additionalProperties: false` rejects them.
- Only then deserialize into the typed POCO. The schema has already vetted the shape; this is essentially a type projection.
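The in-house `YamlToJson` helper itself isn't shown in this part. As a rough illustration of the idea only (not the real implementation), a minimal version over YamlDotNet's representation model might look like this — plain scalars get YAML 1.2 core resolution, quoted scalars stay strings:

```csharp
using System.Globalization;
using System.IO;
using System.Linq;
using System.Text.Json.Nodes;
using YamlDotNet.Core;
using YamlDotNet.RepresentationModel;

public static class YamlToJsonSketch
{
    public static JsonNode? Parse(string yaml)
    {
        var stream = new YamlStream();
        stream.Load(new StringReader(yaml));
        return stream.Documents.Count == 0 ? null : Convert(stream.Documents[0].RootNode);
    }

    static JsonNode? Convert(YamlNode node) => node switch
    {
        YamlMappingNode map => new JsonObject(map.Children.Select(kv =>
            KeyValuePair.Create(((YamlScalarNode)kv.Key).Value!, Convert(kv.Value)))),
        YamlSequenceNode seq => new JsonArray(seq.Children.Select(Convert).ToArray()),
        YamlScalarNode s => Scalar(s),
        _ => null,
    };

    static JsonNode? Scalar(YamlScalarNode s) =>
        // Only plain (unquoted) scalars are resolved; "42" stays a string.
        s.Style != ScalarStyle.Plain ? JsonValue.Create(s.Value) : s.Value switch
        {
            null or "" or "~" or "null" => null,
            "true" => JsonValue.Create(true),
            "false" => JsonValue.Create(false),
            var v when long.TryParse(v, NumberStyles.Integer, CultureInfo.InvariantCulture, out var i)
                => JsonValue.Create(i),
            var v when double.TryParse(v, NumberStyles.Float, CultureInfo.InvariantCulture, out var d)
                => JsonValue.Create(d),
            var v => JsonValue.Create(v),
        };
}
```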
If any step fails, the error message is preserved end-to-end. Schema errors are aggregated by `BuildValidationResult`/`CollectErrors`, which walk the `EvaluationResults` tree and join per-instance-location errors with semicolons. The consumer gets back a `Result<TraefikStaticConfig>` whose failure carries enough information to point at the bad key.
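A plausible shape for that aggregation helper (the real one is internal to the library; this sketch assumes JsonSchema.Net's list-format output, where `Details` is a flat list of per-location results):

```csharp
// Hypothetical sketch — joins each location's errors with semicolons,
// mirroring the behaviour described above.
private static ValidationResult BuildValidationResult(EvaluationResults evaluation)
{
    var messages = evaluation.Details
        .Where(d => !d.IsValid && d.Errors is { Count: > 0 })
        .Select(d => $"{d.InstanceLocation}: {string.Join("; ", d.Errors!.Values)}")
        .ToList();
    return new ValidationResult(string.Join("; ", messages));
}
```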
Atomic file writes for the file-provider use case
The reason most consumers want to generate Traefik dynamic config from .NET in the first place is that they want a long-running process to write dynamic.yml and have Traefik's file provider hot-reload it. There is exactly one way to do this safely: write the new config to a sibling .tmp file, then atomically rename it over the destination. A half-written file caught mid-watch-cycle will crash Traefik.
The serializer ships this primitive built-in:
```csharp
public static Task<ResultUnit> WriteDynamicToFileAsync(
    string path, TraefikDynamicConfig config, CancellationToken ct = default)
    => WriteToFileAsyncCore(path, config, DynamicSchema.Value, ct);

private static async Task<ResultUnit> WriteToFileAsyncCore<T>(
    string path, T config, JsonSchema? schema, CancellationToken ct) where T : notnull
{
    if (schema is null) return ResultUnit.Failure();

    // Validate the typed config against the embedded schema *before*
    // touching the disk. A serializer that produces an invalid Traefik
    // config is a bug; this surfaces it at write time rather than at
    // Traefik's startup.
    if (!TryValidateAgainstSchema(config, schema, out _))
        return ResultUnit.Failure();

    var yaml = Serializer.Serialize(config!);
    var tmpPath = path + ".tmp";
    try
    {
        // Write the temp file fully (and fsync via DisposeAsync) before
        // touching the destination, then atomically rename. File.Replace
        // exists on Windows + .NET; File.Move handles the no-target case.
        await File.WriteAllTextAsync(tmpPath, yaml, ct).ConfigureAwait(false);

        const int maxAttempts = 3;
        for (var attempt = 0; attempt < maxAttempts; attempt++)
        {
            ct.ThrowIfCancellationRequested();
            try
            {
                if (File.Exists(path))
                    File.Replace(tmpPath, path, destinationBackupFileName: null);
                else
                    File.Move(tmpPath, path);
                return ResultUnit.Success();
            }
            catch (IOException) when (attempt < maxAttempts - 1)
            {
                await Task.Delay(50, ct).ConfigureAwait(false);
            }
        }
        return ResultUnit.Failure();
    }
    catch (Exception)
    {
        try { if (File.Exists(tmpPath)) File.Delete(tmpPath); } catch { /* best effort */ }
        return ResultUnit.Failure();
    }
}
```

The retry loop on `IOException` is not academic: Windows occasionally returns `ERROR_SHARING_VIOLATION` if the destination file is being read at the exact moment of the rename, which is precisely the race the file provider produces (Traefik reads, you write, both happen at the same instant). Three attempts with a 50 ms gap are enough in practice to absorb the contention.
Two safety properties guaranteed by this function:
- The destination file is never half-written. Either it's the old content or the new content; never a torn write.
- An invalid config never reaches disk. Schema validation runs before any byte is written. If your code path has a bug that produces a malformed config, the temp file is never created.
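A typical call site then reduces to one awaited call per config change (a hedged sketch: the path and the `IsSuccess` member on `ResultUnit` are assumptions, since the article only shows the `Success()`/`Failure()` factories):

```csharp
// Hypothetical consumer: regenerate the dynamic config whenever routes change
// and let Traefik's file provider pick up the atomic rename.
var result = await TraefikSerializer.WriteDynamicToFileAsync(
    "/etc/traefik/dynamic/dynamic.yml", config, ct);
if (!result.IsSuccess)
    logger.LogError("Refusing to publish: config failed schema validation or rename");
```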
The realistic sample
Synthetic test fixtures don't catch enough edge cases. `samples/realistic-dynamic.yaml` is a hand-written ~2 KB Traefik dynamic config that exercises HTTP + TCP routers, multiple middleware types, weighted load balancing, TLS, and a chain of middlewares on one router:
```yaml
# Realistic Traefik dynamic configuration sample.
# Exercises multiple middlewares (auth, prefix, headers, retry), TLS options,
# weighted load balancing, and a TCP router. Used by RealisticRoundTripTests
# to flush out edge cases the synthetic minimal fixtures don't reach.
http:
  routers:
    api-router:
      rule: "Host(`api.example.com`) && PathPrefix(`/v1`)"
      entryPoints:
        - websecure
      service: api-backend
      middlewares:
        - api-auth
        - api-strip-prefix
        - api-rate-limit
      tls:
        certResolver: letsencrypt
    web-router:
      rule: "Host(`www.example.com`)"
      entryPoints:
        - web
        - websecure
      service: web-frontend
      middlewares:
        - secure-headers
      priority: 100
  services:
    api-backend:
      loadBalancer:
        servers:
          - url: "http://api-1.internal:8080"
          - url: "http://api-2.internal:8080"
        passHostHeader: true
        responseForwarding:
          flushInterval: "100ms"
        healthCheck:
          path: "/healthz"
          interval: "30s"
          timeout: "5s"
    web-frontend:
      loadBalancer:
        servers:
          - url: "http://web-1.internal:3000"
  middlewares:
    api-auth:
      basicAuth:
        users:
          - "admin:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"
        realm: "API"
    api-strip-prefix:
      stripPrefix:
        prefixes:
          - "/v1"
    api-rate-limit:
      rateLimit:
        average: 100
        burst: 50
    secure-headers:
      headers:
        frameDeny: true
        sslRedirect: true
        stsSeconds: 31536000
        customResponseHeaders:
          X-Frame-Options: "DENY"
          X-Content-Type-Options: "nosniff"
tcp:
  routers:
    postgres-router:
      rule: "HostSNI(`db.example.com`)"
      entryPoints:
        - postgres
      service: postgres-backend
      tls:
        passthrough: true
  services:
    postgres-backend:
      loadBalancer:
        servers:
          - address: "postgres-1.internal:5432"
```

`RealisticRoundTripTests` round-trips this fixture through the typed model and back:
```csharp
[Fact]
public void DeserializeDynamic_RealisticSample_PreservesDiscriminatedMiddlewareBranches()
{
    var yaml = File.ReadAllText("Samples/realistic-dynamic.yaml");
    var config = TraefikSerializer.DeserializeDynamic(yaml);

    var auth = config.Http!.Middlewares!["api-auth"];
    auth.BasicAuth.ShouldNotBeNull();
    auth.StripPrefix.ShouldBeNull();

    var stripPrefix = config.Http.Middlewares["api-strip-prefix"];
    stripPrefix.StripPrefix.ShouldNotBeNull();
    stripPrefix.BasicAuth.ShouldBeNull();
}

[Fact]
public void RoundTrip_RealisticSample_StableShape()
{
    var yaml = File.ReadAllText("Samples/realistic-dynamic.yaml");
    var first = TraefikSerializer.DeserializeDynamic(yaml);
    var serialized = TraefikSerializer.Serialize(first);
    var second = TraefikSerializer.DeserializeDynamic(serialized);

    second.Http!.Routers!.Keys.ShouldBe(first.Http!.Routers!.Keys, ignoreOrder: true);
    second.Http.Services!.Keys.ShouldBe(first.Http.Services!.Keys, ignoreOrder: true);
    second.Http.Middlewares!.Keys.ShouldBe(first.Http.Middlewares!.Keys, ignoreOrder: true);
    second.Tcp!.Routers!.Keys.ShouldBe(first.Tcp!.Routers!.Keys, ignoreOrder: true);
}
```

The `PreservesDiscriminatedMiddlewareBranches` test is the empirical proof that the flat-union pattern from Part 5 round-trips correctly: every middleware in the sample has exactly one branch populated coming in, and exactly one populated coming out, across both YAML deserialization and re-serialization. The `ignoreOrder: true` on `RoundTrip_RealisticSample_StableShape` is an honest acknowledgement that YAML → object → YAML re-orders dictionary keys; structural equality on the content is what matters.
← Part 7: Catching Misuse at Edit-Time · Next: Part 9 — Tests, Property Checks, and Quality Gates →