The Compiler as Architect
"The compiler doesn't get tired on Friday afternoon. It doesn't skip the checklist. It doesn't forget the wiki page. It enforces the same rules on every build, every developer, every branch."
Ten domains. Ten wiki pages. Ten enforcement test suites. Ten convention documents that drift, rot, and confuse. And ten attributes that replace all of it with compile-time generation and compile-time enforcement.
This final part does three things. First, it totals the bill — the Grand Convention Tax across every domain this series has covered. Second, it extracts the meta-pattern: the reusable architecture that every SG+Analyzer pair follows, regardless of domain. Third, it draws the boundaries — because Contention is not free, and pushing it too far creates its own kind of debt.
The synthesis is not a victory lap. It is an accounting exercise, a blueprint, and a warning.
The Grand Convention Tax Table
Every part in this series measured two things: the cost of Convention (documentation + enforcement code) and the cost of Contention (attribute + generated code + analyzer). This table collects all ten measurements into a single view.
The "Convention Documentation" column counts lines of wiki pages, ADRs, onboarding guides, and coding standards documents. The "Convention Enforcement" column counts lines of ArchUnit tests, NetArchTest assertions, CI scripts, and custom analyzers written by hand. Together, they represent overhead — work that exists solely to police invisible rules.
The "Contention" columns show what replaces all of that: an attribute declaration, a Source Generator that produces correct code, and an analyzer that produces compile errors for structural violations.
| Domain | Convention Documentation (lines) | Convention Enforcement (lines) | Convention Total Overhead | Contention: Attribute | Contention: Generated (by SG) | Contention: Enforced (by analyzer) |
|---|---|---|---|---|---|---|
| DI | ~50 (wiki + ADR) | ~85 (registration tests, scanning config) | ~135 | `[Injectable(Lifetime.Scoped)]` | DI registration, lifetime validation | INJ001 missing attribute, INJ002 circular dependency |
| Validation | ~60 (validator naming convention, required fields guide) | ~90 (every-command-has-validator tests, pipeline tests) | ~150 | `[Validated]` | FluentValidation-equivalent from required properties | VAL001 missing validator, VAL002 unreachable rule |
| API Contracts | ~80 (API guidelines, response format standard, versioning policy) | ~120 (contract tests, OpenAPI diff checks, response shape tests) | ~200 | `[TypedEndpoint]` | Endpoint, OpenAPI spec, typed client | API001 missing response type, API002 undocumented status code |
| Database Mapping | ~100 (EF conventions doc, naming rules, cascade behavior guide) | ~130 (entity-in-correct-folder tests, naming convention tests, relationship tests) | ~230 | `[AggregateRoot]` / `[ValueObject]` | EF configuration, repository, factory | DDD001 entity outside aggregate, DDD002 value object with identity |
| Testing | ~70 (test plan template, naming convention, coverage thresholds doc) | ~80 (test naming convention tests, coverage gate CI script) | ~150 | `[ForRequirement(typeof(Feature))]` | Compliance matrix, coverage report | TST001 untested requirement, TST002 orphaned test |
| Architecture | ~90 (ADR collection, layer rules, dependency direction doc) | ~110 (NetArchTest suite, layer violation tests, CI checks) | ~200 | `[Layer("Domain")]` | InternalsVisibleTo restrictions, dependency constraints | ARCH001 illegal dependency, ARCH002 layer bypass |
| Configuration | ~40 (options documentation, binding convention, validation rules) | ~70 (options binding tests, startup integration tests) | ~110 | `[StronglyTypedOptions("Smtp")]` | Binder, validator, registration | OPT001 missing section, OPT002 unvalidated option |
| Error Handling | ~60 (error handling guide, Result usage standard, exception policy) | ~100 (Result pattern tests, exhaustive match tests, exception leak tests) | ~160 | `[MustHandle]` | Exhaustive Match() methods, error catalog | ERR001 unmatched Result, ERR002 swallowed error |
| Logging | ~50 (logging standard, structured logging guide, sensitive data policy) | ~60 (log level tests, structured property tests) | ~110 | `[LoggerMessage]` | High-performance log methods, structured parameters | LOG001 unstructured log call, LOG002 sensitive data in log |
| Security | ~80 (security policy, authorization model doc, permission matrix) | ~100 (authorization tests, permission coverage tests, role mapping tests) | ~180 | `[RequiresPermission(Permission.X)]` | Policy registration, permission checks | SEC001 unprotected endpoint, SEC002 missing permission |
| TOTAL | ~680 | ~945 | ~1,625 | 10 attributes | Fully generated | 20 analyzer rules |
1,625 lines of documentation and enforcement code. Written by hand. Maintained by hand. Out of date by hand.
Replaced by 10 attribute types, the Source Generators that read them, and the analyzers that guard them. The generated code does not count as overhead because it is the implementation itself — it replaces code the developer would have written anyway. The analyzer rules do not count as overhead because they are written once, shipped as a NuGet package, and never maintained per-project.
The Visual Summary
The tallest bars — Database Mapping, API Contracts, Architecture Enforcement — are the domains where conventions are most complex, most numerous, and most fragile. They are also the domains where Contention pays for itself fastest.
What 1,625 Lines Means in Practice
1,625 lines is not an abstract number. It is:
- 16 wiki pages that someone must keep accurate, that new hires must read, that tech leads must review during onboarding
- 20+ test files that test conventions, not behavior — tests that break when conventions change, not when features break
- 3-5 CI pipeline stages dedicated to convention enforcement (ArchUnit, contract diffing, coverage gates)
- 1 full-time equivalent week per quarter spent updating documentation when conventions evolve
And it compounds. A team of 5 can keep 680 lines of documentation mostly accurate. A team of 20 cannot. A team of 50 does not even try — the wiki becomes a museum of good intentions, visited by nobody, trusted by nobody, enforced by the subset of developers who were present when the convention was established.
The Drift Timeline
Convention documentation has a predictable lifecycle. This is not cynicism — it is a pattern observable in every codebase older than two years:
Month 0: Convention is established. Wiki page is written. It is accurate.
Month 3: First exception is discovered. A developer needs to deviate from the convention for a legitimate reason. The exception is discussed in Slack, approved in a PR comment, and never added to the wiki. The code now has an undocumented exception to a documented rule.
Month 6: Second developer encounters the first exception's code. Assumes the wiki is wrong, or that the convention has changed. Follows the exception pattern in their own code. Now two classes deviate. Nobody updates the wiki.
Month 9: New hire joins. Reads the wiki. Follows the documented convention. Code review catches that they did not follow the exception pattern. New hire is confused — the wiki says one thing, the codebase says another. Senior developer explains the exception verbally. New hire's onboarding takes an extra day.
Month 12: Team decides to "clean up the wiki." A developer spends half a day updating it. The next sprint, two conventions change. The wiki is out of date again within two weeks.
Month 18: Nobody reads the wiki anymore. New hires are told "just look at how the other services are structured." The wiki still exists. It still appears in onboarding checklists. Nobody checks whether new hires actually read it.
Month 24: A production bug is traced to a convention violation that the enforcement tests did not catch — because the enforcement tests were written for the original convention, not the exception pattern that 40% of the codebase now follows.
This is not a failure of discipline. This is the natural entropy of any system where the documentation and the code are separate artifacts maintained by separate processes. The only way to prevent it is to make the documentation and the code the same artifact. That is what an attribute does.
The Enforcement Paradox
Convention enforcement code has its own irony: it must be maintained with the same discipline it is supposed to enforce.
When a convention changes — perhaps the team decides that repositories no longer need to live in a Repositories/ namespace — the enforcement test must change too. But the developer who changes the convention is not always the developer who updates the test. The test breaks. The team discusses whether to update the test or revert the convention change. Sometimes the test is simply deleted, because "we'll rewrite it later." The enforcement disappears, and the convention is now unguarded.
This paradox is structural, not cultural. It affects diligent teams as much as careless ones. The problem is that convention enforcement is meta-code: code about code. Meta-code has all the maintenance burdens of regular code, plus the additional burden of keeping it in sync with the conventions it describes.
Contention eliminates this paradox because the enforcement (analyzer) and the implementation (SG) are the same package. When the attribute definition changes, the SG and analyzer change together, in the same PR, tested by the same test suite. There is no wiki to update because the attribute IS the documentation. There is no enforcement test to maintain because the analyzer IS the enforcement. There is no sync problem because there is only one artifact.
The Meta-Pattern
Every domain in this series follows the same architecture. Not similar — the same. The details change (DI registration vs. EF configuration vs. endpoint generation), but the flow is identical. This section extracts that flow into a reusable blueprint.
The Five Steps
Every SG+Analyzer pair follows five steps:
- Developer declares intent — by adding an attribute to a class, method, or property
- Source Generator reads the attribute — at compile time, using the Roslyn compilation model
- Source Generator emits new C# source — registrations, configurations, contracts, reports, anything that was previously hand-written boilerplate
- Analyzer inspects the code — for structural violations that the SG cannot fix (wrong access modifiers, illegal dependencies, missing attributes, incorrect usage patterns)
- Analyzer emits diagnostics — compile errors or warnings with exact file, line, column, and a fix suggestion that IDEs can auto-apply
The developer writes step 1. The tooling handles steps 2-5. The compiler enforces the result.
The M2/M3 Connection
If you have read Meta-Metamodeling, this pattern maps directly to the modeling layers:
- The attribute is an M1 instance — a concrete declaration on a concrete class. `[AggregateRoot]` on `Order` is an M1 fact: "Order is an aggregate root."
- The Source Generator is the M2-to-M1 compiler — it reads M1 instances (attributes) and produces M1 artifacts (generated code) according to M2 rules (the generation template).
- The attribute definition (the class `AggregateRootAttribute`) is an M2 concept — it defines what "aggregate root" means in your domain model.
- The SG framework (Roslyn's `IIncrementalGenerator` API) is the M3 meta-metamodel — it defines what a Source Generator is and how it reads the compilation model.
This is not an analogy. It is the same modeling hierarchy that drives the CMF's five-stage generation pipeline. The attribute is a model element. The SG is a model compiler. The M3 layer provides the infrastructure for building model compilers. Whether you are generating EF configurations or DI registrations or API endpoints, the metamodeling structure is identical.
The Template
Every SG+Analyzer pair in this series follows the same code structure. Here it is, abstracted to a reusable template:
The Attribute (M2 concept definition):
```csharp
[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = false)]
public sealed class MyDomainAttribute : Attribute
{
    // Properties that parameterize the generation
    public string Name { get; }
    public MyDomainOption Option { get; set; } = MyDomainOption.Default;

    public MyDomainAttribute(string name) => Name = name;
}
```

The Source Generator (M2-to-M1 compiler):
```csharp
[Generator]
public sealed class MyDomainGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Step 1: Find all classes decorated with the attribute
        var declarations = context.SyntaxProvider
            .ForAttributeWithMetadataName(
                "MyNamespace.MyDomainAttribute",
                predicate: static (node, _) => node is ClassDeclarationSyntax,
                transform: static (ctx, _) => ExtractModel(ctx))
            .Where(static m => m is not null);

        // Step 2: Generate code for each decorated class
        context.RegisterSourceOutput(declarations, static (spc, model) =>
        {
            var source = GenerateSource(model!);
            spc.AddSource($"{model!.ClassName}.g.cs", source);
        });
    }

    private static MyDomainModel? ExtractModel(
        GeneratorAttributeSyntaxContext context) { /* ... */ }

    private static string GenerateSource(MyDomainModel model) { /* ... */ }
}
```

The Analyzer (boundary guardian):
```csharp
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class MyDomainAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor MissingAttribute = new(
        id: "MD001",
        title: "Missing [MyDomain] attribute",
        messageFormat: "Class '{0}' implements IMyDomain but lacks [MyDomain]",
        category: "MyDomain",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(MissingAttribute);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(
            GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSymbolAction(AnalyzeNamedType,
            SymbolKind.NamedType);
    }

    private static void AnalyzeNamedType(SymbolAnalysisContext context)
    {
        var type = (INamedTypeSymbol)context.Symbol;

        // Check: does this type implement the interface but lack the attribute?
        if (ImplementsMyDomainInterface(type) && !HasMyDomainAttribute(type))
        {
            var diagnostic = Diagnostic.Create(
                MissingAttribute, type.Locations[0], type.Name);
            context.ReportDiagnostic(diagnostic);
        }
    }
}
```

Three files. One attribute, one generator, one analyzer. This is the complete architecture for a Contention layer in any domain. The DI part uses this template with `[Injectable]`. The validation part uses it with `[Validated]`. The API part uses it with `[TypedEndpoint]`. The database part uses it with `[AggregateRoot]`. The architecture part uses it with `[Layer]`. Ten domains, one template.
The Template Applied: Ten Domains, One Architecture
To make the meta-pattern concrete, here is how each domain instantiates it. The "Developer Writes" column is the entire human effort; the "SG Generates" and "Analyzer Guards" columns happen automatically at compile time.
| Domain | Developer Writes | SG Generates | Analyzer Guards |
|---|---|---|---|
| DI | `[Injectable(Lifetime.Scoped)]` on `OrderService` | `services.AddScoped<IOrderService, OrderService>()` in a registration extension method | Circular dependency detection, missing interface, duplicate registration |
| Validation | `[Validated]` on `CreateOrderCommand` | `CreateOrderCommandValidator` class with rules derived from `required` and `[Range]` properties | Command without validator, unreachable validation rule, missing `required` on non-nullable reference |
| API | `[TypedEndpoint(Method.Post, "/orders")]` on handler | Minimal API endpoint registration, OpenAPI spec fragment, typed HTTP client method | Mismatched request/response types, undocumented error status, missing authorization |
| Database | `[AggregateRoot]` on `Order` entity | EF `IEntityTypeConfiguration<Order>`, `IOrderRepository` interface, factory method, owned entity configs for value objects | Entity referencing another aggregate's internals, value object with identity, navigation property crossing aggregate boundary |
| Testing | `[ForRequirement(typeof(CreateOrderFeature))]` on test class | Compliance matrix row, coverage percentage calculation, traceability report entry | Untested requirement, test referencing nonexistent requirement, acceptance criteria without corresponding test |
| Architecture | `[Layer("Domain")]` on assembly | `InternalsVisibleTo` restrictions, dependency constraint file, layer violation constants | Domain layer referencing Infrastructure, Application layer bypassing Domain, circular layer dependency |
| Config | `[StronglyTypedOptions("Smtp")]` on `SmtpOptions` | `IOptions<SmtpOptions>` binder, startup registration, validation logic from `[Range]`/`[Required]` | Missing config section in appsettings, unvalidated option property, type mismatch between config and property |
| Errors | `[MustHandle]` on `Result<Order, OrderError>` | Exhaustive `Match()` method with one branch per error variant, error catalog entry | Unmatched result (missing case in switch), swallowed error (result assigned but never checked), exception thrown instead of Result returned |
| Logging | `[LoggerMessage(EventId = 1001, Level = LogLevel.Information, Message = "Order created")]` | High-performance static log method with pre-compiled structured parameters | Unstructured `_logger.LogInformation($"...")` call, sensitive data in log template, duplicate event ID |
| Security | `[RequiresPermission(Permission.ManageOrders)]` on endpoint | Policy registration, permission-to-role mapping, authorization attribute on generated endpoint | Unprotected endpoint (no `[RequiresPermission]`), permission referenced but not defined in enum, role escalation path |
Every row follows the same pattern: declare, generate, guard. The developer's cognitive load is one attribute. The compiler's workload is three steps. The generated code is correct by construction. The analyzer catches the structural violations that the SG cannot prevent.
This is not ten different architectures. It is one architecture, instantiated ten times.
Building Your Own Contention Layer
Not every convention deserves a Source Generator. The meta-pattern is powerful, but building an SG+Analyzer pair is an investment: Roslyn APIs have a learning curve, incremental generators require careful caching, and analyzer testing requires the Microsoft.CodeAnalysis.Testing infrastructure. The question is not "can I build this?" but "should I?"
The Decision Matrix
Four factors determine whether a convention is worth promoting to a Contention layer:
| Factor | Convention Is Enough | Contention Is Worth It |
|---|---|---|
| Instance count | <10 entities following this convention | >10 entities, growing over time |
| Change frequency | New instances added rarely (quarterly) | New instances added regularly (weekly) |
| Violation severity | Cosmetic or easily caught in review | Security, correctness, or data integrity risk |
| Generated code complexity | Convention requires 1-2 lines of boilerplate | Convention requires 10+ lines of boilerplate per instance |
If a convention governs 3 classes and hasn't changed in a year, a wiki page is fine. Write the documentation, move on. The investment in an SG+Analyzer pair would take longer than the cumulative time spent on convention violations.
If a convention governs 50 classes, with 2 new ones added per sprint, and violations cause subtle runtime failures — that is where Contention pays for itself in weeks, not months.
The Break-Even Calculation
The cost of building a minimal SG+Analyzer pair is roughly:
- Source Generator: 4-8 hours for a developer familiar with Roslyn
- Analyzer: 2-4 hours for a single diagnostic rule
- Testing: 4-8 hours for generator snapshot tests + analyzer verification tests
- NuGet packaging: 1-2 hours for the analyzer-as-NuGet-package setup
Total: ~15-25 hours for the first pair. Subsequent pairs are faster because the infrastructure (test helpers, NuGet pipeline, Roslyn utilities) is reusable.
The cost of Convention for a single domain is:
- Documentation: 2-4 hours to write, 1-2 hours per quarter to update
- Enforcement tests: 4-8 hours to write, 2-4 hours per quarter to update
- Developer time lost to violations: 1-4 hours per sprint across the team
If the convention enforcement costs 10 hours per quarter and the SG+Analyzer pair costs 20 hours to build, the break-even point is 2 quarters. After that, the convention overhead drops to zero and the SG+Analyzer pair requires near-zero maintenance.
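As a sanity check, that estimate reduces to a two-line calculation. The figures below are this section's rough estimates, not measured data:

```csharp
// Hypothetical break-even sketch using the estimates above.
double buildCostHours = 20.0;            // one-time SG+Analyzer cost (midpoint of the 15-25h range)
double conventionHoursPerQuarter = 10.0; // recurring documentation + enforcement upkeep

double breakEvenQuarters = buildCostHours / conventionHoursPerQuarter;

Console.WriteLine($"Break-even after {breakEvenQuarters} quarters"); // Break-even after 2 quarters
```

Every quarter after the second is pure savings, since the Convention upkeep cost recurs indefinitely while the Contention build cost does not.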
A Minimal SG Skeleton
For teams starting their first Source Generator, the absolute minimum is smaller than most expect:
```csharp
[Generator]
public sealed class MinimalGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        var provider = context.SyntaxProvider.ForAttributeWithMetadataName(
            "MyApp.MarkerAttribute",
            predicate: static (node, _) => node is ClassDeclarationSyntax,
            transform: static (ctx, _) =>
            {
                var symbol = (INamedTypeSymbol)ctx.TargetSymbol;
                return (symbol.ContainingNamespace.ToDisplayString(), symbol.Name);
            });

        context.RegisterSourceOutput(provider, static (spc, model) =>
        {
            var (ns, name) = model;
            spc.AddSource($"{name}.g.cs", $$"""
                namespace {{ns}};

                partial class {{name}}
                {
                    // Generated code here
                    public static string GeneratedInfo => "{{name}} was processed";
                }
                """);
        });
    }
}
```

27 lines. That is a working incremental Source Generator. It finds every class marked with `[Marker]` and generates a partial class declaration that adds a static property. Replace the template with DI registration, EF configuration, endpoint wiring, or any other boilerplate — the structure stays the same.
A Minimal Analyzer Skeleton
```csharp
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class MinimalAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "MY001",
        title: "Convention violation",
        messageFormat: "'{0}' violates the convention: {1}",
        category: "MyDomain",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSymbolAction(ctx =>
        {
            var type = (INamedTypeSymbol)ctx.Symbol;
            if (ShouldReport(type))
                ctx.ReportDiagnostic(
                    Diagnostic.Create(Rule, type.Locations[0],
                        type.Name, "reason"));
        }, SymbolKind.NamedType);
    }

    private static bool ShouldReport(INamedTypeSymbol type) => false; // Your logic
}
```

30 lines. That is a working Roslyn analyzer. It registers a symbol action, checks a condition, and reports a diagnostic. The IDE shows a squiggle. The build shows a warning (or an error, if you raise the severity). The developer gets feedback without reading a wiki.
Together — 57 lines — you have a complete Contention layer. Attribute + Generator + Analyzer. The same architecture used in every part of this series.
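The severity baked into the descriptor is only a default. A consuming project can escalate or relax any rule per diagnostic ID in its `.editorconfig`, using the standard `dotnet_diagnostic` syntax (shown here for the MY001 rule from the analyzer skeleton above):

```ini
# .editorconfig in the consuming project:
# promote the convention diagnostic from warning to build-breaking error
dotnet_diagnostic.MY001.severity = error
```

This lets one team ship the analyzer with a lenient default while another enforces it as a hard build failure, without recompiling the package.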
Testing the SG+Analyzer Pair
A Contention layer that is not tested is not trustworthy. If the SG generates wrong code, every decorated class is wrong. If the analyzer reports false positives, developers suppress it globally and the enforcement disappears. Testing must be thorough precisely because the impact radius is the entire codebase.
Generator testing uses snapshot testing. Write a C# string that represents a decorated class, run the generator, and compare the output to a verified snapshot. The Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing package makes this straightforward:
```csharp
[Fact]
public async Task Generator_Produces_Expected_Output()
{
    var source = """
        using MyApp;

        namespace TestNamespace;

        [Marker]
        public partial class TestClass { }
        """;

    var generated = await GeneratorTestHelper.RunGenerator<MinimalGenerator>(source);

    // Verify the generated output matches the snapshot
    await Verify(generated);
}
```

When the generation template changes intentionally, you update the snapshots. When it changes accidentally, the test catches it. Snapshot testing is the right strategy because generated code is deterministic — the same input always produces the same output.
Analyzer testing uses the Roslyn DiagnosticVerifier pattern. You provide source code that should trigger a diagnostic and verify it appears at the expected location:
```csharp
[Fact]
public async Task Analyzer_Reports_Missing_Attribute()
{
    var source = """
        using MyApp;

        namespace TestNamespace;

        // This class implements IMyDomain but lacks [MyDomain]
        public class {|MY001:BadClass|} : IMyDomain { }
        """;

    await VerifyAnalyzerAsync<MinimalAnalyzer>(source);
}
```

The `{|MY001:...|}` syntax marks where the diagnostic should appear. If the analyzer reports the diagnostic at a different location, or does not report it at all, the test fails.
Both kinds of tests run in milliseconds because they operate on in-memory compilations. They do not require a running application, a database, or a file system. This is another advantage of Contention over Convention: the enforcement tests for conventions are integration tests that take minutes. The enforcement tests for analyzers are unit tests that take milliseconds.
Packaging and Distribution
An SG+Analyzer pair is most effective when packaged as a NuGet package. The consuming project adds one PackageReference and gets the attribute, the generator, and the analyzer automatically. No configuration. No setup. No opt-in.
The NuGet packaging has a specific structure:
```xml
<ItemGroup>
  <!-- The analyzer and SG must be in the analyzers directory -->
  <None Include="$(OutputPath)\MyDomain.Generators.dll"
        Pack="true"
        PackagePath="analyzers/dotnet/cs" />
  <None Include="$(OutputPath)\MyDomain.Analyzers.dll"
        Pack="true"
        PackagePath="analyzers/dotnet/cs" />
</ItemGroup>
```

The `analyzers/dotnet/cs` path tells NuGet to load these assemblies as analyzers and source generators, not as runtime references. The consuming project never sees the Roslyn APIs — it only sees the attributes and the generated code.
This packaging model means that adopting a Contention layer is a single line in the .csproj file. No wiki page to read. No setup guide to follow. No enforcement tests to copy from a template project. One package reference, and the compiler starts generating and guarding.
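On the consuming side, adoption is a single ItemGroup entry. A sketch, with a hypothetical package ID:

```xml
<!-- Consuming project's .csproj. The package ID "MyDomain.Contention" is illustrative. -->
<ItemGroup>
  <PackageReference Include="MyDomain.Contention" Version="1.0.0" PrivateAssets="all" />
</ItemGroup>
```

`PrivateAssets="all"` is the conventional setting for analyzer packages: it keeps the tooling dependency from flowing transitively to downstream consumers of your own package.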
When Contention Goes Too Far
Every architectural pattern has a failure mode. Convention's failure mode is drift — the rules stop matching reality. Contention's failure mode is friction — the compiler becomes an obstacle instead of an assistant.
Analyzer Fatigue
A project with 5 analyzer rules feels empowering. A project with 50 feels oppressive. When every class requires 3 attributes and every method triggers 4 diagnostics, developers start doing two things:
- Adding attributes mechanically, without understanding what they mean
- Suppressing analyzers with `#pragma warning disable` blocks
Both defeat the purpose. Mechanical attributes are no better than mechanical convention-following — the developer still does not understand the intent. And suppressed analyzers are worse than missing analyzers, because they create a false sense of coverage.
The guideline: keep analyzer rules to the minimum that prevents real violations. If a diagnostic fires frequently and the fix is always the same mechanical change, the SG should generate that code instead of the analyzer complaining about its absence. Analyzers are for structural violations that require human judgment. Generators are for boilerplate that requires no judgment.
Over-Generation
A Source Generator that produces 500 lines of code for every decorated class is not necessarily a good thing. If 400 of those lines are never used, the generator is inflating compile time, bloating the assembly, and making generated code harder to navigate.
The guideline: generate what the developer would have written by hand. No more. If the developer would not have written a ToString() override, the SG should not generate one. If the developer would not have written a JSON converter, the SG should not emit one. The SG replaces boilerplate — it does not invent new functionality that nobody asked for.
Aggressive generation also causes a subtler problem: developers stop understanding what their code does. When 60% of the assembly is generated, debugging requires navigating generated files, understanding generation templates, and reasoning about code that no human wrote. This is manageable when the generated code is simple (DI registrations, EF configurations). It becomes a problem when the generated code is complex (validation logic, state machines, workflow orchestration).
Build Time Impact
Incremental Source Generators are designed to be fast — they cache intermediate results and only regenerate when inputs change. But a poorly written SG that scans the entire compilation on every keystroke can add seconds to the IDE feedback loop. When the feedback loop degrades, developers lose the primary benefit of Contention: instant, in-editor diagnostics.
The guideline: use `ForAttributeWithMetadataName`, not `CreateSyntaxProvider` with broad predicates. Cache aggressively. Avoid calling `Compilation.GetSemanticModel()` more than necessary. Profile your SG with `dotnet build --no-incremental -bl` and inspect the binlog for generator timings.
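As a sketch of what the fast path looks like, assuming a hypothetical `MyDomain.InjectableAttribute`: the pipeline filters down to decorated classes before any semantic work happens, and projects each match to a value-equatable key so incremental caching stays effective.

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

[Generator]
public sealed class InjectableGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Only nodes carrying the attribute enter the pipeline; no whole-compilation scans.
        var services = context.SyntaxProvider.ForAttributeWithMetadataName(
            "MyDomain.InjectableAttribute",
            predicate: static (node, _) => node is ClassDeclarationSyntax,
            // Project down to a plain string so the incremental cache can compare by value.
            transform: static (ctx, _) => ctx.TargetSymbol.ToDisplayString());

        context.RegisterSourceOutput(services, static (spc, typeName) =>
            spc.AddSource($"{typeName}.DI.g.cs", $"// DI registration for {typeName}"));
    }
}
```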
The Inner Platform Trap
The most dangerous failure mode is building an attribute DSL that becomes its own programming language. When an attribute has 8 optional parameters, 3 enum flags, and a string that accepts a mini-expression language, you have not eliminated convention — you have moved it into attribute syntax that is harder to document and harder to learn than the convention it replaced.
```csharp
// This has gone too far
[DomainEntity(
    Table = "orders",
    Schema = "sales",
    AuditMode = AuditMode.Full,
    SoftDelete = true,
    TenantIsolation = TenantMode.Column,
    CacheStrategy = "sliding:5m",
    Serialization = SerializationMode.SnakeCase | SerializationMode.NullIgnore,
    ValidationProfile = "strict",
    EventSourcing = true)]
public class Order { }
```

This attribute is not a declaration of intent — it is a configuration file wearing a C# costume. It has the same problem as XML configuration: too many knobs, too many interactions between knobs, and no way to understand the result without reading documentation.
The guideline: an attribute should express what something is, not how it should be configured. [AggregateRoot] says what Order is. The SG decides how to configure it based on the domain model — conventions applied by the generator, not parameters on the attribute. If you need 8 parameters, you need 8 smaller attributes or a configuration object, not one mega-attribute.
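A sketch of the preferred shapes, with illustrative type names: a parameterless marker where generation is fully determined by class structure, and a single-parameter attribute where one behavioral choice genuinely varies.

```csharp
using System;

// Marker attribute: says what the type IS. Everything else is derived by the generator.
[AttributeUsage(AttributeTargets.Class)]
public sealed class AggregateRootAttribute : Attribute { }

// One parameter, because lifetime genuinely varies between services and must be explicit.
[AttributeUsage(AttributeTargets.Class)]
public sealed class InjectableAttribute : Attribute
{
    public InjectableAttribute(Lifetime lifetime = Lifetime.Scoped) => Lifetime = lifetime;
    public Lifetime Lifetime { get; }
}

public enum Lifetime { Singleton, Scoped, Transient }
```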
Escape Hatches
Even well-designed Contention layers need escape hatches. Sometimes a class genuinely does not fit the pattern. Sometimes a legacy module cannot be migrated yet. Sometimes a third-party library requires a registration that the SG does not support.
The standard escape hatch is `#pragma warning disable`:

```csharp
#pragma warning disable INJ001 // This class is registered manually in legacy module
public class LegacyPaymentGateway : IPaymentGateway
{
    // Manual DI registration in LegacyModule.cs
}
#pragma warning restore INJ001
```

This is not a failure. This is the system working as designed. The diagnostic ID documents which rule is being bypassed. The comment documents why. The `#pragma warning restore` limits the scope. Code review can scrutinize `#pragma warning disable` blocks specifically, rather than hoping to catch convention violations in 2,000 lines of changes.
The guideline: track `#pragma warning disable` counts per analyzer rule. If a rule is suppressed more than 10% of the time, the rule is too strict, the SG is too rigid, or the domain has more exceptions than the pattern can handle. Either loosen the rule, add a generator option, or accept that this domain is better served by Convention.
The Debugging Tax
Generated code adds a debugging cost. When a test fails and the stack trace passes through generated code, the developer must read code they did not write. This is not unique to Source Generators — every framework produces stack traces through framework code. But SG-generated code is closer to the developer's domain, which makes the dissonance more jarring.
Mitigations:
- Use `#line` directives in generated code to map back to the original source. When the developer steps through a generated DI registration, the debugger can jump to the attribute that caused it.
- Generate readable code. Minimize clever optimizations. Use the same variable names a human would choose. Add comments in the generated file explaining which attribute triggered which block.
- Make generated files discoverable. In Visual Studio and Rider, generated files appear under the Analyzers node in Solution Explorer. Name the files descriptively: `OrderService.DI.g.cs`, not `GeneratedCode_7f3a.g.cs`.
- Apply `[DebuggerStepThrough]` to trivial generated methods (registrations, wiring) so the debugger skips them entirely during step-through debugging.
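Combined, these mitigations might produce a generated file like the following sketch (the file path, line number, and type names are illustrative, not from the series' actual generator):

```csharp
// <auto-generated/>
// Generated because OrderService is marked [Injectable(Lifetime.Scoped)].
using Microsoft.Extensions.DependencyInjection;

public static partial class GeneratedRegistrations
{
    [System.Diagnostics.DebuggerStepThrough] // trivial wiring: skip it when stepping
    public static IServiceCollection AddMyDomainServices(this IServiceCollection services)
    {
#line 12 "Services/OrderService.cs"
        // The debugger maps this line back to the [Injectable] attribute's location.
        services.AddScoped<IOrderService, OrderService>();
#line default
        return services;
    }
}
```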
The debugging tax is real but manageable. It is also temporary — once the developer understands the generation pattern, they stop needing to read the generated code. The same cannot be said of convention enforcement tests, which must be read and understood every time they fail.
A Day in Two Codebases
To make the difference visceral, consider two teams on the same Monday morning. Both have a new developer who joined last week. Both are adding a new aggregate entity — Invoice — to an existing order management system.
Team Convention (wiki + NetArchTest + Scrutor scanning):
The new developer creates Invoice.cs. They put it in src/Domain/Models/ because that is where Order.cs lives. The wiki says entities go in src/Domain/Entities/. Nobody told them about the wiki. The code compiles. They run the tests. A NetArchTest test fails: "Entity Invoice should be in the Entities folder." They move the file. They add a public Guid Id { get; set; } because that is how Order does it. The wiki says entities should inherit from Entity<TId>. They never read that page. They write an EF configuration in src/Infrastructure/Data/Configurations/InvoiceConfiguration.cs because they copied the pattern from OrderConfiguration.cs — but they forget to register it in OnModelCreating. The app compiles. The test suite passes. They push. CI passes. Two days later, someone runs the app and gets a "The entity type 'Invoice' requires a primary key" exception at runtime. The new developer spends 45 minutes finding the missing configuration registration. Total time: 4 hours for a class that took 20 minutes to write.
Team Contention (SG + analyzer + [AggregateRoot]):
The new developer creates Invoice.cs in whatever folder they choose — it does not matter. They add [AggregateRoot] because every entity they have seen has it (the attribute is in every entity file — it is impossible to miss). The SG generates the EF configuration, the repository interface, and the DI registration. The code compiles. They push. CI passes. The app runs. Total time: 25 minutes.
The new developer on Team Contention never read a wiki. Never ran an architecture test. Never wondered which folder to use. The attribute told the compiler what Invoice is. The compiler did the rest. The developer learned the pattern by reading the code — the attributes on existing entities — not by reading documentation that may or may not be accurate.
The Cost-Benefit Matrix
Not every team and not every project needs Contention. This matrix summarizes when Convention is sufficient and when Contention justifies the investment.
| Scenario | Convention Is Enough | Contention Pays Off |
|---|---|---|
| Team size | 2-5 developers who talk daily | 10+ developers, or multiple teams |
| Project age | <1 year, stable conventions | 3+ years, conventions have drifted |
| Domain complexity | Simple CRUD, few entities | Rich domain model, many aggregates |
| Onboarding frequency | New hire every 6+ months | New hire every 1-2 months |
| Violation cost | Code review catches it, low risk | Security breach, data corruption, production outage |
| Boilerplate per instance | 1-5 lines | 10-50 lines |
| Convention count | 3-5 simple rules | 10+ interacting rules |
| Regulatory requirements | None | SOC2, HIPAA, PCI-DSS (auditability matters) |
The sweet spot for Contention is a growing team working on a complex domain with high violation costs and significant boilerplate. The sweet spot for Convention is a small team working on a simple domain where everyone fits in the same room and the wiki has 3 pages.
Most teams start with Convention and graduate to Contention as the pain increases. The inflection point is when the team spends more time maintaining documentation and enforcement tests than building features. When the Convention Tax exceeds the cost of building the SG+Analyzer pair, the investment is obvious.
The Migration Path
Teams rarely adopt Contention for all 10 domains at once. The practical migration path is:
Pick the domain with the highest violation cost. For most teams, this is Security or Database Mapping. A security violation is a breach. A database mapping violation is data corruption. Start where the stakes justify the investment.
Build the SG+Analyzer pair for that domain. Follow the template from the meta-pattern section. Start with one attribute, one generator, one analyzer rule. Ship it as an internal NuGet package.
Measure the before and after. Count convention violations per sprint before adoption. Count them after. If the violations drop to near zero and the team stops asking "where does this go?" — the investment has paid off.
Expand to the next domain. The second pair is faster to build because the infrastructure is in place: the NuGet packaging, the test helpers, the Roslyn utility methods. Each subsequent domain takes roughly half the time of the first.
Stop when the marginal cost exceeds the marginal benefit. Not every domain needs Contention. If the team has 3 simple conventions that rarely cause violations, leave them as conventions. The goal is not purity — it is productivity.
The Hybrid Model
Most mature codebases will end up with a hybrid: Contention for high-stakes, high-frequency domains (Security, Database, Architecture) and Convention for low-stakes, low-frequency domains (Logging format, test organization). This is not a compromise — it is the optimal allocation of engineering effort. Build the guardrails where the cliff is steepest.
Lessons from Ten Domains
Building the SG+Analyzer pairs described in this series revealed several patterns that only become visible when you apply the same architecture across many different domains.
Lesson 1: The Attribute Is the API
The most important design decision is not the generator template or the analyzer rule — it is the attribute surface. The attribute is what developers see, type, and think about. If the attribute is confusing, the entire Contention layer fails regardless of how correct the generated code is.
Good attributes are:
- One concept, one attribute. `[AggregateRoot]` means one thing. `[DomainEntity(IsAggregate = true, IsValueObject = false)]` means nothing without documentation.
- Named after what the thing IS, not what the generator DOES. `[Injectable]` describes the class's role in the DI system. `[GenerateDIRegistration]` describes an implementation detail of the tooling.
- Parameterized only when the parameter changes behavior. `[Injectable(Lifetime.Scoped)]` has one parameter because the lifetime varies between services and must be explicit. `[AggregateRoot]` has zero parameters because the generation is fully determined by the class structure.
Lesson 2: Generators Should Be Invisible
The best Source Generator is one the developer never thinks about. They add the attribute, they write their business logic, and the generated code silently appears — correct, complete, invisible. The developer should not need to open the generated .g.cs file to understand their own code.
This means generated code should follow the same patterns that a senior developer would write by hand. The DI registration should use the same services.AddScoped<>() syntax. The EF configuration should use the same builder.HasKey() API. The endpoint registration should look like a normal Minimal API call. When a developer does open the generated file — to debug, to learn, to verify — they should see familiar code, not an alien syntax that requires its own documentation.
Lesson 3: Analyzer Messages Are Documentation
The text of an analyzer diagnostic is the most-read documentation in the entire system. More developers will read "DDD001: Entity 'Money' cannot reference aggregate 'Order' directly — use the aggregate's ID instead" than will ever read the wiki page explaining aggregate boundaries.
This means analyzer messages must be:
- Specific. Not "convention violation" but "Entity 'X' references aggregate 'Y' directly."
- Prescriptive. Not "this is wrong" but "use the aggregate's ID instead."
- Linked to a help URL (via the `HelpLinkUri` property on `DiagnosticDescriptor`) that explains the rule in detail for developers who want to understand the why.
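The DDD001 message quoted above might be declared like this (the help URL is a placeholder):

```csharp
using Microsoft.CodeAnalysis;

public static class DddDiagnostics
{
    public static readonly DiagnosticDescriptor EntityReferencesAggregate = new(
        id: "DDD001",
        title: "Entity references aggregate directly",
        // Specific and prescriptive: names the types involved and states the fix.
        messageFormat: "Entity '{0}' cannot reference aggregate '{1}' directly — use the aggregate's ID instead",
        category: "Design",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true,
        helpLinkUri: "https://example.com/rules/DDD001");
}
```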
The analyzer message is not an error report — it is a teaching moment. Every time a developer sees a diagnostic, they learn a rule. Over time, the diagnostics become rare because the developers have internalized the patterns. The analyzer teaches itself out of a job — which is exactly what good documentation should do, but rarely does.
Lesson 4: Start with Warnings, Graduate to Errors
When introducing a Contention layer to an existing codebase, setting every analyzer rule to `DiagnosticSeverity.Error` on day one will produce hundreds of compile errors and make the team hate the tool before it has a chance to help.
The migration strategy is:
- Week 1: Ship all rules as `Warning`. The build succeeds. The IDE shows yellow squiggles. Developers see the warnings but are not blocked.
- Weeks 2-4: The team fixes warnings in new code and gradually in existing code. The warning count drops.
- Month 2: Promote the most important rules (security, data integrity) to `Error`. The build fails for new violations but existing code has been cleaned up.
- Month 3+: Promote remaining rules to `Error` as the warning count reaches zero.
This gradient gives the team time to learn the rules, clean up the codebase, and build confidence in the tooling — all before the compiler starts refusing to build.
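One way to run this gradient without recompiling the analyzer package is per-repository severity overrides in `.editorconfig` (the rule IDs are the ones this series uses as examples):

```ini
# Week 1: every rule ships as a warning
dotnet_diagnostic.INJ001.severity = warning
dotnet_diagnostic.DDD001.severity = warning

# Month 2: promote high-stakes rules to errors by changing one line
# dotnet_diagnostic.DDD001.severity = error
```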
Lesson 5: The SG and Analyzer Must Be Tested Together
It is tempting to test the SG in isolation and the analyzer in isolation. But the most interesting bugs occur at the intersection: the analyzer flags code that the SG generated, or the SG generates code that the analyzer does not recognize as compliant.
Integration tests that run both the SG and analyzer on the same test compilation catch these interaction bugs. The test provides a decorated class, runs the SG to produce generated code, then runs the analyzer on the combined compilation (original + generated). If the analyzer reports a diagnostic on the generated code, the test fails — because the SG and analyzer disagree about what correct code looks like.
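A sketch of such a test, assuming hypothetical `InjectableGenerator` and `InjectableAnalyzer` types from the same package:

```csharp
using System.Collections.Immutable;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;
using Xunit;

public class GeneratorAnalyzerContractTests
{
    [Fact]
    public async Task Generated_code_is_compliant_with_the_analyzer()
    {
        var compilation = CSharpCompilation.Create("Tests",
            new[] { CSharpSyntaxTree.ParseText(
                "[MyDomain.Injectable] public class OrderService { }") },
            options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // Run the SG and fold the generated sources back into the compilation.
        CSharpGeneratorDriver.Create(new InjectableGenerator())
            .RunGeneratorsAndUpdateCompilation(compilation, out var combined, out _);

        // Run the analyzer over original + generated code together.
        var diagnostics = await combined
            .WithAnalyzers(ImmutableArray.Create<DiagnosticAnalyzer>(new InjectableAnalyzer()))
            .GetAnalyzerDiagnosticsAsync();

        // If the analyzer flags the generator's output, the two disagree about correct code.
        Assert.Empty(diagnostics.Where(d => d.Severity >= DiagnosticSeverity.Warning));
    }
}
```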
Lesson 6: Version Your Attribute Contract
Attributes are an API. Consuming projects depend on the attribute's constructor signature, property names, and enum values. If you rename a property or remove an enum value, every consuming project breaks.
Treat attribute types with the same versioning discipline as public APIs:
- Never remove an attribute property in a minor version. Deprecate it with `[Obsolete]` and remove it in the next major version.
- Never rename an enum value. Add the new name and mark the old one `[Obsolete]`.
- Use nullable properties for optional features instead of default values that might be confused with intentional choices.
- Document breaking changes in the package release notes, the same way you would document a breaking change in a public API.
The attribute is the contract between the developer and the generation pipeline. If the contract is unstable, the trust in the tooling erodes. Developers who have been burned by attribute changes will resist adopting new Contention layers.
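A sketch of the deprecation path for a renamed property (names and version numbers illustrative):

```csharp
using System;

[AttributeUsage(AttributeTargets.Class)]
public sealed class InjectableAttribute : Attribute
{
    // Hypothetical v2.3 change: superseded by ServiceLifetime.
    // Kept so existing consumers still compile; removed in the next major version.
    [Obsolete("Use ServiceLifetime instead. This property will be removed in v3.0.")]
    public string? Scope { get; set; }

    public Lifetime ServiceLifetime { get; set; } = Lifetime.Scoped;
}

public enum Lifetime { Singleton, Scoped, Transient }
```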
Lesson 7: The Attribute Should Survive a Domain Discussion
The ultimate test of an attribute's quality is whether a non-developer — a product owner, a domain expert, a security auditor — can read the attribute and understand what it means.
[AggregateRoot] survives this test. A domain expert knows what an aggregate root is. They can look at the code and verify that Order is correctly marked as an aggregate root.
[GenerateEfConfiguration(UseSplitQueries = true, CascadeDelete = CascadeMode.Restrict)] does not survive this test. It is an implementation detail that only a developer familiar with EF Core can parse.
The attribute should be a bridge between the domain model and the code model. When it is, it serves as documentation for both audiences. When it is not, it is just configuration in a different syntax.
The Master Evolution
This diagram shows every domain across all four eras — what each domain requires, and how the requirements change as you move from Code to Contention.
In Era 3, every domain requires three things: the implementation, the documentation, and the enforcement. In Era 4, every domain requires one thing: the attribute. The SG produces the implementation. The analyzer replaces the enforcement. The attribute replaces the documentation.
The SG+Analyzer Meta-Pattern
A detailed view of the reusable architecture, showing how data flows from the developer's declaration through generation and analysis to the final compilation.
This is the architecture. Every SG+Analyzer pair in this series — from [Injectable] to [RequiresPermission] — is an instantiation of this pattern with different attribute definitions, different generation templates, and different diagnostic rules. The Roslyn infrastructure is shared. The meta-pattern is shared. Only the domain-specific logic changes.
The Trust Gradient
The final diagram positions all 10 domains on a spectrum from "trust the developer" (Convention) to "trust the compiler" (Contention). The position reflects how dangerous a violation is and how much the domain benefits from compile-time enforcement.
Reading the Gradient
Trust the Developer (green): Logging and Configuration. A logging violation — an unstructured log call, a missing correlation ID — is annoying but not catastrophic. The system still works. A configuration violation — a missing options validation — surfaces at startup, not in production traffic. These domains benefit from Contention (Microsoft's own [LoggerMessage] proves that), but the urgency is low. If your team has limited SG investment budget, these domains can wait.
Trust Code Review (blue): DI, Validation, Testing. A DI violation — a missing registration, a wrong lifetime — causes a runtime exception that is caught in the first integration test or the first manual test. It is painful but recoverable. A validation violation — a command without a validator — is caught when the first invalid request arrives. A testing violation — an untested requirement — is caught during quality review. These domains have medium violation cost and medium convention overhead. Contention is a significant quality improvement but not an existential necessity.
Trust CI/CD (orange): API Contracts, Architecture, Error Handling. An API contract violation — a missing field, a changed response shape — is a breaking change for consumers. It may not surface until a downstream service deploys and discovers the inconsistency. An architecture violation — a domain layer referencing infrastructure — creates a coupling that compounds over months, making the system increasingly rigid. An error handling violation — a swallowed Result, an unmatched error case — creates silent failures that are difficult to diagnose. These domains have high violation cost and high convention overhead. Contention transforms the development experience.
Trust the Compiler (red): Database Mapping and Security. A database mapping violation — an entity missing its configuration, a value object stored incorrectly — can corrupt data. Data corruption is permanent. You cannot un-corrupt a production database with a hotfix. A security violation — an unprotected endpoint, a missing permission check — is a breach vector. Breaches have legal, financial, and reputational consequences that dwarf any engineering cost. These domains should be the first to adopt Contention. The compiler must enforce these rules because the cost of a human mistake is unacceptable.
Your Team's Gradient
Every team's gradient is different. A healthcare startup might push Validation all the way to the right (HIPAA compliance). A financial services team might push Error Handling there (transaction integrity). A SaaS platform might push API Contracts to the right (breaking changes affect paying customers). The gradient is not fixed — it reflects the team's risk profile.
To build your team's gradient, ask one question for each domain: What is the worst thing that happens if someone violates this convention?
- If the answer is "the build fails" — Convention is enough.
- If the answer is "a test fails" — Convention is probably enough.
- If the answer is "a customer sees an error" — Contention is worth considering.
- If the answer is "data is corrupted" or "security is compromised" — Contention is not optional.
The Thesis Restated
This series began with a claim: Convention is expensive twice. Once for the documentation. Once for the enforcement code. Both drift independently from the codebase they describe. Both are maintained by different people at different times. Both fail silently — the wiki becomes wrong, the test becomes outdated, the convention stops matching reality, and nobody notices until a new developer asks "why does this test fail when I put my entity in a different folder?"
Ten domains later, the claim is measured. 1,625 lines of convention overhead across 10 domains. 680 lines of documentation that someone must keep accurate. 945 lines of enforcement code that tests conventions, not behavior. All of it replaced by 10 attribute types, 10 Source Generators, and 20 analyzer rules that ship as NuGet packages and require zero per-project maintenance.
But the numbers are not the point. The numbers will be different for every team, every project, every domain. The point is the structural argument:
Convention is an honor system. It works when everyone knows the rules, reads the wiki, follows the patterns, and catches each other's mistakes in code review. Honor systems work in small, stable, experienced teams where trust is high and turnover is low.
Honor systems do not scale. When the team grows, when new hires join, when deadline pressure mounts, when the project enters its third year and the developers who wrote the wiki have moved on — the honor system breaks down. Not dramatically. Gradually. A missed convention here. An outdated wiki page there. An enforcement test that nobody updates because nobody remembers why it exists.
Convention requires policing. Because the honor system is fragile, teams add enforcement: ArchUnit, NetArchTest, custom CI scripts, code review checklists. This policing is expensive — it must be written, maintained, and kept in sync with the conventions it polices. And it runs at test time or CI time, not compile time, so the feedback loop is slow.
Contention replaces the honor system with a type system. An attribute is not an invisible rule. It is a visible declaration, attached to the code it describes, checked by the compiler at the moment of authoring. A Source Generator is not a wiki page — it is a code emitter that produces the correct implementation from the declaration, every time, without drift. An analyzer is not a test that runs in CI — it is a compile-time check that produces a red squiggle in the editor, with a message, a location, and a fix suggestion, before the developer even saves the file.
The type system does not get tired. It does not skip the checklist. It does not forget the wiki page. It does not miss the convention violation buried in line 847 of a 2,000-line pull request. It enforces the same rules on every build, every developer, every branch, every Monday morning and every Friday afternoon.
Convention over Configuration was progress. It eliminated XML hell and replaced it with naming rules. That was the right move in 2005.
But the naming rules became invisible rules. The invisible rules required documentation. The documentation required enforcement. The enforcement required maintenance. And the total cost — the Convention Tax — grew quietly, compounding with every new convention, every new team member, every year of project age.
Consider the concrete progression. A team adopts EF Core. The framework has conventions for table naming, column naming, relationship discovery. The team adds their own conventions: aggregate roots get their own configuration classes, value objects are owned entities, soft delete is implemented via a global query filter. This is reasonable. This is how Convention is supposed to work.
Six months later, the team has 30 entities. The conventions are documented in a wiki page — last updated four months ago by a developer who has since left the team. The enforcement tests cover 20 of the 30 entities — the other 10 were added after the tests were written and nobody updated them. Two entities violate the value-object-as-owned-entity convention because a junior developer did not read the wiki. The violations were not caught in code review because the reviewer did not know the convention either.
Now imagine the same team with [AggregateRoot] and [ValueObject] attributes. The wiki page does not exist because the attribute definition IS the documentation. The enforcement tests do not exist because the analyzer IS the enforcement. The two violating entities do not exist because the analyzer would have flagged them as compile errors the moment the junior developer saved the file. Not in code review. Not in CI. In the editor, with a red squiggle and a message: "DDD002: Type 'Money' is marked as [ValueObject] but has an 'Id' property. Value objects must not have identity."
That is the difference between an honor system and a type system. The honor system depends on everyone reading the wiki, everyone following the conventions, everyone catching violations in code review. The type system depends on the compiler, which is the same for every developer, every day, every build.
Contention over Convention is the next step. Not because Convention is wrong — Convention was essential. But because Convention's cost structure does not survive contact with scale, time, and team growth. Contention's cost structure does, because it shifts the work from humans to the compiler. And the compiler works for free.
The Audit: Look at Your Wiki
Here is an exercise. Open your team's wiki, Confluence space, Notion workspace, or whatever artifact stores your architectural conventions. Count:
- How many pages describe coding conventions (naming, folder structure, patterns to follow)?
- How many pages describe architecture rules (layer dependencies, module boundaries, technology restrictions)?
- How many of those pages were updated in the last 6 months?
- How many pages have a corresponding enforcement test in the codebase?
- How many enforcement tests have been updated since the convention last changed?
For most teams, the answers follow a pattern: many convention pages, few recently updated, fewer with enforcement tests, and almost none where the enforcement test matches the current convention.
That gap — between the convention as documented and the convention as enforced — is the Convention Tax. It is the cost your team pays in confusion, rework, code review arguments, and production incidents caused by convention violations that nobody caught.
Now imagine replacing each of those wiki pages with an attribute. The attribute IS the convention. It cannot drift from the code because it IS the code. It does not require a separate enforcement test because the analyzer IS the enforcement. It does not require onboarding documentation because the IDE shows the attribute, the generated code, and the diagnostic messages — all in context, all at authoring time.
That is what Contention offers: the elimination of the gap between documentation and implementation, between convention and enforcement, between what the team says the rules are and what the compiler actually checks.
What This Series Did Not Cover
Intellectual honesty requires acknowledging the boundaries of the argument.
Contention is a .NET/C# pattern. Source Generators and Roslyn Analyzers are Roslyn features. Other ecosystems have equivalents — Rust's proc macros, TypeScript's compiler plugins, Java's annotation processors — but the specific implementation described in this series is .NET-specific. The meta-pattern (attribute + code generation + compile-time analysis) is universal. The tooling is not.
Contention requires a compilation step. Dynamically typed languages (Python, JavaScript, Ruby) do not have a compiler phase where SG+Analyzers can run. Convention is the dominant paradigm in those ecosystems for good reason — there is no compiler to do the work. TypeScript's type system provides some of the benefits, but it lacks the code generation step that makes Contention fully automatic.
Contention does not eliminate all documentation. The attribute replaces the "how to follow this convention" documentation. It does not replace the "why we chose this architecture" documentation. ADRs still have value — they capture the reasoning behind decisions. What Contention eliminates is the ADR's operational section: "to comply with this decision, do X." The "why we made this decision" section remains useful and is not replaceable by any tooling.
Contention has an adoption curve. Teams unfamiliar with Roslyn APIs will spend 2-4 weeks building their first SG+Analyzer pair. That is a real cost. For some teams, the Convention Tax is lower than the adoption cost, and Convention remains the right choice. The claim is not that Contention is always better — the claim is that Contention has a lower ongoing cost for domains with high instance counts, high violation severity, and high boilerplate.
This series used simplified line counts. The Grand Convention Tax Table estimated overhead by counting lines of documentation and enforcement code. Real-world overhead also includes context-switching costs (searching the wiki), communication costs (asking colleagues about conventions), and opportunity costs (time spent on convention violations instead of features). The table is conservative — the true cost of Convention is higher than 1,625 lines.
Where to Go From Here
If this series has convinced you, start with the domain that hurts most. Measure the Convention Tax — count the wiki pages, count the enforcement tests, count the hours spent on convention violations per sprint. Then build the SG+Analyzer pair for that domain.
The One-Week Pilot
The fastest way to evaluate Contention for your team is a one-week pilot:
- Day 1: Pick one domain. Count the convention documentation and enforcement code.
- Day 2-3: Build the SG+Analyzer pair using the template from this part. Start with one attribute, one generator, one analyzer rule.
- Day 4: Apply the attribute to 5-10 existing classes. Verify the generated code matches what was hand-written.
- Day 5: Delete the hand-written code that the SG now generates. Delete the enforcement test that the analyzer now replaces. Measure the reduction.
If the pilot eliminates measurable overhead, expand to the second domain. If it does not — if the convention is simple enough that the SG adds more complexity than it removes — stop. Convention is the right answer for that domain.
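The Day 2-3 step above can be sketched in a few dozen lines. The following is a minimal, illustrative skeleton for a DI-style generator like the series' `[Injectable]` example — the namespace `MyApp`, the generator name, and the `AddGeneratedServices` extension method are all assumptions for this sketch, not the series' actual implementation. It uses Roslyn's incremental generator API (`IIncrementalGenerator` with `ForAttributeWithMetadataName`, available since Roslyn 4.3):

```csharp
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

namespace MyApp.Generators;

// The attribute itself ([Injectable(Lifetime.Scoped)]) lives in a small
// runtime package that user code references; the generator only needs its
// fully qualified metadata name.
[Generator]
public sealed class InjectableGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Find every class marked [Injectable] and capture its type name
        // and the lifetime passed to the attribute constructor.
        var services = context.SyntaxProvider
            .ForAttributeWithMetadataName(
                "MyApp.InjectableAttribute",
                predicate: static (node, _) => node is ClassDeclarationSyntax,
                transform: static (ctx, _) =>
                {
                    var type = ctx.TargetSymbol.ToDisplayString();
                    var lifetime = ctx.Attributes[0].ConstructorArguments[0].Value;
                    return (Type: type, Lifetime: lifetime);
                })
            .Collect();

        // Emit one extension method registering all marked services —
        // the code that was previously hand-written or assembly-scanned.
        context.RegisterSourceOutput(services, static (spc, items) =>
        {
            var registrations = string.Join("\n", items.Select(i =>
                $"        services.Add(new ServiceDescriptor(typeof({i.Type}), typeof({i.Type}), (ServiceLifetime){i.Lifetime}));"));

            spc.AddSource("GeneratedServices.g.cs", $@"
using Microsoft.Extensions.DependencyInjection;

public static class GeneratedServiceCollectionExtensions
{{
    public static IServiceCollection AddGeneratedServices(this IServiceCollection services)
    {{
{registrations}
        return services;
    }}
}}");
        });
    }
}
```

The Day 4 verification step is then a diff: call `AddGeneratedServices()` instead of the hand-written registrations and confirm the resulting service collection is identical. The matching analyzer rule (e.g. "classes ending in `Service` must carry `[Injectable]`") is a separate `DiagnosticAnalyzer` in the same project, so the pair ships as one NuGet package.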
If you want the full picture:
- Part I: The Four Eras — the framework and the Convention Tax formula
- Part II: Dependency Injection — the simplest domain, the clearest example
- Part III: Validation — from FluentValidation to `[Validated]`
- Part IV: API Contracts — the boundary where drift becomes a breaking change
- Part V: Database Mapping — the flagship DDD example
- Part VI: Testing and Requirements — the compliance matrix
- Part VII: Architecture Enforcement — from ADRs to `[Layer]`
- Part VIII: Configuration — from `IOptions<T>` to `[StronglyTypedOptions]`
- Part IX: Error Handling — from `Result<T>` to `[MustHandle]`
- Part X: Logging and Security — two domains, one pattern
If you want the philosophical foundation:
- Don't Put the Burden on Developers — the principle that drives every decision in this series: structural fixes over discipline
- Meta-Metamodeling — the M2/M3 theory that explains why the attribute-SG-analyzer architecture is not accidental but a natural consequence of modeling layers
- Requirements as Code — the most complete implementation of the Contention pattern, where requirements, acceptance criteria, and compliance validation all live in the type system
For further reading on the patterns that Contention enables:
- Builder Pattern — Source Generator patterns in practice, showing how `[Builder]` generates immutable constructors
- DDD — Domain-Driven Design with the type system, where `[AggregateRoot]` and `[ValueObject]` drive the flagship example from Part V
- Result Pattern — the Result type that Part IX builds upon with `[MustHandle]` exhaustive matching
- Quality to Its Finest — multi-layer quality gates that complement Contention's compile-time enforcement with runtime and CI validation
- From Mud to DDD — brownfield DDD migration, where Contention provides the safety net for incremental refactoring
The Numbers That Matter
For those who skipped to the end, here are the numbers from this series:
| Metric | Convention | Contention |
|---|---|---|
| Documentation lines (across 10 domains) | ~680 | 0 (attribute IS the documentation) |
| Enforcement code lines (across 10 domains) | ~945 | 0 (analyzer IS the enforcement) |
| Total overhead lines | ~1,625 | ~0 |
| Feedback loop | Test time (minutes) or CI (minutes-to-hours) | Compile time (seconds) |
| Drift risk | High (documentation and code are separate) | Zero (attribute and generation are the same artifact) |
| Onboarding cost | Read wiki + learn conventions + memorize rules | Read attributes on existing code + follow diagnostics |
| Maintenance cost | Continuous (wiki updates, test updates, convention reviews) | One-time (SG+Analyzer pair, shipped as NuGet) |
| Violation detection | Code review (human) or CI (delayed) | IDE (instant, in-editor) |
These numbers are estimates from a real codebase with ~50 domain entities, ~200 services, and ~40 API endpoints. Your numbers will differ. The ratios will not.
The Final Principle
Don't put the burden on developers.
Don't ask them to read a wiki. Don't ask them to remember 10 naming conventions. Don't ask them to run architecture tests before pushing. Don't ask them to keep documentation in sync with code. Don't ask them to police each other in code review.
Give them an attribute. Let the Source Generator produce the code. Let the analyzer guard the boundaries. Let the type system do the work that humans forget, skip, and get wrong.
Twenty years ago, the industry asked: "What if we stopped configuring everything in XML?" Convention over Configuration was born.
Today, the question is: "What if we stopped documenting and policing invisible rules?" Contention over Convention is the answer.
Convention is an honor system with expensive policing.
Contention is a type system with free generation.
The compiler is the architect. Let it build.
This is the final part of the Contention over Convention over Configuration over Code series. Start from Part I: The Four Eras for the full argument.