
The Gotcha — To Document, First You Type

"Everyone wants auto-generated documentation. Nobody wants to type the metadata that makes it possible."


The Documentation Paradox

Every team has the same conversation:

"Can we auto-generate our documentation?"

The answer is always the same: from what? Your wiki pages are prose. Your README files are Markdown. Your architecture diagrams are hand-drawn in draw.io. There is nothing typed — nothing structured, nothing machine-readable, nothing the compiler can introspect.

You cannot auto-generate documentation from nothing. The prerequisite is a typed system — structured metadata that a tool can read, validate, and transform.
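To make "typed" concrete, here is the idea in miniature, sketched in TypeScript (the language of this site's own build pipeline, described later in this post). The names DocMeta and renderDocHeader are illustrative, not from any real tool: a wiki page is opaque prose, but a typed record gives a build step all three hooks — read, validate, transform.

```typescript
// Hypothetical sketch: typed metadata a tool can read, validate, and transform.
// A prose wiki page offers none of these hooks; a typed record offers all three.
interface DocMeta {
  name: string;           // e.g. the aggregate's class name
  boundedContext: string; // where it lives in the architecture
  description: string;    // becomes the first line of the generated docs
}

// "Validate": the compiler rejects a record with a missing or misspelled field.
// "Transform": even a trivial generator can turn the record into a doc header.
function renderDocHeader(meta: DocMeta): string {
  return `# ${meta.name} (${meta.boundedContext})\n\n${meta.description}`;
}

const order: DocMeta = {
  name: "Order",
  boundedContext: "Ordering",
  description: "Represents a customer order",
};

console.log(renderDocHeader(order));
```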

The gotcha is the loop:

Manual documentation → drifts from code
          ↓
"We need auto-generated docs"
          ↓
"From what? We have no typed metadata"
          ↓
Build typed DSLs (attributes, SGs, analyzers)
          ↓
Documentation becomes a compiler output
          ↓
Zero drift. Zero manual authoring.

To get out of the loop, you must invest in the typing. That is the cost. Everything after that is free.


You DO Need a New Language

But you don't need to invent one.

C# already provides the complete infrastructure for building domain-specific languages. Not a separate file format. Not a custom parser. C# itself, extended via its compiler infrastructure:

Attributes Declare Intent

[AttributeUsage(AttributeTargets.Class)]
public sealed class AggregateRootAttribute : Attribute
{
    public string BoundedContext { get; init; } = "";
    public string Description { get; init; } = "";
}

// Usage: a developer declares intent on their domain class
[AggregateRoot(BoundedContext = "Ordering", Description = "Represents a customer order")]
public partial class Order { ... }

Every property on that attribute is a documentation field. BoundedContext tells the architecture diagram generator where Order lives. Description becomes the first line of the generated docs. The developer wrote zero documentation — they declared intent, and the compiler does the rest.

Source Generators Produce Artifacts

[Generator]
public sealed class DddGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        var aggregates = context.SyntaxProvider
            .ForAttributeWithMetadataName(
                "Cmf.Ddd.AggregateRootAttribute",
                predicate: static (node, _) => node is ClassDeclarationSyntax,
                transform: static (ctx, _) => ExtractAggregateInfo(ctx));

        // Generate entity implementation, repository, EF config...
        context.RegisterSourceOutput(aggregates.Collect(), GenerateCode);

        // ...AND documentation
        context.RegisterSourceOutput(aggregates.Collect(), GenerateDocs);
    }
}

The same SG that generates Order.g.cs (the entity implementation) can also produce docs/Order/architecture.md (the architecture documentation). Same input. Different output. Zero additional human effort. (One caveat: RegisterSourceOutput can only add C# sources to the compilation, so non-code artifacts such as Markdown are typically written out by a companion MSBuild task or tool fed by the same extracted model.)
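The shape of that "one extraction, many outputs" pipeline can be sketched in TypeScript for brevity; AggregateInfo, emitCode, and emitDocs are illustrative names, not real generator APIs:

```typescript
// Illustrative sketch of "same input, different output": a single typed
// description of an aggregate feeds both a code emitter and a docs emitter.
interface AggregateInfo {
  name: string;
  boundedContext: string;
}

// Output 1: the (toy) entity implementation.
function emitCode(info: AggregateInfo): string {
  return `public partial class ${info.name} { /* generated */ }`;
}

// Output 2: the (toy) architecture doc, from the SAME input.
function emitDocs(info: AggregateInfo): string {
  return `# ${info.name}\n\nLives in the ${info.boundedContext} bounded context.`;
}

const order: AggregateInfo = { name: "Order", boundedContext: "Ordering" };
const outputs = new Map([
  [`${order.name}.g.cs`, emitCode(order)],
  [`docs/${order.name}/architecture.md`, emitDocs(order)],
]);
```

Adding a third artifact (a Mermaid diagram, a runbook) is one more emitter over the same AggregateInfo, never another round of manual extraction.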

Analyzers Guard Boundaries

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class DddAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor DDD001 = new(
        "DDD001",
        "Aggregate root must be partial",
        "'{0}' is marked [AggregateRoot] but is not partial — source generator cannot extend it",
        "DDD",
        DiagnosticSeverity.Error,
        isEnabledByDefault: true);

    // ...
}

// The diagnostic message itself IS documentation:
// error DDD001: 'Order' is marked [AggregateRoot] but is not partial
// — source generator cannot extend it

Every analyzer diagnostic is a documented rule. The error message explains what went wrong and why. No separate coding standards wiki needed — the compiler tells you the standard when you violate it.

Projects Compose DSLs

<!-- Each DSL is a publishable NuGet (referenced here as projects; external consumers would use <PackageReference> instead) -->
<ProjectReference Include="Cmf.Ddd.Lib" />
<ProjectReference Include="Cmf.Requirements.Lib" />
<ProjectReference Include="Cmf.Testing.Lib" />
<ProjectReference Include="Cmf.Api.Lib" />
<!-- Add the Document DSL to generate docs from ALL of the above -->
<ProjectReference Include="Cmf.Document.Lib" />

A <ProjectReference> is DSL composition. Adding Cmf.Document.Lib to your project enables documentation generation for every other DSL you reference. No configuration files. No plugin systems. Just a project reference.


Living Proof: This Website

This pattern isn't theoretical. This very website demonstrates it — in TypeScript instead of C#.

The site's build pipeline (scripts/build-static.ts) reads typed content metadata and generates everything automatically:

This Website (TypeScript) ⇔ Enterprise C# Equivalent

  • interface TocItem { title, path, tags, date } ⇔ [Feature(Title = "...", Tags = [...])] attribute
  • interface Heading { slug, text, level } ⇔ [AcceptanceCriteria("...")] abstract method
  • buildMetaTags(item) → <meta> tags ⇔ IIncrementalGenerator → .md files
  • buildJsonLd(item) → <script type="ld+json"> ⇔ DocumentStrategy<T>.Generate() → structured output
  • renderMermaid(block) → dark/light SVGs ⇔ MermaidDocOutput from [Composition] attributes
  • scrollSpyMachine → pure state machine ⇔ DiagnosticAnalyzer → pure analysis
  • IO interface → dependency injection ⇔ DocumentContext → injected into strategies
  • npm run build:static ⇔ dotnet build → SGs emit docs

The site's TocItem interface is a typed DSL concept. The build pipeline reads title, path, tags, date, headings from frontmatter YAML — and generates:

  • toc.json — hierarchical table of contents with headings and slugs
  • SEO meta tags — <title>, <meta description>, Open Graph, Twitter Card
  • JSON-LD structured data — schema.org TechArticle for each blog post
  • Mermaid SVGs — pre-rendered diagrams in dark and light variants
  • Sidebar HTML — expandable tree with scroll spy and prefetch state machines

Zero manual documentation. The types ARE the documentation source. The build IS the doc generator.
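The pattern can be sketched as follows; this is a hedged illustration using the field names from the table above (TocItem, Heading), not the site's actual scripts/build-static.ts code, and buildToc is a hypothetical helper:

```typescript
// Hedged sketch of the pattern: typed frontmatter in, toc.json-shaped data out.
interface Heading { slug: string; text: string; level: number; }
interface TocItem {
  title: string;
  path: string;
  tags: string[];
  date: string;
  headings: Heading[];
}

// One typed input can drive many outputs; here, just the table of contents.
function buildToc(items: TocItem[]) {
  return items.map((item) => ({
    title: item.title,
    path: item.path,
    headings: item.headings.map((h) => ({ slug: h.slug, text: h.text })),
  }));
}

const post: TocItem = {
  title: "The Gotcha",
  path: "/blog/the-gotcha",
  tags: ["dsl", "docs"],
  date: "2024-01-01",
  headings: [
    { slug: "the-documentation-paradox", text: "The Documentation Paradox", level: 2 },
  ],
};

const toc = JSON.stringify(buildToc([post]));
```

The meta tags, JSON-LD, and sidebar outputs are further map functions over the same TocItem array: add an output, never re-enter the data.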

This is the TypeScript proof-of-concept. The C# version — with DDD, Requirements, Testing, API, and Ops DSLs — scales it to enterprise.


The Loop in Practice

Every organization goes through the same progression:

Stage 0: No Types, Manual Docs

📁 wiki/
├── architecture.md           ← Last updated: 8 months ago
├── deployment-guide.md       ← Missing 3 steps added in Q2
├── api-reference.md          ← Describes endpoints that no longer exist
└── onboarding.md            ← 47 pages. Nobody finishes it.

Documentation drifts from code within weeks. Keeping it current is a full-time job. Not keeping it current is a source of production incidents.

Stage 1: Add ONE Attribute

[AggregateRoot]
public partial class Order { }

Now a Source Generator can:

  • Generate the entity implementation (Order.g.cs)
  • Generate the EF Core configuration (OrderConfiguration.g.cs)
  • Generate the repository interface and implementation
  • Generate an architecture diagram showing Order's relationships
  • Generate a documentation page describing Order's properties and events

One attribute. Five generated artifacts. The developer typed one line of metadata.

Stage 2: Add MORE Attributes

[AggregateRoot(BoundedContext = "Ordering")]
[ForRequirement(typeof(OrderProcessingFeature))]
public partial class Order
{
    [EntityId] public partial OrderId Id { get; }
    [Composition] public partial IReadOnlyList<OrderLine> Lines { get; }
    [DomainEvent(typeof(OrderCreatedEvent))]
    public Result Create(CustomerId customer, IReadOnlyList<OrderLineDto> lines) { ... }
}

Now the traceability matrix links Order to its requirement, its tests, its API endpoints, and its deployment steps. The architecture diagram shows compositions and domain events. The API documentation includes requirement references.

Stage 3: Own C# as Your DSL

// Every attribute is a documentation source.
// Every SG is a documentation generator.
// Every analyzer diagnostic is a documented rule.

// The compiler IS the technical writer.
// dotnet build IS the documentation pipeline.

What "Typed" Gives You for Documentation

C# Construct → Documentation Value

  • Attribute property → a documentation field (Description, BoundedContext, Severity)
  • Generic constraint → a relationship (Document<Feature<T>> where T : FeatureBase)
  • typeof() reference → a cross-reference (ForRequirement(typeof(OrderFeature)))
  • nameof() reference → a link to a specific member (Verifies(typeof(F), nameof(F.AC1)))
  • Enum value → a classification (MigrationKind.Schema, AlertSeverity.Critical)
  • Analyzer diagnostic → a documented rule with explanation and fix suggestion
  • <ProjectReference> → DSL composition: which DSLs are active in this project

Every construct has a documentation interpretation. The Document DSL (Part III) reads all of them.
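A TypeScript analog of the typeof()/nameof() rows, with linkTo as an illustrative helper: passing the class itself, rather than a string, means a rename breaks the build instead of silently breaking the docs.

```typescript
// Illustrative: a class reference as a typed cross-reference.
// Renaming OrderFeature updates (or breaks) this at compile time,
// unlike a hand-written "OrderFeature" string in a wiki page.
class OrderFeature {}

// Turn a class reference into a (toy) Markdown cross-link.
function linkTo(target: new () => object): string {
  return `[${target.name}](./${target.name}.md)`;
}

const link = linkTo(OrderFeature);
```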


The Three Layers

This series builds three layers, each on top of the previous:

Layer 1: The Infrastructure

C# compiler infrastructure — attributes, source generators, Roslyn analyzers, project references. This is the platform. It already exists.

Layer 2: Domain DSLs

Typed DSLs built on Layer 1, each published as a NuGet:

  • Dev-side (Part II): DDD, Requirements, Testing, API
  • Ops-side (Parts IV–V): Deployment, Migration, Observability, Configuration, Resilience

Layer 3: The Document DSL

A DSL that documents other DSLs (Part III). Document<T> introspects any Layer 2 DSL and generates appropriate documentation. The final DSL — the one that reads all others and produces human-readable output from machine-readable types.


DX Preview: Three Lines to Documentation

Here is the endgame — what you get when all three layers are in place:

// 1. Add attributes to your domain class
[AggregateRoot(BoundedContext = "Ordering")]
[ForRequirement(typeof(OrderProcessingFeature))]
public partial class Order { ... }

// 2. Add one project reference
// <ProjectReference Include="Cmf.Document.Lib" />

// 3. Run dotnet build
// Output:
//   docs/Order/architecture.md      ← entity diagram, composition tree
//   docs/Order/requirements.md      ← traceability matrix, AC table
//   docs/Order/api.md               ← endpoint docs with requirement links
//   docs/Order/tests.md             ← test coverage per acceptance criteria
//   docs/Order/deployment/runbook.md ← deployment steps, rollback procedures
//   docs/Order/index.md             ← cross-referenced index

// That's it. Zero configuration. The types ARE the doc source.
// The compiler IS the doc generator. dotnet build IS the doc pipeline.

The rest of this series shows how to build each layer: from the dev-side DSLs you already have (Part II), through the Document DSL that reads them (Part III) and the Ops DSLs that extend the lifecycle to production (Parts IV–VI), to the complete end-to-end example (Part VIII).