Architecture: Six Pillars vs Six DSLs
Both approaches decompose the problem into six units. Both cover requirements, testing, coding standards, and documentation. But the nature of those units is fundamentally different: one is a library of documents; the other is a set of compilers.
The Six Pillars (Spec-Driven)
The cogeet-io framework organizes knowledge into six text-based specification files:
┌───────────────────────────────────────────────────────────────────┐
│ 1. Product-Requirements-Document-Template.txt │
│ Defines: features, user stories, ACs, priorities, constraints │
│ Format: structured text with placeholders │
│ Size: ~300 fields across 15 sections │
│ Consumed by: humans and AI agents │
└───────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────────────────────────────────┐
│ 2. Specification-as-Code.txt │
│ Defines: build stages, task definitions, validation gates │
│ Format: 5 stages × 13 tasks with failure strategies │
│ Size: ~200 configuration entries │
│ Consumed by: build pipeline, AI agents │
└───────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────────────────────────────────┐
│ 3. Context-Engineering-as-Code.txt │
│ Defines: context sources, assembly strategies, validation │
│ Format: 3 strategies, 4 context sources, quality gates │
│ Size: 200+ defined structures │
│ Consumed by: AI orchestration layer │
└───────────────────────────────────────────────────────────────────┘
│
├────────────────┬────────────────┐
▼ ▼ ▼
┌──────────────┐ ┌─────────────────┐ ┌─────────────────────────────┐
│ 4. Testing │ │ 5. Documentation│ │ 6. Coding Best Practices │
│ as Code │ │ as Code │ │ as Code │
│ │ │ │ │ │
│ 15+ testing │ │ Auto-generation │ │ SOLID, DRY, KISS, YAGNI │
│ strategies │ │ Living docs │ │ Language-specific rules │
│ Metrics │ │ Quality metrics │ │ Multi-language support │
│ CI/CD gates │ │ Maintenance │ │ Rust, C#, Python, JS │
└──────────────┘ └─────────────────┘ └─────────────────────────────┘
What Each Pillar Contains
Pillar 1: Product Requirements Document — A comprehensive template with sections for functional requirements (features, user stories, acceptance criteria), non-functional requirements (performance, security, scalability), technical constraints, quality standards, and deployment strategy. It's a form you fill out. Every field has a placeholder like "primary_programming_language" or "target_operating_system".
Pillar 2: Specification as Code — Defines five build stages (setup, core implementation, integration, testing, packaging) with 13 tasks distributed across them. Each task has a type (CodeGeneration, Testing, Verification), a failure strategy (halt, sequential_debug), and dependencies on other tasks.
Pillar 3: Context Engineering as Code — The most novel pillar. Defines four context sources (codebase, documentation, specifications, runtime data), three assembly strategies (task-driven, progressive disclosure, adaptive optimization), and a validation framework with quality gates for assembled context. Includes a 78-week implementation roadmap across five phases.
Pillar 4: Testing as Code — The most detailed pillar. Defines 15+ testing strategies: unit, integration, E2E, property-based, mutation, fuzz, security, load, chaos engineering. Each strategy has principles, practices, metrics, violation patterns, and auto-fix suggestions.
Pillar 5: Documentation as Code — Defines automated documentation generation, living documentation practices, quality measurement, and maintenance workflows.
Pillar 6: Coding Best Practices — Defines SOLID and DRY principles as enforceable rules, with language-specific implementations for Rust, C#, Python, and JavaScript.
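To make Pillar 1's "form you fill out" concrete, a fragment of the template might look like the following. This is an illustrative reconstruction: only the two placeholder names quoted above come from the template itself; every other field name is an assumption.

```
PROJECT_CONTEXT:
  primary_programming_language: "{{value}}"
  target_operating_system: "{{value}}"

FUNCTIONAL_REQUIREMENTS:
  feature_name: "{{value}}"
  user_stories:
    - "As a {{role}}, I want {{capability}}, so that {{benefit}}"
  acceptance_criteria:
    - "{{criterion}}"
```

Note that nothing checks the filled-in values: a typo in a placeholder, a contradictory pair of criteria, or an empty field all pass silently.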
The Nature of a Pillar
A pillar is a text file. It describes practices, defines metrics, lists strategies, and specifies quality thresholds. It does not compile. It does not generate code. It does not prevent violations. It is a document that an AI agent reads and a CI pipeline checks against.
The relationship between pillars is referential: Pillar 1 feeds into Pillar 2, which feeds into Pillar 3, which informs Pillars 4-6. But this feeding is conceptual, not mechanical. There is no import statement, no type reference, no compiler check that Pillar 2 is consistent with Pillar 1.
The Inertness Problem
This is the deepest architectural problem with the pillar approach: text files are inert. They contain information, but they cannot act on it. A .txt file that says acceptance_criteria: ["Admin can assign roles"] is a string. It doesn't know what "Admin" is. It doesn't know what "assign" means. It doesn't know what a "role" looks like.
To make that string useful, you need one of two things:
Option A: AI interpretation (hope). Feed the text to an AI agent and hope it interprets "Admin can assign roles" correctly. The AI might interpret "Admin" as a user with the Admin role, or as any user with the ManageRoles permission, or as a hardcoded user named "Admin." The text is ambiguous, and the AI must guess. Sometimes it guesses right. Sometimes it doesn't. This is what the spec-driven framework calls "context engineering" — it's sophisticated guessing with quality gates.
Option B: Structured parsing (reinventing a DSL). Build a parser that extracts acceptance_criteria from the text, validates the structure, and generates code from it. But the moment you build a parser with validation rules, you've invented a Domain-Specific Language. You've created a grammar (the text format), a parser (the extraction logic), validation rules (the structure checks), and a code generator (the downstream tooling). You've built everything a typed DSL provides — except without a compiler, without IDE integration, without refactoring support, and without type checking. You've built a worse version of the thing you were trying to avoid.
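A minimal sketch of what Option B entails (hypothetical helper and type names, not part of either framework). Even this trivial extractor already encodes a grammar (the regex), a validation rule (the missing-block check), and a downstream contract (the returned list) — a DSL in disguise:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Hypothetical Option B parser: pulls acceptance_criteria out of a spec .txt file.
static class SpecParser
{
    public static IReadOnlyList<string> ExtractAcceptanceCriteria(string specText)
    {
        // Grammar: acceptance_criteria: ["...", "..."]
        var block = Regex.Match(specText, @"acceptance_criteria:\s*\[(?<items>[^\]]*)\]");
        if (!block.Success)
            throw new FormatException("Spec has no acceptance_criteria block."); // validation rule

        return Regex.Matches(block.Groups["items"].Value, "\"(?<c>[^\"]*)\"")
                    .Select(m => m.Groups["c"].Value)
                    .ToList();
    }
}
```

Given the string from the example above, `ExtractAcceptanceCriteria("acceptance_criteria: [\"Admin can assign roles\"]")` yields a one-element list — and still nothing in it knows what "Admin", "assign", or "roles" mean.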
This is the fundamental paradox of "Specification as Code" implemented as text files: the word "Code" promises executable, verifiable specifications. But .txt files are not code. They're text that looks like code. To make them behave like code, you either rely on AI (probabilistic) or build tooling (which is just a compiler with extra steps).
The typed specification approach sidesteps this entirely. C# IS the specification language. The compiler IS the parser. Roslyn IS the validation framework. The IDE IS the navigation tool. There is no gap between "specification format" and "executable code" because they are the same thing.
Spec-driven journey:
Text → Parser → Validator → Code Generator → Executable
↑ you build all of this ↑ ↑ or you hope AI does it ↑
Typed specification journey:
C# → Compiler → Executable
   ↑ already exists ↑
The Six DSLs (Typed Specifications)
The typed specification approach organizes knowledge into six Domain-Specific Languages, each implemented as C# attributes processed by Roslyn source generators:
┌───────────────────────────────────────────────────────────────────┐
│ M3 Meta-Metamodel (self-describing) │
│ MetaConcept · MetaProperty · MetaReference · MetaConstraint │
│ MetaInherits │
└──────────┬────────────────────────────────────────────────────────┘
│ defines
▼
┌───────────────────────────────────────────────────────────────────┐
│ M2 DSLs (six domain-specific languages) │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ DDD DSL │ │ Content │ │ Admin │ │
│ │ │ │ DSL │ │ DSL │ │
│ │ Entities │ │ Parts │ │ Modules │ │
│ │ VOs │ │ Blocks │ │ Fields │ │
│ │ Aggregates│ │ Streams │ │ Filters │ │
│ │ CQRS │ │ │ │ Actions │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────────┐ │
│ │ Pages │ │ Workflow │ │ Requirements │ │
│ │ DSL │ │ DSL │ │ DSL │ │
│ │ │ │ │ │ │ │
│ │ Widgets │ │ Stages │ │ Epics │ │
│ │ Zones │ │ Transitions│ │ Features │ │
│ │ Layouts │ │ Guards │ │ Stories │ │
│ │ │ │ Locales │ │ Tests │ │
│ └──────────┘ └──────────┘ └──────────────┘ │
└──────────┬────────────────────────────────────────────────────────┘
│ generates
▼
┌───────────────────────────────────────────────────────────────────┐
│ Five-Stage Generation Pipeline │
│ │
│ Stage 0: Metamodel Registration → MetamodelRegistry.g.cs │
│ Stage 1: DSL Validation + Collection → Compiler diagnostics │
│ Stage 2: Core Generation → Entities, CQRS, EF Core │
│ Stage 3: Cross-Cutting → Admin UI, REST/GraphQL, workflows │
│ Stage 4: Traceability → Requirements matrix, compliance reports │
└───────────────────────────────────────────────────────────────────┘
What Each DSL Contains
DSL 1: DDD — Attributes like [AggregateRoot], [ValueObject], [Composition], [Invariant]. The developer annotates domain classes; the source generator produces entity implementations, builders, EF Core configurations, CQRS handlers, repositories. From ~120 lines of attributes, it generates ~2,000 lines of correct, type-safe code.
DSL 2: Content — Attributes like [ContentPart], [ContentBlock], [StreamField]. Horizontal composition (parts) and vertical composition (blocks) for content management. Generated: content type definitions, editor components, storage configurations.
DSL 3: Admin — Attributes like [AdminModule], [AdminField], [AdminFilter], [AdminAction]. Generates complete Blazor CRUD UI from entity definitions. One attribute per field → full admin interface.
DSL 4: Pages — Attributes like [PageWidget], [WidgetConfig]. Compile-time widget definition with runtime page composition. Pages, layouts, zones are database entities; widgets are compiled types.
DSL 5: Workflow — Attributes like [Workflow], [Stage], [Transition], [RequiresRole], [ForEachLocale], [ScheduledTransition]. Generates complete state machine with guards, locale tracking, scheduled transitions, and audit trail.
DSL 6: Requirements — The feature tracking DSL. Features as abstract records, ACs as abstract methods, specifications as interfaces, implementations via [ForRequirement], tests via [Verifies]. Four analyzer families (REQ1xx-REQ4xx) enforce the chain at compile time.
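Under these conventions, the full chain for one feature might look like this. The attribute names come from the article; the member names and signatures are illustrative assumptions:

```csharp
// Illustrative chain: feature record → specification interface → verified test.
public abstract record UserRolesFeature : Feature
{
    // One abstract method per acceptance criterion: no implementation, no build.
    public abstract AcceptanceCriterionResult AdminCanAssignRole(
        UserId admin, UserId target, RoleId role);
}

[ForRequirement(typeof(UserRolesFeature))]
public interface IUserRolesSpec
{
    Result AssignRole(UserId admin, UserId target, RoleId role);
}

public sealed class UserRolesFeatureTests
{
    [Verifies(nameof(UserRolesFeature.AdminCanAssignRole))]
    public void Admin_can_assign_role_to_user()
    {
        // Without this attribute, a REQ3xx analyzer reports the AC as untested.
    }
}
```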
The Nature of a DSL
A DSL is a compiler extension. It defines attributes that the source generator reads, validates, and uses to produce code. An invalid model doesn't compile. A missing implementation produces a compiler error. A missing test produces a compiler warning.
The relationship between DSLs is structural: all DSLs are registered in the M3 metamodel registry. The Requirements DSL references types defined by the DDD DSL (your domain entities are the inputs to your acceptance criteria). The Workflow DSL can apply to entities defined by the DDD DSL. The Admin DSL generates UI for entities across all DSLs.
These relationships are enforced by the type system. If the DDD DSL defines an Order entity and the Requirements DSL defines an OrderProcessingFeature with an AC that takes an OrderId, the compiler verifies that OrderId exists and has the expected shape.
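Concretely, an acceptance-criterion signature that names a domain type is a compile-checked contract (a sketch; the method name is an assumption):

```csharp
// If OrderId is renamed or removed in the SharedKernel project,
// this file fails to compile — the specification cannot drift silently.
public abstract record OrderProcessingFeature : Feature
{
    public abstract AcceptanceCriterionResult OrderTotalIsPositive(OrderId id);
}
```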
How Knowledge is Organized
| Aspect | Spec-Driven (Pillars) | Typed Specifications (DSLs) |
|---|---|---|
| Unit of organization | Text file (~5,000 lines each) | C# attribute set + source generator |
| How units relate | Conceptual references ("see PRD section 3") | Type references (typeof(), nameof(), generic constraints) |
| Consistency check | Manual review or CI script | Compiler type-checking |
| Discovery | Read the document tree | IDE autocomplete + Ctrl+Click |
| Extension | Add a new text file section | Add a new [MetaConcept] attribute |
| Versioning | Git diff on text files | Git diff on C# files (same tooling, richer semantics) |
How Each Handles Boundaries
Spec-driven boundaries are sections in documents. The PRD has a "Functional Requirements" section and a "Non-Functional Requirements" section. The Testing spec has a "Unit Testing" section and an "Integration Testing" section. These boundaries are enforced by document structure — by human readers who know which section to look at.
Typed specification boundaries are project references and type constraints. The Requirements project cannot reference the Domain project. The Specifications project can only reference Requirements and SharedKernel. The Domain project can only reference Specifications. These boundaries are enforced by the compiler — a project reference violation is a build error.
// This is impossible — the Requirements project has no reference to Domain
public abstract record OrderFeature : Feature
{
// OrderService lives in the Domain project — compile error
public abstract AcceptanceCriterionResult Test(OrderService service);
}
How Each Evolves
Adding a new concern to spec-driven: Write a new .txt file. Add a section to the PRD template pointing to it. Update the context engineering specification to include the new document in context assembly. Update the AI agent prompt to mention the new document. This is four changes across four files, with no compiler check that you've done all four.
Adding a new concern to typed specifications: Create a new attribute class annotated with [MetaConcept]. Write a source generator that processes it. The M3 metamodel registry automatically includes it (Stage 0 handles this). All other DSLs can reference the new concept via type constraints. This is two changes (attribute + generator), and the metamodel self-registration ensures consistency.
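Under the article's M3 conventions, the first of those two changes can be as small as a single attribute class. The `[MetaConcept]` and `[MetaProperty]` parameter shapes are assumptions — the article names the attributes but not their signatures:

```csharp
// Hypothetical new concern: marking entities as auditable,
// self-registered into the M3 metamodel via [MetaConcept].
[MetaConcept]
[AttributeUsage(AttributeTargets.Class)]
public sealed class AuditableAttribute : Attribute
{
    [MetaProperty]
    public bool IncludeReads { get; init; }
}
```

The second change, the source generator that processes `[Auditable]`, is where the real work lives; but the registration step itself requires no edits to any existing DSL.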
The Pillar-DSL Mapping
The six pillars and six DSLs don't map 1:1, but there's a rough correspondence:
| Spec-Driven Pillar | Typed Specification DSL | Key Difference |
|---|---|---|
| Product Requirements Document | Requirements DSL | Template with placeholders vs abstract records with typed ACs |
| Specification as Code | DDD DSL + five-stage pipeline | Build stage descriptions vs compiler-executed generation pipeline |
| Context Engineering as Code | M3 Meta-Metamodel + MetamodelRegistry | Runtime context assembly vs compile-time metamodel registration |
| Testing as Code | REQ3xx analyzers + [Verifies] + [TestsFor] | Strategy documents with metrics vs compiler-enforced test linkage |
| Documentation as Code | Source-generated traceability matrix + Ctrl+Click | Auto-generation from templates vs type system IS the documentation |
| Coding Best Practices | Roslyn Analyzers + [Invariant] + [Validated] | Rules in text vs rules in the compiler |
The most significant mapping gap: Context Engineering as Code has no direct equivalent in typed specifications. Typed specifications don't need a context assembly strategy because the type system IS the context. When an AI agent writes code within the type system, it doesn't need a document telling it what the feature requirements are — the abstract methods on the feature record ARE the requirements.
Conversely: the M3 Meta-Metamodel has no equivalent in spec-driven. There's no mechanism for specs to describe themselves, for new specification types to self-register, or for the specification framework to validate its own consistency.
The Depth Question
One thing the spec-driven framework does very well: breadth of coverage. The Testing-as-Code specification alone covers unit testing, integration testing, E2E testing, property-based testing, mutation testing, fuzz testing, security testing, load testing, chaos engineering, test data management, CI/CD integration, parallelization, and reporting. Each with principles, practices, metrics, violation patterns, and auto-fix suggestions. It's a comprehensive encyclopedia of testing knowledge.
The typed specification approach covers far fewer testing strategies explicitly. It doesn't have a section on chaos engineering or protocol fuzzing. What it does is make the link between your specific requirements and your specific tests compiler-enforced. It doesn't tell you to write property-based tests — but if you write a test and annotate it with [Verifies], the compiler knows which AC it covers.
This is a genuine tradeoff:
- Spec-driven gives you a checklist: "Here are 15 testing strategies you should consider." That's valuable for teams that don't know what they don't know.
- Typed specifications give you enforcement: "Here are the ACs you haven't tested yet." That's valuable for teams that know what to do but struggle with coverage.
A team that needs guidance will benefit more from the spec-driven checklist. A team that needs enforcement will benefit more from the typed approach. Part V explores this in detail.
The Repository Structure
One final structural comparison. Here's what each approach looks like on disk:
Spec-driven:
ai-development-specifications/
├── README.md
├── LICENSE
├── Product-Requirements-Document-Template.txt
├── Specification-as-Code.txt
├── Context-Engineering-as-Code.txt
├── Testing-as-Code.txt
├── Documentation-as-Code.txt
└── Coding-Best-Practices-as-Code.txt
Seven text files. No code. No build. No tests. The specifications are the deliverable.
Typed specifications:
MyApp.sln
├── src/
│ ├── MyApp.Requirements/
│ │ ├── Epics/
│ │ │ └── PlatformScalabilityEpic.cs
│ │ ├── Features/
│ │ │ ├── UserRolesFeature.cs
│ │ │ ├── OrderProcessingFeature.cs
│ │ │ └── JwtAuthFeature.cs
│ │ ├── Stories/
│ │ │ ├── AssignRoleStory.cs
│ │ │ └── RevokeRoleStory.cs
│ │ └── Base/
│ │ ├── RequirementMetadata.cs
│ │ ├── AcceptanceCriterionResult.cs
│ │ └── Hierarchy.cs
│ ├── MyApp.SharedKernel/
│ │ ├── ValueTypes/
│ │ │ ├── UserId.cs
│ │ │ ├── RoleId.cs
│ │ │ └── Email.cs
│ │ └── Results/
│ │ └── Result.cs
│ ├── MyApp.Specifications/
│ │ ├── IUserRolesSpec.cs
│ │ ├── IOrderProcessingSpec.cs
│ │ └── IJwtAuthSpec.cs
│ ├── MyApp.Domain/
│ │ ├── AuthorizationService.cs
│ │ ├── OrderService.cs
│ │ └── JwtService.cs
│ └── MyApp.Api/
│ ├── Controllers/
│ ├── Startup.cs
│ └── Program.cs
├── test/
│ └── MyApp.Tests/
│ ├── UserRolesFeatureTests.cs
│ ├── OrderProcessingFeatureTests.cs
│ └── JwtAuthFeatureTests.cs
└── tools/
└── MyApp.Requirements.Analyzers/
├── RequirementCoverageAnalyzer.cs
├── SpecImplementationAnalyzer.cs
├── TestCoverageAnalyzer.cs
        └── TraceabilityMatrixGenerator.cs
A full .NET solution with multiple projects, type-safe references between them, and custom analyzers. The specifications are compiled code.
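The analyzers under tools/ are ordinary Roslyn diagnostic analyzers. A skeleton of one might look like this — the diagnostic ID, title, and message are assumptions (chosen to match the REQ3xx family named earlier), while the Roslyn API calls are real:

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

// Skeleton of a coverage analyzer; the matching logic itself is elided.
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class RequirementCoverageAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "REQ301",
        title: "Acceptance criterion has no verifying test",
        messageFormat: "No test is annotated with [Verifies] for AC '{0}'",
        category: "Requirements",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        // A real implementation would walk feature records and match
        // [Verifies] attributes against their abstract AC methods.
        context.RegisterSymbolAction(_ => { }, SymbolKind.Method);
    }
}
```

Because these are compiler plugins, the "document maintenance" of the spec-driven world becomes build output: an uncovered AC is a warning in the error list, not a stale row in a tracking sheet.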
Summary
| Dimension | Spec-Driven | Typed Specifications |
|---|---|---|
| Architecture | Document library | Compiler extension suite |
| Units | 6 text files | 6 DSLs + M3 meta-metamodel |
| Boundaries | Document sections | Project references + type constraints |
| Consistency | Human review | Compiler type-checking |
| Evolution | Add sections to documents | Add [MetaConcept] attributes |
| Breadth | Comprehensive (15+ testing strategies) | Focused (compiler-enforced chains) |
| Depth | Wide but shallow enforcement | Narrow but deep enforcement |
| Setup cost | Low (fill in templates) | High (build source generators, analyzers) |
| Ongoing cost | Document maintenance | Near-zero (types self-maintain) |
Part III dives into the most consequential difference: how each approach handles requirements — the beating heart of any specification system.
The Code Generation Question
Both approaches generate code. But the nature of the input, the generation mechanism, and the guarantees on the output are fundamentally different.
Spec-Driven Code Generation: Text In, Code Out
In the spec-driven approach, an AI agent reads a specification document and generates implementation code. The input is a text description; the output is code that hopefully matches.
Here's an Order aggregate defined in the PRD:
DEFINE_FEATURE(order_processing)
description: "Order management for e-commerce platform"
entities:
- name: "Order"
type: "AggregateRoot"
properties:
- name: "CustomerId"
type: "string"
required: true
- name: "Lines"
type: "list<OrderLine>"
required: true
- name: "Status"
type: "enum(Draft, Placed, Shipped, Delivered, Cancelled)"
- name: "ShippingAddress"
type: "Address"
required: true
invariants:
- "Order must have at least one line"
- "Total must be positive"
- "Cannot ship to invalid address"
commands:
- "PlaceOrder"
- "AddLine"
- "RemoveLine"
- "CancelOrder"
That's roughly 25 lines of text. The AI agent reads this and generates implementation code. The result might be correct. It might have subtle bugs. The developer reviews it and hopes to catch the gaps.
What does the AI generate? Something like this (abbreviated):
public class Order
{
public string CustomerId { get; set; }
public List<OrderLine> Lines { get; set; } = new();
public OrderStatus Status { get; set; }
public Address ShippingAddress { get; set; }
public void PlaceOrder() { /* AI's interpretation */ }
public void AddLine(OrderLine line) { /* AI's interpretation */ }
public void RemoveLine(int lineId) { /* AI's interpretation */ }
public void CancelOrder() { /* AI's interpretation */ }
}
Notice: CustomerId is a string, not a strongly typed value object. Lines is a List<T> with a public setter. ShippingAddress is mutable. The invariants are comments, not enforced constraints. The commands have signatures that the AI guessed from the names. None of this is wrong per se — it's one valid interpretation. But it's one of many possible interpretations, and the spec doesn't constrain which one the AI picks.
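For contrast, the strongly typed alternative to that string-typed CustomerId is a small value object. This is a sketch; the validation rule is an assumption:

```csharp
// A CustomerId that cannot be confused with an arbitrary string:
// the type system rejects passing an OrderId or a raw string in its place.
public readonly record struct CustomerId
{
    public string Value { get; }

    public CustomerId(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("CustomerId must be non-empty.", nameof(value));
        Value = value;
    }

    public override string ToString() => Value;
}
```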
Typed Code Generation: Attributes In, Guaranteed Code Out
In the typed approach, the same Order aggregate is defined with attributes:
[AggregateRoot]
public partial record Order
{
[Required]
public CustomerId CustomerId { get; init; }
[Composition(minItems: 1)]
public IReadOnlyList<OrderLine> Lines { get; init; } = Array.Empty<OrderLine>();
public OrderStatus Status { get; init; } = OrderStatus.Draft;
[Required]
public Address ShippingAddress { get; init; }
[Invariant]
private bool HasAtLeastOneLine() => Lines.Count > 0;
[Invariant]
private bool TotalIsPositive() => Lines.Aggregate(Money.Zero, (sum, l) => sum + l.Total) > Money.Zero;
[Invariant]
private bool ShippingAddressIsValid() => ShippingAddress.IsValid();
}
That's roughly 20 lines of C#. The Roslyn source generator reads these attributes and produces deterministic output. There is no interpretation, no guessing, no "hope it's right." The generator produces exactly the code the attributes specify.
Here's what the source generator produces from those ~20 lines of input (lightly abbreviated; the remaining command handlers follow the same pattern):
// ========================================================================
// Auto-generated by DDD Source Generator — Stage 2
// Source: Order (AggregateRoot)
// DO NOT MODIFY — regenerated on every build
// ========================================================================
// --- Order.Builder.g.cs ---
public partial record Order
{
public sealed class Builder
{
private CustomerId _customerId;
private IReadOnlyList<OrderLine> _lines = Array.Empty<OrderLine>();
private OrderStatus _status = OrderStatus.Draft;
private Address _shippingAddress;
private bool _customerIdSet;
private bool _shippingAddressSet;
public Builder WithCustomerId(CustomerId value)
{
_customerId = value;
_customerIdSet = true;
return this;
}
public Builder WithLines(IReadOnlyList<OrderLine> value)
{
_lines = value ?? throw new ArgumentNullException(nameof(value));
return this;
}
public Builder AddLine(OrderLine line)
{
var list = new List<OrderLine>(_lines) { line };
_lines = list.AsReadOnly();
return this;
}
public Builder WithStatus(OrderStatus value)
{
_status = value;
return this;
}
public Builder WithShippingAddress(Address value)
{
_shippingAddress = value;
_shippingAddressSet = true;
return this;
}
public Result<Order, ValidationError> Build()
{
var errors = new List<ValidationError>();
if (!_customerIdSet)
errors.Add(new ValidationError(
nameof(CustomerId), "CustomerId is required"));
if (!_shippingAddressSet)
errors.Add(new ValidationError(
nameof(ShippingAddress), "ShippingAddress is required"));
if (errors.Count > 0)
return Result<Order, ValidationError>.Failure(errors);
var candidate = new Order
{
CustomerId = _customerId,
Lines = _lines,
Status = _status,
ShippingAddress = _shippingAddress
};
// Run invariants
if (!candidate.HasAtLeastOneLine())
errors.Add(new ValidationError(
"Invariant", "Order must have at least one line"));
if (!candidate.TotalIsPositive())
errors.Add(new ValidationError(
"Invariant", "Total must be positive"));
if (!candidate.ShippingAddressIsValid())
errors.Add(new ValidationError(
"Invariant", "Shipping address must be valid"));
if (errors.Count > 0)
return Result<Order, ValidationError>.Failure(errors);
return Result<Order, ValidationError>.Success(candidate);
}
}
public static Builder CreateBuilder() => new();
}
// --- Order.EfCoreConfiguration.g.cs ---
public sealed class OrderEntityTypeConfiguration
: IEntityTypeConfiguration<Order>
{
public void Configure(EntityTypeBuilder<Order> builder)
{
builder.ToTable("Orders");
builder.HasKey(e => e.Id);
builder.Property(e => e.CustomerId)
.HasConversion(
v => v.Value,
v => new CustomerId(v))
.IsRequired();
builder.OwnsMany(e => e.Lines, lineBuilder =>
{
lineBuilder.ToTable("OrderLines");
lineBuilder.WithOwner().HasForeignKey("OrderId");
lineBuilder.Property(l => l.ProductId)
.HasConversion(v => v.Value, v => new ProductId(v))
.IsRequired();
lineBuilder.Property(l => l.Quantity).IsRequired();
lineBuilder.Property(l => l.UnitPrice)
.HasConversion(v => v.Amount, v => Money.From(v))
.IsRequired();
});
builder.Property(e => e.Status)
.HasConversion<string>()
.IsRequired();
builder.OwnsOne(e => e.ShippingAddress, addressBuilder =>
{
addressBuilder.Property(a => a.Street).IsRequired();
addressBuilder.Property(a => a.City).IsRequired();
addressBuilder.Property(a => a.PostalCode).IsRequired();
addressBuilder.Property(a => a.Country).IsRequired();
});
}
}
// --- Order.Commands.g.cs ---
public sealed record PlaceOrderCommand(
CustomerId CustomerId,
IReadOnlyList<OrderLineDto> Lines,
Address ShippingAddress) : ICommand<Result<OrderId, DomainException>>;
public sealed record AddLineCommand(
OrderId OrderId,
ProductId ProductId,
int Quantity,
Money UnitPrice) : ICommand<Result<Unit, DomainException>>;
public sealed record RemoveLineCommand(
OrderId OrderId,
OrderLineId LineId) : ICommand<Result<Unit, DomainException>>;
public sealed record CancelOrderCommand(
OrderId OrderId,
string Reason) : ICommand<Result<Unit, DomainException>>;
// --- Order.CommandHandlers.g.cs ---
public sealed class PlaceOrderCommandHandler
: ICommandHandler<PlaceOrderCommand, Result<OrderId, DomainException>>
{
private readonly IOrderRepository _repository;
private readonly IUnitOfWork _unitOfWork;
public PlaceOrderCommandHandler(
IOrderRepository repository,
IUnitOfWork unitOfWork)
{
_repository = repository;
_unitOfWork = unitOfWork;
}
public async Task<Result<OrderId, DomainException>> Handle(
PlaceOrderCommand command,
CancellationToken ct)
{
var buildResult = Order.CreateBuilder()
.WithCustomerId(command.CustomerId)
.WithLines(command.Lines.Select(l => l.ToOrderLine()).ToList().AsReadOnly())
.WithShippingAddress(command.ShippingAddress)
.WithStatus(OrderStatus.Placed)
.Build();
if (!buildResult.IsSuccess)
return Result<OrderId, DomainException>.Failure(
new InvalidOrderException(buildResult.Errors));
var order = buildResult.Value;
await _repository.Add(order, ct);
await _unitOfWork.Commit(ct);
return Result<OrderId, DomainException>.Success(order.Id);
}
}
// ... (AddLineCommandHandler, RemoveLineCommandHandler, CancelOrderCommandHandler follow the same pattern)
// --- Order.Repository.g.cs ---
public interface IOrderRepository
{
Task<Order?> FindById(OrderId id, CancellationToken ct);
Task Add(Order order, CancellationToken ct);
Task Update(Order order, CancellationToken ct);
}
public sealed class OrderRepository : IOrderRepository
{
private readonly AppDbContext _context;
public OrderRepository(AppDbContext context) => _context = context;
public async Task<Order?> FindById(OrderId id, CancellationToken ct) =>
await _context.Orders
.Include(o => o.Lines)
.FirstOrDefaultAsync(o => o.Id == id, ct);
public async Task Add(Order order, CancellationToken ct) =>
await _context.Orders.AddAsync(order, ct);
public Task Update(Order order, CancellationToken ct)
{
// EF Core's Update is synchronous; returning a completed task
// avoids an async method with no await (compiler warning CS1998).
_context.Orders.Update(order);
return Task.CompletedTask;
}
}
// --- Order.DiRegistration.g.cs ---
public static class OrderDiRegistration
{
public static IServiceCollection AddOrderAggregate(
this IServiceCollection services)
{
services.AddScoped<IOrderRepository, OrderRepository>();
services.AddScoped<ICommandHandler<PlaceOrderCommand,
Result<OrderId, DomainException>>, PlaceOrderCommandHandler>();
services.AddScoped<ICommandHandler<AddLineCommand,
Result<Unit, DomainException>>, AddLineCommandHandler>();
services.AddScoped<ICommandHandler<RemoveLineCommand,
Result<Unit, DomainException>>, RemoveLineCommandHandler>();
services.AddScoped<ICommandHandler<CancelOrderCommand,
Result<Unit, DomainException>>, CancelOrderCommandHandler>();
return services;
}
}

That is approximately 230 lines of generated code from 20 lines of attributed C#. Every line is deterministic — the same input produces the same output, every time, on every machine. The builder enforces invariants. The EF Core configuration maps value objects correctly. The command handlers follow the CQRS pattern. The DI registration wires everything up.
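Throughout the generated code, Build() and the command handlers return a Result<TValue, TError>, but the type itself is never shown. Here is a minimal sketch of the shape that usage implies (Success/Failure factories, IsSuccess, Value, Errors, plus the single-error Failure overload PlaceOrderCommandHandler relies on). This is inferred from usage; the framework's actual Result type may differ.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch, inferred from how the generated code consumes the
// type; the real framework type may differ.
public readonly struct Result<TValue, TError>
{
    private readonly TValue? _value;

    public bool IsSuccess { get; }
    public IReadOnlyList<TError> Errors { get; }

    // Accessing Value on a failed result is a programming error.
    public TValue Value => IsSuccess
        ? _value!
        : throw new InvalidOperationException("Failed result has no value.");

    private Result(bool isSuccess, TValue? value, IReadOnlyList<TError> errors)
    {
        IsSuccess = isSuccess;
        _value = value;
        Errors = errors;
    }

    public static Result<TValue, TError> Success(TValue value) =>
        new(true, value, Array.Empty<TError>());

    public static Result<TValue, TError> Failure(IReadOnlyList<TError> errors) =>
        new(false, default, errors);

    // Single-error overload, as used by the command handlers.
    public static Result<TValue, TError> Failure(TError error) =>
        new(false, default, new[] { error });
}
```

The struct is immutable by construction, which is why the builder can hand it across layers without defensive copying.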
The Comparison
| Dimension | Spec-Driven (AI generates) | Typed (Roslyn generates) |
|---|---|---|
| Input | ~25 lines of text | ~20 lines of attributed C# |
| Output | ~50 lines of code (AI's interpretation) | ~230 lines of code (deterministic) |
| Invariants | Comments or implementation-dependent | Compiled into builder validation |
| Value objects | AI might use string instead of CustomerId | Guaranteed by attribute + type |
| EF Core mapping | AI must guess conventions | Generated from attribute metadata |
| CQRS pattern | AI must know the pattern | Generator encodes the pattern |
| DI registration | AI must know the container | Generator produces registration code |
| Consistency across team | Varies by AI prompt and model | Identical output for identical input |
| Reproducibility | Non-deterministic | Deterministic |
The spec-driven approach generates code that is probably right. The typed approach generates code that is provably right — because the generator is a compiler extension, and compilers don't guess.
The Extensibility Question
How does each approach handle adding new concepts to the specification system? This reveals the architectural flexibility of each design.
Spec-Driven Extensibility: Add a Section
To add a new concept — say, an "Audit" concern — to the spec-driven framework, you:
- Add an "Audit" section to the PRD template:
AUDIT_REQUIREMENTS:
audit_trail: true
audit_events:
- entity_created
- entity_modified
- entity_deleted
- permission_changed
retention_period: "7 years"
storage: "append-only log"
compliance: ["SOX", "HIPAA"]
- Add an "Audit Testing" section to the Testing spec:
DEFINE_STRATEGY(audit_testing)
Category: Compliance
Principles:
- "Every auditable action produces exactly one audit event"
- "Audit events are immutable after creation"
- "Audit log must survive entity deletion"
Practices:
- Verify audit event creation for each auditable operation
- Verify audit event immutability
- Verify audit retention across deletion cascades
Metrics:
- audit_coverage: percentage of auditable operations with audit events
- audit_integrity: percentage of audit events that match source operations
- Update the Context Engineering spec to include audit documents in context assembly.
- Update the Coding Practices spec with audit-specific patterns.
That's four changes across four files. The effort is moderate — mostly writing structured text. But there is no validation that you've made all four changes, that the sections reference each other consistently, or that the audit events in the PRD match the testing strategies in the Testing spec. A developer could add the PRD section and forget the testing section, and no tooling would catch the gap.
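Nothing prevents a team from writing such a check themselves; the point is that the framework supplies none. Even a crude keyword lint (sketched below with invented names, purely for illustration) would flag a PRD-only change. Anything stronger, such as matching audit events in the PRD against testing strategies, would require parsing each file's ad-hoc text format.

```csharp
using System;
using System.Linq;

// Hypothetical lint, invented for illustration; the spec-driven framework
// itself ships nothing like it. Given the raw text of each specification
// file, report which files never mention the new concern at all.
public static class SpecLint
{
    public static string[] MissingSections(
        (string Name, string Text)[] specs, string marker) =>
        specs
            .Where(s => !s.Text.Contains(marker, StringComparison.OrdinalIgnoreCase))
            .Select(s => s.Name)
            .ToArray();
}
```

A developer who added AUDIT_REQUIREMENTS to the PRD but forgot the other three files would get "Testing", "Context", "Coding" back from this check, instead of silence.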
Typed Extensibility: Add a MetaConcept
To add the same "Audit" concept to the typed specification system, you:
Step 1: Define the new M2 concept attributes
namespace Cmf.Audit.Lib;
/// <summary>
/// Marks an entity as auditable. The source generator will produce
/// audit event types, an audit logger, and EF Core configuration
/// for the audit trail.
/// </summary>
[MetaConcept(
Name = "Auditable",
Category = MetaConceptCategory.CrossCutting,
Description = "Marks an entity for audit trail generation")]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public sealed class AuditableAttribute : Attribute
{
/// <summary>
/// Which operations generate audit events.
/// Default: all CRUD operations.
/// </summary>
public AuditOperations Operations { get; init; } = AuditOperations.All;
/// <summary>
/// Retention period for audit records.
/// </summary>
public string RetentionPeriod { get; init; } = "7 years";
/// <summary>
/// Compliance frameworks requiring this audit trail.
/// </summary>
public string[] ComplianceFrameworks { get; init; } = Array.Empty<string>();
}
[Flags]
public enum AuditOperations
{
None = 0,
Create = 1,
Read = 2,
Update = 4,
Delete = 8,
PermissionChange = 16,
All = Create | Read | Update | Delete | PermissionChange
}
/// <summary>
/// Marks a specific property as requiring field-level audit
/// (captures old value and new value on change).
/// </summary>
[MetaConcept(
Name = "AuditedField",
Category = MetaConceptCategory.CrossCutting,
Description = "Field-level audit tracking")]
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false)]
public sealed class AuditedFieldAttribute : Attribute
{
public bool CaptureOldValue { get; init; } = true;
public bool CaptureNewValue { get; init; } = true;
public bool Sensitive { get; init; } = false;
}

Step 2: Write the source generator
namespace Cmf.Audit.Generators;
[Generator]
public sealed class AuditGenerator : IIncrementalGenerator
{
public void Initialize(IncrementalGeneratorInitializationContext context)
{
var auditableTypes = context.SyntaxProvider
.ForAttributeWithMetadataName(
"Cmf.Audit.Lib.AuditableAttribute",
predicate: static (node, _) => node is ClassDeclarationSyntax,
transform: static (ctx, _) => ExtractAuditMetadata(ctx));
context.RegisterSourceOutput(
auditableTypes,
static (spc, metadata) => GenerateAuditCode(spc, metadata));
}
private static void GenerateAuditCode(
SourceProductionContext context, AuditMetadata metadata)
{
// Generates:
// 1. {Entity}AuditEvent record (immutable, with timestamp, actor, operation)
// 2. {Entity}AuditLogger (logs to IAuditStore)
// 3. {Entity}AuditEventConfiguration (EF Core, append-only table)
// 4. Interceptor that auto-logs on SaveChanges
// 5. DI registration for audit infrastructure
}
}

Step 3: Use it
[AggregateRoot]
[Auditable(
Operations = AuditOperations.All,
RetentionPeriod = "7 years",
ComplianceFrameworks = new[] { "SOX", "HIPAA" })]
public partial record Order
{
[Required]
public CustomerId CustomerId { get; init; }
[Composition(minItems: 1)]
public IReadOnlyList<OrderLine> Lines { get; init; }
[AuditedField(Sensitive = false)]
public OrderStatus Status { get; init; }
[AuditedField(CaptureOldValue = true, CaptureNewValue = true)]
public Address ShippingAddress { get; init; }
}

The moment this compiles, the M3 meta-metamodel registry auto-discovers the new [Auditable] and [AuditedField] concepts via the [MetaConcept] annotations:
// Auto-generated by Stage 0: MetamodelRegistry.g.cs
public static partial class MetamodelRegistry
{
static MetamodelRegistry()
{
// Existing DSL registrations
Register<AggregateRootAttribute>(MetaConceptCategory.DDD);
Register<ValueObjectAttribute>(MetaConceptCategory.DDD);
Register<ContentPartAttribute>(MetaConceptCategory.Content);
Register<AdminModuleAttribute>(MetaConceptCategory.Admin);
Register<WorkflowAttribute>(MetaConceptCategory.Workflow);
Register<FeatureAttribute>(MetaConceptCategory.Requirements);
// Auto-discovered: new Audit DSL concepts
Register<AuditableAttribute>(MetaConceptCategory.CrossCutting);
Register<AuditedFieldAttribute>(MetaConceptCategory.CrossCutting);
}
}

No manual registration. No updating a separate document. No context assembly configuration. The [MetaConcept] attribute on the new attribute class is sufficient — the Stage 0 generator scans for all types annotated with [MetaConcept] and registers them automatically.
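The generator's output is elided above, described only by the comments in GenerateAuditCode. For orientation, a plausible shape for its first artifact, the {Entity}AuditEvent record for Order, might look like the following. The names and layout are illustrative guesses consistent with those comments (immutable, with timestamp, actor, operation), not the generator's literal output.

```csharp
using System;

// Illustrative sketch only; the real emitted code may differ.
// One row per audited operation, stored in an append-only table.
public sealed record OrderAuditEvent(
    Guid EventId,               // append-only primary key
    Guid OrderId,               // the audited aggregate instance
    string Operation,           // "Create", "Update", "Delete", ...
    string Actor,               // who performed the operation
    DateTimeOffset OccurredAt,  // when it happened
    string? FieldName,          // set for [AuditedField] changes
    string? OldValue,           // captured when CaptureOldValue = true
    string? NewValue);          // captured when CaptureNewValue = true
```

Because it is a C# record, immutability after creation (one of the audit principles in the spec-driven Testing section) falls out of the language: mutation is only possible via with-expressions, which produce a new copy.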
The Extensibility Comparison
| Aspect | Spec-Driven | Typed Specifications |
|---|---|---|
| Steps to add new concept | 4 (PRD + Testing + Context + Coding sections) | 2 (attribute library + source generator) |
| Cross-reference validation | None (manual consistency) | Automatic (M3 registry scans [MetaConcept]) |
| IDE discovery | Read the document | Autocomplete shows [Auditable] |
| Usage validation | None (text reference) | Compiler error if attribute used incorrectly |
| Generated output | None (text describes desired output) | Deterministic code from attributes |
| Other DSLs aware? | Only if they reference the new section | Yes — MetamodelRegistry includes all concepts |
| Rollback | Delete sections (hope you find all references) | Delete NuGet package (compiler shows all breakages) |
The extensibility story reveals the architectural difference at its sharpest. The spec-driven approach extends by adding text to documents — simple, but unvalidated. The typed approach extends by adding types to the compiler — more work upfront, but self-registering, self-validating, and self-documenting from the moment it compiles. (For a concrete proof of this extensibility, see Auto-Documentation from a Typed System, where a generic Document<T> DSL introspects any other DSL — adding documentation generation to the entire ecosystem without modifying any existing DSL.)