Typed Specifications vs. Everything Else
How Eight Approaches Link Requirements to Tests
Every team solves the same problem: how do you know which tests verify which features? Eight approaches. One comparison. Zero silver bullets.
The Universal Problem
You have features. You have tests. Somewhere between the two, there's a mapping — and in most projects, that mapping lives in someone's head.
The symptoms are familiar. A user reports that keyboard navigation in search results is broken. You check the test suite: 644 tests, all green. You grep for "search." You find 12 test files. Twenty minutes later, you still can't answer the question: which tests verify the search feature's acceptance criteria, and are all of those criteria covered?
This is the Green Bar Illusion. Passing tests prove that something works. They don't prove that everything you care about is verified.
Every approach in this comparison attempts to close that gap. They differ in where they put the specification, how they link it to tests, what they catch when the link breaks, and how much discipline they require to maintain.
What Typed Specifications Are (In 30 Seconds)
For readers who haven't seen the full series: typed specifications make features into types and acceptance criteria into methods. A decorator chain links tests to features at compile time. A scanner cross-references the two and fails the build if the chain breaks.
```typescript
// The feature IS a type
export abstract class NavigationFeature extends Feature {
  readonly id = 'NAV';
  abstract tocClickLoadsPage(): ACResult; // AC = abstract method
  abstract backButtonRestores(): ACResult;
}

// The test links to the AC via keyof T
@FeatureTest(NavigationFeature)
class NavigationTests {
  @Implements<NavigationFeature>('tocClickLoadsPage') // compiler-checked
  async 'clicking TOC loads page'({ page }) { ... }
}
```
The compiler catches typos ('typo' is not keyof NavigationFeature). The scanner catches gaps (an AC with no test). The quality gate fails the build. The system tracks itself.
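To make the keyof mechanism concrete, here is a minimal, self-contained sketch of how a compile-checked link can work. The names (`linkTest`, the coverage registry) are hypothetical stand-ins for the series' decorator and scanner, not the actual implementation:

```typescript
type ACResult = void;

abstract class Feature {
  abstract readonly id: string;
}

abstract class NavigationFeature extends Feature {
  readonly id = 'NAV';
  abstract tocClickLoadsPage(): ACResult;
  abstract backButtonRestores(): ACResult;
}

// The registry a compliance scanner would read: feature id -> covered AC names.
const coverage = new Map<string, Set<string>>();

// `ac` is constrained to keyof F, so a typo is a compile error, not a runtime surprise.
// (A fuller version would narrow keyof F to method keys only, excluding `id`.)
function linkTest<F extends Feature>(featureId: string, ac: keyof F & string): void {
  if (!coverage.has(featureId)) coverage.set(featureId, new Set());
  coverage.get(featureId)!.add(ac);
}

linkTest<NavigationFeature>('NAV', 'tocClickLoadsPage'); // OK
// linkTest<NavigationFeature>('NAV', 'tocClickLoadsPge'); // compile error: not keyof NavigationFeature

console.log([...coverage.get('NAV')!]);
```

The same constraint powers rename safety: renaming the abstract method in the IDE updates every call site, because the link is a type-level reference rather than a string comparison.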
That's the baseline. Now let's see how it compares.
The Comparison Matrix
| Approach | Requirement Representation | Test Linking | Typo Detection | Rename Safety | Completeness Check | Build Gate | Drift Resistance |
|---|---|---|---|---|---|---|---|
| Jira / ADO / Linear | Tickets (strings in DB) | Manual: ticket ID in test name | Never | None | Manual query / plugin | Webhook possible | Low |
| BDD (Gherkin) | .feature files (plain text) | Step defs match via regex | Runtime only | Partial (IDE plugin) | Runner warns on undefined | Yes (step failure) | Medium |
| Allure / TestRail / Zephyr | Test cases in external DB | Annotations with string IDs | Never | None | Dashboard reports | Plugin-based | Low |
| xUnit Traits / Jest tags | String tags on tests | [Trait("Feature","NAV")] | Never | None | No (freeform) | Filter only | Low |
| Directory conventions | Folder names | test/navigation/*.spec.ts | Never | None | No | No | Very low |
| Wiki / README matrices | Markdown tables | Human writes "Test X covers Y" | Never | None | Visual scan | No | Very low |
| OpenAPI / Contract testing | API schemas (YAML/JSON) | Contract tests validate schema | Runtime (mismatch) | Partial (schema $ref) | Schema diff tools | Yes (contract fail) | High for shape |
| C# Roslyn generators | Abstract records + attributes | nameof() + source generator | Compile-time | Full IDE refactor | Generator emits diagnostics | Yes (build error) | High |
| Typed Specs (TS) | Abstract classes + decorators | @Implements<F>('ac') + keyof T | Compile-time | Full IDE refactor | Compliance scanner | Yes (exit code 1) | High |
The rest of this series walks through each row.
Table of Contents
Part I: Jira, Azure DevOps, and Linear
Ticket IDs in test names. The one-way link problem. What happens when tickets move, split, or close. Why project management tools are essential for workflow but insufficient for requirement-test traceability.
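The one-way link is easy to see in code. A hypothetical sketch (the ticket ID and test names are invented for illustration): extracting ticket references from test names works in one direction only, and nothing fails when a ticket has zero tests.

```typescript
// Typical convention: a ticket ID embedded in a test name as a plain string.
const testNames = [
  'PROJ-1432: keyboard navigation in search results',
  'clicking TOC loads page', // no ticket reference at all -- nothing flags this
];

const ticketPattern = /\b[A-Z]+-\d+\b/;

// test -> ticket: recoverable by regex...
const linked = testNames.flatMap((name) => {
  const m = name.match(ticketPattern);
  return m ? [{ test: name, ticket: m[0] }] : [];
});

console.log(linked);
// ...but ticket -> tests requires querying the tracker for the full ticket list,
// and no build step breaks when a ticket is renamed, split, or closed.
// The link is convention, not contract.
```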
Part II: BDD Frameworks — Cucumber, Gherkin, SpecFlow
The closest competitor. Given/When/Then in .feature files, step definitions via regex. What BDD gets right (cross-functional communication) and where the indirection layer breaks down (typos at runtime, drift between feature files and step definitions, no compile-time safety).
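The indirection layer can be sketched in a few lines. This is a toy registry, not Cucumber's actual internals, but it shows the failure mode: step text matches definitions via regex at runtime, so a typo surfaces only when the suite runs.

```typescript
// Hypothetical sketch of regex-based step matching (the BDD indirection layer).
type StepDef = { pattern: RegExp; fn: (...args: string[]) => void };
const stepDefs: StepDef[] = [];

function Given(pattern: RegExp, fn: (...args: string[]) => void): void {
  stepDefs.push({ pattern, fn });
}

Given(/^the user clicks "([^"]+)"$/, (target) => {
  /* drive the UI against `target` */
});

// Matching happens at runtime; no compiler sits between .feature text and step defs.
function runStep(text: string): boolean {
  const def = stepDefs.find((d) => d.pattern.test(text));
  if (!def) return false; // "undefined step" -- discovered only during the run
  def.fn(...text.match(def.pattern)!.slice(1));
  return true;
}

console.log(runStep('the user clicks "Search"')); // true
console.log(runStep('the user clicsk "Search"')); // false -- typo caught at runtime only
```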
Part III: Test Management Platforms — Allure, TestRail, Zephyr
Dedicated platforms with dashboards, test plans, and historical trends. The two-sources-of-truth problem. Rich reporting versus single-source-of-truth tradeoffs.
Part IV: Test Framework Tagging — xUnit Traits, NUnit Categories, Jest Tags
Built-in string-based tagging mechanisms. Good for filtering. No canonical feature list, no completeness check, no rename safety.
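A short hypothetical sketch shows why freeform tags can't provide a completeness check: with no canonical feature list, a typo'd tag silently creates a new "feature" and the test simply vanishes from the filter.

```typescript
// String tags filter tests, but nothing validates the tag set itself.
const tests = [
  { name: 'clicking TOC loads page', tags: ['NAV'] },
  { name: 'back button restores', tags: ['NVA'] }, // typo -- silently its own "feature"
];

const navTests = tests.filter((t) => t.tags.includes('NAV'));
console.log(navTests.length); // 1 -- the typo'd test disappears from the filter, with no error
```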
Part V: Directory Conventions and Wiki Matrices
The lightest-weight approaches: folder structure mirrors features, or a human maintains a traceability table. Convention-based, discipline-dependent, first to drift.
Part VI: API Specification — OpenAPI, AsyncAPI, Contract Testing
Schema-first API development and contract testing. Excellent for API shape guarantees. Complementary to typed specs, not competing — they cover different layers.
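The "different layers" point can be made with a trivial sketch (schema and response invented for illustration): contract testing asserts that a response has the declared shape, which says nothing about whether a feature's acceptance criteria have tests.

```typescript
// A contract check guarantees shape, not feature coverage.
const schema = { type: 'object', required: ['id', 'title'] };
const response = { id: 'NAV', title: 'Navigation' };

const missing = schema.required.filter((k) => !(k in response));
console.log(missing.length === 0); // shape holds -- but no AC is thereby verified
```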
Part VII: C# Roslyn Source Generators — The Same Philosophy, Different Mechanics
The C# sibling of typed specifications. Abstract records, nameof(), Roslyn source generators, IDE squiggles. Where the C# approach is more powerful and where the TypeScript approach is more accessible.
Part VIII: The Verdict — Where Typed Specs Sit
A positioning chart, honest limitations, a decision guide for choosing the right approach based on team size, language, and project complexity. The layering insight: these approaches are complementary, not mutually exclusive.
How to Read This Series
- Developers evaluating approaches should scan the comparison matrix above, then read the sections for approaches they're currently using or considering.
- Architects designing a quality strategy should read Parts II (BDD), VII (Roslyn), and VIII (Verdict) for the architectural tradeoffs and decision framework.
- Teams already using Jira + BDD should read Parts I, II, and VIII to understand what typed specs add on top of — not instead of — their existing tools.
- Anyone in a hurry should read the comparison matrix above and Part VIII (The Verdict).
Related Posts
This series references and builds on:
- Onboarding Typed Specifications — the seven-part implementation series
- Requirements as Code in TypeScript — the deep-dive implementation
- Requirements as Code in C# — the Roslyn-based precedent
- CMF Part IX: Requirements DSL — the enterprise C# variant
- Where Requirements Meet DDD — the DDD mapping
- Scaling Requirements as Code — beyond 20 features
- Quality to Its Finest — the multi-layer testing foundation