
Closing the Loop: From Manual Bindings to AST-Inferred Traceability

What if you never had to tell the system which source files a feature touches — it just knew, by reading your tests?

Previously: Requirements as Code in TypeScript

This series is a sequel to Requirements as Code in TypeScript. That post built a system where features are abstract classes, acceptance criteria are abstract methods, @Implements decorators link tests to features, and the compiler catches broken references via keyof T. It worked. Twenty features, 112 acceptance criteria, a regex-based compliance scanner that reported coverage gaps.

But the chain had a hole. The scanner could tell you which tests cover Feature X — that was mechanically verified via decorators. What it could not tell you was which source files Feature X claims. That part was manual: 93 .bindings.ts files, each hand-written by a developer who decided "this AC verifies this symbol in this file." A first iteration also used sourceFiles[] arrays declared directly on each Feature class. Both drifted silently the moment someone refactored a test.

This series documents the refactor that closed every link in the chain.

What Changed

| Metric | Before | After |
| --- | --- | --- |
| Features | 93 | 96 |
| Acceptance criteria | 794 | 818 |
| Tests | 2,191 | 2,642 |
| Manual .bindings.ts files | 93 | 0 |
| Declared sourceFiles[] arrays | 93 | 0 (removed from Feature class) |
| Empty @Verifies methods (no resolved symbols) | 523 | 11 |
| Sync IO violations | 232 | 0 |
| Line coverage | ~43% | 99.75% |
| Orphan source files | 53 | 0 |
| Runtime coverage warnings | 17 | 0 |
| Unbound features | 4 | 0 |
| Quality gate | PASS | PASS |

The Closed Loop

The core insight: the test code is the single source of truth. Write @Verifies<Feature>('acName') in a test, import symbols from src/lib/, and an AST scanner does the rest — it walks the test body, follows local helpers transitively, resolves imports to repo-relative paths, and emits a BindingsManifest that tells you exactly which source files each acceptance criterion actually touches.

No manual declarations. No drift. No "conceptually true but factually false" bindings.
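The transitive resolution step can be sketched with a toy model. This is not the post's parseTestFile (which walks real ASTs); it assumes per-function facts — imported src/lib/ files and called local helpers — have already been extracted, and shows only the worklist walk that follows helpers transitively into the manifest. All names and the call graph below are illustrative.

```typescript
// Shape from the post: acceptance-criterion name -> claimed source files.
type BindingsManifest = Record<string, string[]>;

// Hypothetical facts an AST pass would extract per function.
interface FnFacts {
  imports: string[]; // repo-relative src/lib/ paths this body imports
  calls: string[];   // local helpers this body calls
}

function resolveBindings(
  tests: Record<string, FnFacts>,   // AC name -> facts of its test body
  helpers: Record<string, FnFacts>, // helper name -> facts
): BindingsManifest {
  const manifest: BindingsManifest = {};
  for (const [ac, facts] of Object.entries(tests)) {
    const files = new Set<string>(facts.imports);
    const queue = [...facts.calls];
    const seen = new Set<string>();
    // Worklist walk: follow helper calls transitively, once each.
    while (queue.length > 0) {
      const fn = queue.pop()!;
      if (seen.has(fn) || !(fn in helpers)) continue;
      seen.add(fn);
      for (const f of helpers[fn].imports) files.add(f);
      queue.push(...helpers[fn].calls);
    }
    manifest[ac] = [...files].sort();
  }
  return manifest;
}
```

The naive-vs-transitive gap the series describes (523 empty methods down to 11) corresponds to skipping versus performing the worklist loop: a test that only touches source files through a shared fixture helper resolves to nothing without it.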

[Diagram: Before AST inference — three links were declarative-only and drift-prone.]
[Diagram: After AST inference — every link is mechanically verified via the BindingsManifest.]

Part I: The Closed Loop — How Tests Became the Source of Truth

The AST scanner that replaced 93 manual binding files. How parseTestFile walks test bodies, resolves imports transitively through local helpers and class members, and emits a BindingsManifest. Why a naive scanner missed 523 methods and a transitive walker missed only 11. The feature that tracks itself: TEST-BINDINGS-INF with 22 ACs verifying the scanner.

Part II: Bidirectional Queries — From Feature to File and Back

The BindingsManifest enables questions in both directions: "which files does ACCENT claim?" and "which features will break if I touch scroll-spy-machine.ts?" The work trace CLI tool with 8 sub-commands. The rename from @Implements to @Verifies. The safe migration via diffManifests.
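The two query directions fall out of one data structure. A minimal sketch, assuming a feature-to-files manifest shape (the real tool is the trace CLI; the feature and file names below are invented):

```typescript
type Manifest = Record<string, string[]>; // feature -> claimed source files

// Forward: which files does a feature claim?
const claims = (m: Manifest, feature: string): string[] => m[feature] ?? [];

// Reverse: which features break if this file changes?
function impactedBy(m: Manifest, file: string): string[] {
  return Object.entries(m)
    .filter(([, files]) => files.includes(file))
    .map(([feature]) => feature)
    .sort();
}
```

The reverse query is just an inversion of the forward map, which is why both stay in sync for free: there is one manifest, not two hand-maintained indexes.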

Part III: The Manifest as an Architecture X-Ray

The BindingsManifest is a bipartite graph Feature-File. Query it to detect SRP violations (files claimed by 5+ features), measure feature coupling (shared files between features), compute cohesion and isolation scores, find encapsulation breaches, and assess module extraction feasibility. SOLID as a metric, not an opinion.
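Two of those metrics can be sketched directly over the bipartite graph. The 5-feature threshold comes from the post; everything else here (function names, sample data) is an illustrative assumption:

```typescript
type Manifest = Record<string, string[]>; // feature -> claimed source files

// SRP suspects: files claimed by `threshold` or more features.
function srpSuspects(m: Manifest, threshold = 5): string[] {
  const counts = new Map<string, number>();
  for (const files of Object.values(m))
    for (const f of files) counts.set(f, (counts.get(f) ?? 0) + 1);
  return [...counts].filter(([, n]) => n >= threshold).map(([f]) => f).sort();
}

// Coupling between two features: how many files they both claim.
function coupling(m: Manifest, a: string, b: string): number {
  const setA = new Set(m[a] ?? []);
  return (m[b] ?? []).filter((f) => setA.has(f)).length;
}
```

This is the sense in which SOLID becomes a metric: each principle reduces to a graph query with a number attached, so violations are counted rather than argued about.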

Part IV: Hexagonal Architecture and Coverage Hardening

How 232 sync IO violations were extracted into hexagonal ports (src/lib/external.ts) by six parallel AI agents, making every factory unit-testable with cheap fakes. A concrete before/after on ScrollSpyMachine. Coverage from 43% to 99.75%. The coverage-flush bug that SOLID resolved accidentally. The sync-usage scanner as a permanent ratchet.
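The port pattern itself is small. A minimal sketch of the idea, not the post's actual src/lib/external.ts — the Clock port, the debouncer factory, and the fake below are all invented stand-ins:

```typescript
// Port: the only way the factory can observe time.
interface Clock {
  now(): number;
}

// Production adapter does real IO; tests never touch it.
const systemClock: Clock = { now: () => Date.now() };

// Factory takes the port, so unit tests inject a cheap fake.
function makeDebouncer(clock: Clock, windowMs: number) {
  let last = -Infinity;
  return {
    fire(): boolean {
      const t = clock.now();
      if (t - last < windowMs) return false; // still inside the window
      last = t;
      return true;
    },
  };
}
```

With the sync call behind the port, a test drives a fake clock forward deterministically instead of sleeping, which is how coverage of time-dependent branches becomes cheap.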

Part V: AI-Driven Self-Implementation — When ACs Write Their Own Tests

The system is now complete enough that an AI agent can read a feature class, produce decorated tests, and the scanner validates completeness automatically. The self-implementation loop with two concrete examples (query functions and FSM lifecycle guards). Skills as reusable workflows. The manifest as a platform. What agents produce, what they cannot verify, and why human review remains the correctness oracle.

Part VI: Dev Tooling — The TUI That Ties It All Together

Two terminal dashboards that make the entire infrastructure practical. A command hub exposing every verb — serve, build, test, audit, git — in two keystrokes. A file watcher that accumulates changes, classifies them by type (markdown, image, config), and computes the minimal pipeline to execute. The feedback loop that ties Parts I–V into a zero-context-switch workflow.
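The classify-then-union step can be sketched as follows. The change kinds (markdown, image, config) are from the post; the step names and extension rules are invented placeholders for whatever the real watcher runs:

```typescript
type Kind = "markdown" | "image" | "config" | "other";

function classify(path: string): Kind {
  if (path.endsWith(".md")) return "markdown";
  if (/\.(png|jpe?g|webp|svg)$/.test(path)) return "image";
  if (/\.(json|ya?ml|toml)$/.test(path)) return "config";
  return "other";
}

// Hypothetical pipeline steps each change kind requires.
const STEPS: Record<Kind, string[]> = {
  markdown: ["render", "reload"],
  image: ["optimize", "reload"],
  config: ["rebuild", "reload"],
  other: ["rebuild", "test", "reload"],
};

// Minimal pipeline: ordered union of steps over all accumulated changes.
function minimalPipeline(changed: string[]): string[] {
  const wanted = new Set(changed.flatMap((p) => STEPS[classify(p)]));
  const order = ["optimize", "render", "rebuild", "test", "reload"];
  return order.filter((s) => wanted.has(s));
}
```

Accumulating changes before computing the union is what keeps a burst of saves from triggering one full pipeline per file.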

Part VII: Closing the Gaps — Event Topology Beyond Drift Detection

The event topology scanner caught drift but had five blind spots: coordinator files invisible to the graph, factory composition edges, phantom delegation chains, feature-event cross-references, and lifecycle scope. The fix wasn't five new detectors — it was extracting coordination logic into decorated src/lib/ files the scanner already knew how to read. Plus: unifying regex and AST extraction paths, composition edges in a D3-ready JSON graph, and mutation testing with Stryker.

Prerequisites

This series assumes you have read Requirements as Code in TypeScript. That post explains the decorator system (@FeatureTest, @Verifies, @Exclude), the Feature abstract class pattern, the compliance scanner, and the keyof T type safety. This series is about the delta — the breakthrough that closed the remaining open links.
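For readers who skipped the prior post, the keyof T safety reduces to this: an acceptance criterion is referenced by name, and the name is constrained to the feature's own methods. A minimal sketch — the Accent class, its AC, and the bare verifies() helper are illustrative stand-ins for the real decorator:

```typescript
// A feature class: each abstract method is one acceptance criterion.
abstract class Accent {
  abstract themeColorApplies(): void;
}

// keyof T & string: only real AC names of T compile.
function verifies<T>(ac: keyof T & string): string {
  return ac; // the real system records the binding via a decorator
}

const bound = verifies<Accent>("themeColorApplies"); // compiles
// verifies<Accent>("themeColorAplies"); // compile error: typo is not a key of Accent
```

Rename the AC method and every test referencing the old name fails to compile — that is the mechanically verified link the rest of the series builds on.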

For the C# origin of this approach, see Requirements as Code: A Type-Safe Chain. For a comparative analysis of spec-driven vs typed specifications, see the versus series.
