
Ops.Testing -- Strategy Declarations as Types

"We should have load tested this." -- every post-mortem, every time.


The Problem

Production went down at 2 PM on a Thursday. The Order Service could not handle the flash sale traffic. The post-mortem had six people in a room and one question: "Did we load test this?"

The answer: "We have a testing strategy document."

The document existed. It was in Confluence, dated eighteen months ago, authored by someone who left the company. It said:

## OrderService Testing Strategy

- Unit tests: yes (target 80% coverage)
- Integration tests: yes (database + message broker)
- Load tests: planned for Q3
- Chaos tests: TBD
- Contract tests: would be nice

"Planned for Q3" was three quarters ago. Nobody noticed because the document was not connected to reality. There was no mechanism to verify that the planned tests actually existed, no build that failed when a test category was missing, no dashboard that showed the gap between strategy and execution.

This is a universal pattern. Teams write testing strategies. Those strategies describe what should exist. Then the strategies rot because there is no feedback loop between the document and the codebase.

The Testing DSL does not run tests. It does not replace xUnit, NUnit, or any test framework. It does one thing: it declares what tests should exist for a given target and validates that they do exist. The strategy is an attribute on the class. The source generator builds a matrix. The analyzer fires when reality diverges from the plan.


TestCategory Enum

[Flags]
public enum TestCategory
{
    None              = 0,
    Unit              = 1 << 0,
    Integration       = 1 << 1,
    E2E               = 1 << 2,
    Load              = 1 << 3,
    Chaos             = 1 << 4,
    Fuzz              = 1 << 5,
    Mutation          = 1 << 6,
    Contract          = 1 << 7,
    Smoke             = 1 << 8,
    Security          = 1 << 9,
    Accessibility     = 1 << 10,
    VisualRegression  = 1 << 11,
    PropertyBased     = 1 << 12,
    Snapshot          = 1 << 13,
    Approval          = 1 << 14
}

Fifteen categories. Flags, so they compose with |. A team does not need all of them. A team needs the ones it declares.
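A truncated copy of the enum makes the composition mechanics concrete; this standalone sketch redeclares only the flags it uses:

```csharp
using System;

// Abbreviated redeclaration of the flags above, for a self-contained demo.
[Flags]
enum TestCategory
{
    None = 0, Unit = 1 << 0, Integration = 1 << 1, Load = 1 << 3, Chaos = 1 << 4
}

class ComposeDemo
{
    static void Main()
    {
        // Declaring a strategy is just OR-ing flags together.
        var required = TestCategory.Unit | TestCategory.Integration | TestCategory.Load;

        Console.WriteLine(required);                             // Unit, Integration, Load
        Console.WriteLine(required.HasFlag(TestCategory.Load));  // True
        Console.WriteLine(required.HasFlag(TestCategory.Chaos)); // False
    }
}
```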

TestStrategy

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public sealed class TestStrategyAttribute : Attribute
{
    public TestStrategyAttribute(Type target) { }

    /// <summary>
    /// Which test categories are required for this target.
    /// </summary>
    public TestCategory Required { get; init; }

    /// <summary>
    /// Which categories are recommended but not enforced.
    /// </summary>
    public TestCategory Recommended { get; init; }

    /// <summary>
    /// Owner team for accountability.
    /// </summary>
    public string? Owner { get; init; }

    /// <summary>
    /// When this strategy was last reviewed.
    /// </summary>
    public string? LastReviewedDate { get; init; }
}

This is the central attribute. It binds a test strategy to a target type. The Required categories generate analyzer errors when missing. The Recommended categories generate warnings.

FuzzTarget

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public sealed class FuzzTargetAttribute : Attribute
{
    public FuzzTargetAttribute(Type targetType, string methodName) { }

    /// <summary>
    /// Fully qualified types of input generators (IFuzzInputGenerator).
    /// </summary>
    public Type[]? InputGenerators { get; init; }

    /// <summary>
    /// Number of iterations for the fuzz run.
    /// </summary>
    public int Iterations { get; init; } = 10_000;

    /// <summary>
    /// Maximum duration in seconds before the fuzz run is halted.
    /// </summary>
    public int TimeoutSeconds { get; init; } = 300;

    /// <summary>
    /// Seed for reproducibility.
    /// </summary>
    public int? Seed { get; init; }
}

Fuzz testing is a specific strategy that benefits from scaffolding. The attribute declares which method to fuzz, what generators to use, and how many iterations. The source generator emits a harness.
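The IFuzzInputGenerator contract is referenced but never shown in this post. The shape below is a guess at a minimal version; RandomOrderGenerator and PlaceOrderRequest are illustrative stand-ins, not the real OrderService types:

```csharp
using System;

// Assumed contract: one input per call, driven by the harness's seeded Random.
public interface IFuzzInputGenerator<T>
{
    T Generate(Random rng);
}

// Hypothetical request type, simplified for the demo.
public sealed record PlaceOrderRequest(int LineItemCount, long TotalCents);

// Hypothetical generator matching the [FuzzTarget] example.
public sealed class RandomOrderGenerator : IFuzzInputGenerator<PlaceOrderRequest>
{
    public PlaceOrderRequest Generate(Random rng) => new(
        LineItemCount: rng.Next(0, 50),                  // include the zero-items edge case
        TotalCents: rng.NextInt64(-1_000, 10_000_000));  // deliberately allow negatives

    static void Main()
    {
        // Same seed, same inputs: this is what makes fuzz failures replayable.
        var a = new RandomOrderGenerator().Generate(new Random(42));
        var b = new RandomOrderGenerator().Generate(new Random(42));
        Console.WriteLine(a == b); // True (records compare by value)
    }
}
```

Generating from the harness's seeded Random, rather than from internal generator state, is the design choice that makes a failing iteration reproducible from its recorded seed.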

ContractTestConsumer

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public sealed class ContractTestConsumerAttribute : Attribute
{
    public ContractTestConsumerAttribute(string consumer, string contractFile) { }

    /// <summary>
    /// How often the contract is verified against the provider.
    /// </summary>
    public ContractVerifySchedule VerifySchedule { get; init; }
        = ContractVerifySchedule.EveryBuild;

    /// <summary>
    /// The broker URL for Pact or similar contract tools.
    /// </summary>
    public string? BrokerUrl { get; init; }

    /// <summary>
    /// Whether to fail the build on contract mismatch.
    /// </summary>
    public bool FailOnMismatch { get; init; } = true;
}

public enum ContractVerifySchedule
{
    EveryBuild,
    Daily,
    Weekly,
    OnPullRequest
}

Pact-style consumer-driven contracts. The attribute declares who the consumer is, where the contract file lives, and how often verification runs. The analyzer checks that the contract file actually exists on disk.
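The existence check itself is small. A sketch of the build-time validation, assuming contract paths resolve relative to the project directory (the helper and message format here are illustrative, not the analyzer's actual code):

```csharp
using System;
using System.IO;

static class ContractFileCheck
{
    // Returns a TST004-style error for a missing contract file, null when it exists.
    public static string? Check(string projectDir, string contractFile) =>
        File.Exists(Path.Combine(projectDir, contractFile))
            ? null
            : $"error TST004: contract file '{contractFile}' does not exist on disk.";
}

class Demo
{
    static void Main()
    {
        var dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(dir);

        // Declared but missing: the analyzer would fail the build here.
        Console.WriteLine(ContractFileCheck.Check(dir, "contracts/payment-service-order-service.json"));

        // Once the pact file exists, the check passes.
        Directory.CreateDirectory(Path.Combine(dir, "contracts"));
        File.WriteAllText(Path.Combine(dir, "contracts", "payment-service-order-service.json"), "{}");
        Console.WriteLine(ContractFileCheck.Check(dir, "contracts/payment-service-order-service.json") ?? "ok");
    }
}
```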

MutationTestTarget

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public sealed class MutationTestTargetAttribute : Attribute
{
    public MutationTestTargetAttribute(Type targetClass) { }

    /// <summary>
    /// Minimum mutation score (0–100). Build fails below this.
    /// </summary>
    public int MinMutationScore { get; init; } = 60;

    /// <summary>
    /// Mutators to apply. Null means all defaults.
    /// </summary>
    public string[]? Mutators { get; init; }

    /// <summary>
    /// Test projects that kill mutants for this target.
    /// </summary>
    public string[]? TestProjects { get; init; }
}

PropertyTestInvariant

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public sealed class PropertyTestInvariantAttribute : Attribute
{
    public PropertyTestInvariantAttribute(Type targetClass, string invariantDescription) { }

    /// <summary>
    /// Number of random cases to generate.
    /// </summary>
    public int MaxCases { get; init; } = 100;

    /// <summary>
    /// Shrink failing inputs to minimal reproduction.
    /// </summary>
    public bool Shrink { get; init; } = true;
}

Property-based testing invariants. The attribute does not contain the property itself (that lives in the test project). It declares that a property invariant should exist for a target class and describes what it verifies.
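In the test project, the declared invariant becomes an executable property. A framework such as FsCheck would generate and shrink inputs; this hand-rolled sketch shows the shape for the second invariant, with a toy state machine standing in for the real OrderAggregate:

```csharp
using System;
using System.Linq;

enum OrderState { Pending, Cancelled, Shipped }

class PropertyCheck
{
    // Stand-in transition function; illegal transitions are no-ops.
    static OrderState Transition(OrderState s, string evt) => (s, evt) switch
    {
        (OrderState.Pending, "cancel") => OrderState.Cancelled,
        (OrderState.Pending, "ship")   => OrderState.Shipped,
        _ => s
    };

    static void Main()
    {
        string[] events = { "cancel", "ship" };
        var rng = new Random(12345); // fixed seed: reproducible, as the DSL recommends

        for (int i = 0; i < 100; i++) // MaxCases = 100 default
        {
            var state = OrderState.Pending;
            bool wasCancelled = false;
            foreach (var evt in Enumerable.Range(0, 20).Select(_ => events[rng.Next(events.Length)]))
            {
                state = Transition(state, evt);
                wasCancelled |= state == OrderState.Cancelled;
                // Invariant: "Cancelled order cannot transition to Shipped"
                if (wasCancelled && state == OrderState.Shipped)
                    throw new Exception($"Invariant violated at case {i}");
            }
        }
        Console.WriteLine("100 cases passed");
    }
}
```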


Usage: The OrderService Test Strategy

[TestStrategy(
    typeof(OrderService),
    Required = TestCategory.Unit
             | TestCategory.Integration
             | TestCategory.Load
             | TestCategory.Chaos
             | TestCategory.Contract,
    Recommended = TestCategory.Mutation
                | TestCategory.PropertyBased
                | TestCategory.Fuzz,
    Owner = "order-team",
    LastReviewedDate = "2026-03-15")]

[FuzzTarget(
    typeof(OrderService), nameof(OrderService.PlaceOrder),
    InputGenerators = new[] { typeof(RandomOrderGenerator) },
    Iterations = 50_000,
    TimeoutSeconds = 600)]

[ContractTestConsumer(
    "PaymentService",
    "contracts/payment-service-order-service.json",
    VerifySchedule = ContractVerifySchedule.EveryBuild,
    FailOnMismatch = true)]

[ContractTestConsumer(
    "InventoryService",
    "contracts/inventory-service-order-service.json",
    VerifySchedule = ContractVerifySchedule.OnPullRequest)]

[MutationTestTarget(
    typeof(OrderPriceCalculator),
    MinMutationScore = 80,
    TestProjects = new[] { "OrderService.Tests.Unit" })]

[PropertyTestInvariant(
    typeof(OrderAggregate),
    "Order total always equals sum of line items minus discounts plus tax")]

[PropertyTestInvariant(
    typeof(OrderAggregate),
    "Cancelled order cannot transition to Shipped")]
public partial class OrderServiceTestPlan { }

One class. Seven attributes. The entire testing strategy for the OrderService aggregate is declared in code, version-controlled, and compiler-enforced.

Notice the partial keyword. It gives the source generator a hook to add members -- discovery helpers, validation entry points -- to this class; the matrix itself is emitted alongside it as a standalone static class, shown next.


TestStrategyMatrix.g.cs

The source generator scans all [TestStrategy] attributes in the assembly and produces a typed registry:

// <auto-generated/>
using System;
using System.Collections.Generic;

namespace OrderService.Ops.Testing.Generated;

public static class TestStrategyMatrix
{
    public static readonly IReadOnlyDictionary<string, TestStrategyEntry> Entries
        = new Dictionary<string, TestStrategyEntry>
    {
        ["OrderService"] = new TestStrategyEntry
        {
            TargetType = typeof(OrderService),
            Required = TestCategory.Unit
                     | TestCategory.Integration
                     | TestCategory.Load
                     | TestCategory.Chaos
                     | TestCategory.Contract,
            Recommended = TestCategory.Mutation
                        | TestCategory.PropertyBased
                        | TestCategory.Fuzz,
            Owner = "order-team",
            LastReviewedDate = new DateOnly(2026, 3, 15),
            FuzzTargets = new[]
            {
                new FuzzTargetEntry
                {
                    TargetType = typeof(OrderService),
                    MethodName = "PlaceOrder",
                    Iterations = 50_000,
                    TimeoutSeconds = 600,
                    InputGenerators = new[] { typeof(RandomOrderGenerator) }
                }
            },
            ContractConsumers = new[]
            {
                new ContractConsumerEntry
                {
                    Consumer = "PaymentService",
                    ContractFile = "contracts/payment-service-order-service.json",
                    Schedule = ContractVerifySchedule.EveryBuild,
                    FailOnMismatch = true
                },
                new ContractConsumerEntry
                {
                    Consumer = "InventoryService",
                    ContractFile = "contracts/inventory-service-order-service.json",
                    Schedule = ContractVerifySchedule.OnPullRequest,
                    FailOnMismatch = true
                }
            },
            MutationTargets = new[]
            {
                new MutationTargetEntry
                {
                    TargetType = typeof(OrderPriceCalculator),
                    MinMutationScore = 80,
                    TestProjects = new[] { "OrderService.Tests.Unit" }
                }
            },
            PropertyInvariants = new[]
            {
                "Order total always equals sum of line items minus discounts plus tax",
                "Cancelled order cannot transition to Shipped"
            }
        }
    };

    /// <summary>
    /// Validates that discovered test classes cover the required categories.
    /// Called at build time by the analyzer; callable at runtime for dashboards.
    /// </summary>
    public static TestStrategyValidationResult Validate(
        IReadOnlyDictionary<string, TestCategory> discoveredTests)
    {
        var violations = new List<TestStrategyViolation>();

        foreach (var (target, entry) in Entries)
        {
            if (!discoveredTests.TryGetValue(target, out var covered))
                covered = TestCategory.None;

            var missing = entry.Required & ~covered;
            if (missing != TestCategory.None)
            {
                violations.Add(new TestStrategyViolation(
                    target, missing, Severity: DiagnosticSeverity.Error));
            }

            var missingRecommended = entry.Recommended & ~covered;
            if (missingRecommended != TestCategory.None)
            {
                violations.Add(new TestStrategyViolation(
                    target, missingRecommended, Severity: DiagnosticSeverity.Warning));
            }
        }

        return new TestStrategyValidationResult(violations);
    }
}

The matrix is a typed, queryable data structure. CI dashboards can read it. The analyzer calls Validate() during compilation. Runtime health checks can call it on startup.
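A runtime caller -- a startup health check, say -- feeds discovered coverage into the same bitwise logic Validate uses. This standalone sketch inlines that logic with an abbreviated enum rather than the generated types:

```csharp
using System;
using System.Collections.Generic;

// Abbreviated flags matching the values used by the OrderService strategy.
[Flags]
enum TestCategory { None = 0, Unit = 1, Integration = 2, Load = 8, Chaos = 16, Contract = 128 }

class ValidateDemo
{
    static void Main()
    {
        // What the strategy requires for the target ...
        var required = TestCategory.Unit | TestCategory.Integration
                     | TestCategory.Load | TestCategory.Chaos | TestCategory.Contract;

        // ... versus what test discovery actually found.
        var discovered = new Dictionary<string, TestCategory>
        {
            ["OrderService"] = TestCategory.Unit | TestCategory.Integration | TestCategory.Contract
        };

        discovered.TryGetValue("OrderService", out var covered);
        var missing = required & ~covered;

        // Matches the coverage report below: Load and Chaos are declared but absent.
        Console.WriteLine(missing); // Load, Chaos
    }
}
```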

TestCoverageReport.g.md

# Test Strategy Coverage Report
Generated: 2026-04-06T14:30:00Z

## OrderService

| Category       | Required | Status  | Test Class                        |
|----------------|----------|---------|-----------------------------------|
| Unit           | YES      | PASS    | OrderServiceUnitTests             |
| Integration    | YES      | PASS    | OrderServiceIntegrationTests      |
| Load           | YES      | MISSING | --                                |
| Chaos          | YES      | MISSING | --                                |
| Contract       | YES      | PASS    | OrderServiceContractTests         |
| Mutation       | no       | PASS    | OrderPriceCalculatorMutationTests |
| PropertyBased  | no       | MISSING | --                                |
| Fuzz           | no       | MISSING | --                                |

### Violations (2 errors, 2 warnings)
- ERROR: Load tests required but no test class with [LoadTest] found
- ERROR: Chaos tests required but no test class with [ChaosTest] found
- WARN: PropertyBased recommended but no property test found
- WARN: Fuzz recommended but no fuzz harness found

This file is generated into the build output. It is the single-page answer to "what tests do we have and what is missing?" No Confluence. No guessing.

fuzz-harness.g.cs

The source generator emits scaffolding for every [FuzzTarget]:

// <auto-generated/>
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace OrderService.Ops.Testing.Generated;

/// <summary>
/// Fuzz harness for OrderService.PlaceOrder.
/// Generated from [FuzzTarget] attribute.
/// </summary>
public static class OrderService_PlaceOrder_FuzzHarness
{
    private static readonly RandomOrderGenerator _generator = new();

    public static FuzzResult Run(int? seedOverride = null)
    {
        var seed = seedOverride ?? Environment.TickCount;
        var rng = new Random(seed);
        var failures = new List<FuzzFailure>();
        var iterations = 50_000;
        var timeout = TimeSpan.FromSeconds(600);
        var sw = Stopwatch.StartNew();

        for (int i = 0; i < iterations && sw.Elapsed < timeout; i++)
        {
            var input = _generator.Generate(rng);
            try
            {
                var sut = CreateOrderService(); // from DI or factory
                sut.PlaceOrder(input);
            }
            catch (Exception ex)
            {
                failures.Add(new FuzzFailure(i, seed, input, ex));
            }
        }

        return new FuzzResult(
            TargetMethod: "OrderService.PlaceOrder",
            Iterations: iterations,
            Seed: seed,
            Failures: failures,
            Duration: sw.Elapsed);
    }

    // Stub emitted by the generator; teams replace it with real wiring (DI, factory).
    private static OrderService CreateOrderService()
        => throw new NotImplementedException(
            "Customize CreateOrderService for your composition root.");
}

The harness is a starting point. Teams customize CreateOrderService() and the generator. The point is that the scaffolding exists automatically for every declared fuzz target, and the analyzer verifies the harness was not deleted.
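Because each FuzzFailure records the seed, a CI failure replays locally. A minimal stand-in harness (int inputs, simplified records -- not the generated code) demonstrates the workflow:

```csharp
using System;
using System.Collections.Generic;

sealed record FuzzFailure(int Iteration, int Seed, int Input, Exception Error);

class ReplayDemo
{
    // Minimal stand-in harness: fuzz a delegate with random ints, record failures with the seed.
    static List<FuzzFailure> Run(Action<int> sut, int seed, int iterations)
    {
        var rng = new Random(seed);
        var failures = new List<FuzzFailure>();
        for (int i = 0; i < iterations; i++)
        {
            var input = rng.Next(-100, 100);
            try { sut(input); }
            catch (Exception ex) { failures.Add(new FuzzFailure(i, seed, input, ex)); }
        }
        return failures;
    }

    static void Main()
    {
        // A system under test with a bug on negative quantities.
        Action<int> placeOrder = qty =>
        {
            if (qty < 0) throw new ArgumentOutOfRangeException(nameof(qty));
        };

        var ciRun = Run(placeOrder, seed: Environment.TickCount, iterations: 1000);

        // CI logs ciRun[0].Seed; replaying with it reproduces the exact failing input.
        var replay = Run(placeOrder, ciRun[0].Seed, iterations: 1000);
        Console.WriteLine(replay[0].Input == ciRun[0].Input); // True
    }
}
```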


TST001: Aggregate Without TestStrategy

error TST001: Type 'OrderAggregate' is marked with [DddAggregate] but has
no corresponding [TestStrategy]. Every aggregate root must have a declared
testing strategy.

This is the critical diagnostic. If the DDD DSL marks something as an aggregate, the Testing DSL requires a strategy declaration. No strategy means the build fails.

Trigger: any type with [DddAggregate], [DddEntity], or [OpsTarget] that lacks a [TestStrategy] referencing it.
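The real TST001 check runs as a Roslyn analyzer at compile time. The same rule can be mirrored as a runtime guard test with reflection; the attribute declarations below are simplified stand-ins for the DSL's attributes, not their actual definitions:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Stand-ins for the DSL attributes (the real ones live in the Ops libraries).
[AttributeUsage(AttributeTargets.Class)]
class DddAggregateAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Class)]
class TestStrategyAttribute : Attribute
{
    public Type Target { get; }
    public TestStrategyAttribute(Type target) => Target = target;
}

[DddAggregate] class OrderAggregate { }
[DddAggregate] class PaymentAggregate { }
[TestStrategy(typeof(OrderAggregate))] class OrderTestPlan { }

class Tst001Check
{
    static void Main()
    {
        var asm = Assembly.GetExecutingAssembly();

        // Every type referenced by some [TestStrategy] ...
        var covered = asm.GetTypes()
            .SelectMany(t => t.GetCustomAttributes<TestStrategyAttribute>())
            .Select(a => a.Target)
            .ToHashSet();

        // ... versus every declared aggregate. PaymentAggregate has no strategy.
        foreach (var agg in asm.GetTypes().Where(t =>
                 t.GetCustomAttribute<DddAggregateAttribute>() != null && !covered.Contains(t)))
            Console.WriteLine($"error TST001: Type '{agg.Name}' has no [TestStrategy].");
    }
}
```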

TST002: LoadTestProfile Without Load Test Project

warning TST002: [TestStrategy] for 'OrderService' requires Load testing but
no project reference matching '*LoadTests*' or '*Performance*' exists in
the solution.

The analyzer scans solution project references. If the strategy says TestCategory.Load is required but no project with a load-testing naming convention exists, it flags the gap.

TST003: SecurityScanTarget Without Scan Results

error TST003: [TestStrategy] for 'PaymentService' requires Security testing
but no SARIF file matching 'PaymentService*.sarif' was found in the build
output.

For security test categories, the analyzer looks for SARIF (Static Analysis Results Interchange Format) output files. Missing SARIF files mean the security scans are declared but not executed.

TST004: ContractTestConsumer Without Pact File

error TST004: [ContractTestConsumer("PaymentService",
"contracts/payment-service-order-service.json")] references a contract file
that does not exist on disk.

The contract file path is checked at build time. If the file does not exist, the contract is declared but unverifiable. Build fails.


Testing to Requirements

The Requirements DSL (covered in the CMF series) defines features with acceptance criteria and a compile-time compliance chain. The Testing DSL extends this with operational coverage:

// Requirements DSL says: "Feature FEAT-456 must be implemented and tested"
// Testing DSL says: "Feature FEAT-456 must ALSO have load tests and chaos tests"

[TestStrategy(
    typeof(OrderService),
    Required = TestCategory.Unit | TestCategory.Integration
             | TestCategory.Load | TestCategory.Chaos)]

// The generator cross-references:
// - REQ4xx quality gates check [FeatureTest] exists
// - TST0xx quality gates check TestCategory coverage exists
// Together: every feature is implemented, unit-tested, AND operationally tested.

The Requirements DSL asks "is this feature built and unit-tested?" The Testing DSL asks "is this feature also load-tested, chaos-tested, and contract-tested?" They compose. Neither replaces the other.

Testing to Load Testing (Ops.LoadTesting)

The Load Testing sub-DSL (Part 9) defines load profiles: [LoadProfile(Rps, Duration, Ramp)]. The Testing DSL strategy references those profiles:

// Testing DSL declares load testing is required
[TestStrategy(typeof(OrderService), Required = TestCategory.Load)]

// Load Testing DSL defines the actual profile
[LoadProfile(typeof(OrderService), Rps = 5000, Duration = "10m",
    Ramp = "linear", WarmUp = "1m")]

If the Testing DSL requires TestCategory.Load but no [LoadProfile] exists for the target, analyzer TST002 fires. The load strategy is not just declared -- it is cross-validated against the load profile definition.
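The cross-check reduces to a set difference over declared targets. A toy sketch with hard-coded target names (real input would come from the generated matrices of both DSLs):

```csharp
using System;
using System.Collections.Generic;

class CrossCheckDemo
{
    static void Main()
    {
        // Targets whose [TestStrategy] declares TestCategory.Load as required ...
        var requiresLoad = new HashSet<string> { "OrderService", "PaymentService" };

        // ... versus targets for which a [LoadProfile] actually exists.
        var hasLoadProfile = new HashSet<string> { "OrderService" };

        foreach (var target in requiresLoad)
            if (!hasLoadProfile.Contains(target))
                Console.WriteLine(
                    $"warning TST002: '{target}' requires Load testing but no [LoadProfile] was found.");
    }
}
```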

Testing to Chaos (Ops.Chaos)

Same pattern for chaos experiments:

// Testing DSL declares chaos testing is required
[TestStrategy(typeof(OrderService), Required = TestCategory.Chaos)]

// Chaos DSL defines the actual experiments
[ChaosExperiment(typeof(OrderService), "pod-kill",
    SteadyState = "p99 < 200ms", Blast = ChaosBlast.SinglePod)]

If the Testing DSL requires TestCategory.Chaos but no [ChaosExperiment] exists for the target, the analyzer flags the gap. The strategy is not a wish -- it is a constraint that the chaos infrastructure must satisfy.


Bridging Requirements and Operations

The Testing DSL sits at the boundary between two worlds. On one side: the Requirements DSL, which is a compile-time chain ensuring features are implemented and unit-tested. On the other side: the operational DSLs (Load, Chaos, Security, Contract), which define how systems behave under real-world conditions.

Without the Testing DSL, these two worlds do not talk to each other. A feature can be "done" according to the Requirements DSL (implemented, unit-tested, acceptance criteria passing) and still fail in production because nobody load-tested it. The Testing DSL closes that gap by declaring that "done" includes operational validation.

The declaration is cheap. Writing a [TestStrategy] attribute takes thirty seconds. The enforcement is what matters: the build will not pass until the declared tests exist. The Confluence document is replaced by a compiler diagnostic.

error TST001: Type 'OrderAggregate' has no TestStrategy.
error TST004: Contract file 'contracts/payment-service.json' not found.
warning TST002: Load tests required but no load test project found.

Three lines. More useful than a ten-page testing strategy document that nobody reads.
