
Physical Boundaries Are Not Architecture

Industrial Monorepo Series — Part 2 of 7: 1. The Problem · 2. Physical vs Logical · 3. Requirements as Projects · 4. At Scale · 5. Migration · 6. ROI · 7. Inverted Deps

You can have 50 DLLs and zero architecture. A DLL boundary tells the linker where to cut. It tells the developer nothing about which business capability lives inside.


In Part 1 we met MegaCorp — the 50-project monorepo that grew organically over a decade. The new developer asked: "Which code implements order processing?" and got no answer. This post explains why.

The root cause is not spaghetti code, not missing documentation, not bad developers. The root cause is that the only boundaries in the monorepo are physical — DLL boundaries, project boundaries, solution folder boundaries. And physical boundaries carry no semantic information about what the code inside them is supposed to do.


What Is a Physical Boundary?

A physical boundary is a separation enforced by the build system, the file system, or the deployment pipeline. In .NET:

| Physical Boundary | What It Separates | What It Enforces |
| --- | --- | --- |
| .csproj project | Source files into compilation units | Compile-time visibility (internal vs public) |
| .dll assembly | Compiled IL into loadable units | Assembly-level access modifiers |
| .sln solution | Projects into IDE grouping | Nothing (solutions are IDE metadata) |
| Solution folder | Projects into visual groups | Nothing (folders are cosmetic) |
| NuGet package | Assemblies into distributable units | Version compatibility |
| Docker container | Processes into isolated runtimes | Network and filesystem isolation |

Every one of these boundaries answers the question: where does this code physically live?

None of them answers: what business capability does this code implement?

A project named MegaCorp.OrderService might implement order processing. Or it might implement order processing, inventory reservation, payment capture, notification dispatch, and audit logging — because "order" was the first feature and everything else got added to the same project over time. The project name is a label, not a contract. The compiler doesn't check that MegaCorp.OrderService only contains code related to orders.

The Directory Structure Lie

Developers often organize code by folder within a project:

MegaCorp.Core/
├── Orders/
│   ├── OrderService.cs
│   ├── OrderValidator.cs
│   ├── OrderPricingEngine.cs
│   └── OrderEventHandler.cs
├── Payments/
│   ├── PaymentProcessor.cs
│   ├── PaymentValidator.cs
│   └── RefundService.cs
├── Inventory/
│   ├── StockChecker.cs
│   ├── ReservationService.cs
│   └── WarehouseAdapter.cs
├── Users/
│   ├── UserService.cs
│   ├── RoleManager.cs
│   └── AuthorizationService.cs
├── Notifications/
│   ├── NotificationSender.cs
│   ├── EmailTemplateEngine.cs
│   └── SmsGateway.cs
└── Common/
    ├── BaseService.cs
    ├── ServiceHelper.cs
    └── Extensions.cs

This looks organized. Five domain areas, each in its own folder. But folders enforce nothing:

  • OrderService.cs can import and call PaymentProcessor.cs directly. There is no access boundary.
  • StockChecker.cs can instantiate NotificationSender.cs. There is no dependency rule.
  • UserService.cs can reach into Orders/ and mutate order state. There is no invariant protection.
  • The Common/ folder is a gravity well — code migrates there when it doesn't fit anywhere else, and eventually everything depends on it.

Folders are a filing system for humans. The compiler sees a flat list of .cs files in a single project. Every class can reference every other class. There are no boundaries.

The Namespace Illusion

Namespaces create a hierarchy of names, not a hierarchy of access:

namespace MegaCorp.Core.Orders;

public class OrderService
{
    // Can freely reference anything in MegaCorp.Core.Payments,
    // MegaCorp.Core.Inventory, MegaCorp.Core.Users, etc.
    // The namespace provides no isolation whatsoever.
    private readonly PaymentProcessor _payments;     // from MegaCorp.Core.Payments
    private readonly StockChecker _inventory;         // from MegaCorp.Core.Inventory
    private readonly NotificationSender _notifier;    // from MegaCorp.Core.Notifications
    private readonly UserService _users;              // from MegaCorp.Core.Users
}

In C#, namespaces are purely organizational. They affect name resolution (which using statements you need), not accessibility. A class in MegaCorp.Core.Orders can freely instantiate, inherit from, or depend on any public class in MegaCorp.Core.Payments. The compiler doesn't care about namespace hierarchy as an access boundary.
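Even `internal` does not create a boundary inside one project, because `internal` means "visible to the whole assembly". A minimal sketch (hypothetical names) that compiles without a single warning about the cross-namespace reach:

```csharp
namespace MegaCorp.Core.Payments
{
    // internal: hidden from OTHER assemblies, wide open to this one.
    internal class PaymentProcessor
    {
        public decimal Capture(decimal amount) => amount;
    }
}

namespace MegaCorp.Core.Orders
{
    public class OrderService
    {
        // Compiles without complaint: an internal type from a "different"
        // namespace is fully visible within the same assembly.
        public decimal Pay(decimal total) =>
            new MegaCorp.Core.Payments.PaymentProcessor().Capture(total);
    }
}
```

The only access tool the language offers below the assembly level is `private`/`file`, which operates per type or per file, not per folder or per namespace.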

Compare this to a real boundary: a <ProjectReference>. If MegaCorp.OrderService does NOT reference MegaCorp.PaymentGateway, then OrderService.cs literally cannot see PaymentProcessor.cs. The compiler rejects it. That's a physical boundary with teeth.

But even that boundary only says: "this project can see that project." It doesn't say: "this project implements the Order Processing feature" or "this class satisfies acceptance criterion AC-3 of the Order Processing specification."


The Two ServiceProvider Anti-Patterns — In Full

Part 1 introduced the two faces of the ServiceProvider god-object. This section dissects them with complete implementations, call chains, and sequence diagrams showing why they destroy any hope of architectural boundaries.

Anti-Pattern A: The Static ServiceLocator

Here is the complete implementation as it typically exists in an industrial monorepo — not a simplified teaching example, but the actual class with all the accreted methods that accumulate over years:

// MegaCorp.Infrastructure/ServiceLocator.cs
// Original author: unknown (git blame shows a developer who left in 2019)
// Last modified: 2024 (someone added the TryGetService method)
// Referenced by: 47 files across 12 projects

using Microsoft.Extensions.DependencyInjection;

namespace MegaCorp.Infrastructure;

/// <summary>
/// Global service locator. USE CONSTRUCTOR INJECTION INSTEAD.
/// (This comment has been here since 2020. Nobody follows it.)
/// </summary>
public static class ServiceLocator
{
    private static IServiceProvider _provider = null!;
    private static readonly object _lock = new();
    private static bool _initialized;

    /// <summary>
    /// Called once in Program.cs after the DI container is built.
    /// </summary>
    public static void Initialize(IServiceProvider provider)
    {
        lock (_lock)
        {
            _provider = provider ?? throw new ArgumentNullException(nameof(provider));
            _initialized = true;
        }
    }

    /// <summary>
    /// Get a required service. Throws if not registered.
    /// </summary>
    public static T GetService<T>() where T : notnull
    {
        EnsureInitialized();
        return _provider.GetRequiredService<T>();
    }

    /// <summary>
    /// Get a service or null if not registered.
    /// Added in 2021 to handle optional dependencies.
    /// </summary>
    public static T? TryGetService<T>() where T : class
    {
        EnsureInitialized();
        return _provider.GetService<T>();
    }

    /// <summary>
    /// Get a service by type. Used when the type isn't known at compile time.
    /// Added in 2022 for the plugin system that was never finished.
    /// </summary>
    public static object GetService(Type serviceType)
    {
        EnsureInitialized();
        return _provider.GetRequiredService(serviceType);
    }

    /// <summary>
    /// Get all implementations of an interface.
    /// Added in 2023 for the notification system refactor.
    /// </summary>
    public static IEnumerable<T> GetServices<T>() where T : notnull
    {
        EnsureInitialized();
        return _provider.GetServices<T>();
    }

    /// <summary>
    /// Create a scope for scoped services.
    /// Added in 2024 when someone realized background jobs
    /// were resolving scoped services from the root provider.
    /// </summary>
    public static IServiceScope CreateScope()
    {
        EnsureInitialized();
        return _provider.CreateScope();
    }

    /// <summary>
    /// Reset for testing. Added after test parallelism broke everything.
    /// </summary>
    internal static void Reset()
    {
        lock (_lock)
        {
            _provider = null!;
            _initialized = false;
        }
    }

    private static void EnsureInitialized()
    {
        if (!_initialized)
            throw new InvalidOperationException(
                "ServiceLocator not initialized. Call Initialize() in Program.cs first.");
    }
}

This class is 80 lines. It has been extended five times over six years. Each extension was a reasonable response to a real problem. The result is a static god-object that wraps the entire DI container and is accessible from anywhere in the solution — including projects that have no business knowing about each other's services.
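The Reset() method at the bottom hints at the deepest cost: the locator is process-wide mutable state. In the condensed sketch below (MiniLocator and FixedClock are stand-ins invented for this example), one test fixture silently replaces another's container, which is exactly the failure mode that "broke everything" under test parallelism:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IClock { DateTime Now { get; } }

public sealed class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) => _now = now;
    public DateTime Now => _now;
}

// Condensed stand-in for the 80-line ServiceLocator above.
public static class MiniLocator
{
    private static IServiceProvider? _provider;
    public static void Initialize(IServiceProvider p) => _provider = p;
    public static T Get<T>() where T : notnull => _provider!.GetRequiredService<T>();
}

public static class Demo
{
    public static int Run()
    {
        // Fixture A installs a fake clock for its tests...
        var fixtureA = new ServiceCollection()
            .AddSingleton<IClock>(new FixedClock(new DateTime(2020, 1, 1)))
            .BuildServiceProvider();
        MiniLocator.Initialize(fixtureA);

        // ...then fixture B, running in parallel, installs its own,
        // overwriting A's container mid-test.
        var fixtureB = new ServiceCollection()
            .AddSingleton<IClock>(new FixedClock(new DateTime(2021, 1, 1)))
            .BuildServiceProvider();
        MiniLocator.Initialize(fixtureB);

        // Fixture A's assertions now read fixture B's clock.
        return MiniLocator.Get<IClock>().Now.Year;
    }
}
```

With constructor injection each fixture would hold its own provider and the classes under test would never touch a global; with a locator, test isolation requires serializing the whole suite or hacks like Reset().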

Here's how it gets used in practice — a real OrderService that processes an order by reaching into nine different services through the ServiceLocator:

// MegaCorp.Core/Orders/OrderService.cs
namespace MegaCorp.Core.Orders;

public class OrderService
{
    // No constructor parameters. No visible dependencies.
    // A reader must scan every method body to discover what this class actually needs.

    public async Task<OrderResult> ProcessOrder(CreateOrderCommand command)
    {
        // Step 1: Validate the order
        var validator = ServiceLocator.GetService<IOrderValidator>();
        var validationResult = validator.Validate(command);
        if (!validationResult.IsValid)
            return OrderResult.Failed(validationResult.Errors);

        // Step 2: Check inventory
        var inventory = ServiceLocator.GetService<IInventoryChecker>();
        var stockResult = await inventory.CheckAvailability(
            command.Items.Select(i => new StockQuery(i.Sku, i.Quantity)));
        if (!stockResult.AllAvailable)
            return OrderResult.Failed("Insufficient stock");

        // Step 3: Calculate pricing
        var pricing = ServiceLocator.GetService<IPricingEngine>();
        var pricedOrder = pricing.Calculate(command, stockResult.Reservations);

        // Step 4: Reserve inventory
        var reservation = ServiceLocator.GetService<IInventoryReserver>();
        var reservationId = await reservation.Reserve(stockResult.Reservations);

        try
        {
            // Step 5: Capture payment
            var payment = ServiceLocator.GetService<IPaymentProcessor>();
            var paymentResult = await payment.Capture(new PaymentRequest
            {
                Amount = pricedOrder.Total,
                Currency = pricedOrder.Currency,
                CustomerId = command.CustomerId,
                PaymentMethodId = command.PaymentMethodId
            });

            if (!paymentResult.Success)
            {
                await reservation.Release(reservationId);
                return OrderResult.Failed($"Payment failed: {paymentResult.Error}");
            }

            // Step 6: Create the order record
            var repository = ServiceLocator.GetService<IOrderRepository>();
            var order = Order.Create(command, pricedOrder, paymentResult, reservationId);
            await repository.Save(order);

            // Step 7: Publish domain event
            var eventBus = ServiceLocator.GetService<IEventBus>();
            await eventBus.Publish(new OrderCreatedEvent
            {
                OrderId = order.Id,
                CustomerId = command.CustomerId,
                Total = pricedOrder.Total,
                Items = command.Items.Count
            });

            // Step 8: Send confirmation
            var notifications = ServiceLocator.GetService<INotificationSender>();
            await notifications.Send(new OrderConfirmationNotification
            {
                OrderId = order.Id,
                CustomerEmail = command.CustomerEmail,
                OrderSummary = pricedOrder.Summary
            });

            // Step 9: Audit log
            var audit = ServiceLocator.GetService<IAuditLogger>();
            audit.Log("OrderCreated", new { order.Id, command.CustomerId, pricedOrder.Total });

            return OrderResult.Success(order.Id);
        }
        catch
        {
            await reservation.Release(reservationId);
            throw;
        }
    }
}

Nine services resolved at runtime. Zero visible in the class signature. The compiler sees OrderService as a class with no dependencies. The reality is it depends on the entire application.

Here's what the runtime resolution looks like:

[Sequence diagram: each ProcessOrder step calls ServiceLocator.GetService<T>(), which asks the DI container to resolve the concrete implementation at runtime]

Every dependency goes through the ServiceLocator. The ServiceLocator goes through the DI container. The DI container resolves the concrete implementation at runtime. At compile time, the compiler sees none of this. The dependency graph is invisible.

Anti-Pattern B: IServiceProvider as Constructor Parameter

The "refactored" version replaces the static class with constructor injection of IServiceProvider itself:

// MegaCorp.Core/Orders/OrderServiceV2.cs
namespace MegaCorp.Core.Orders;

public class OrderServiceV2
{
    private readonly IServiceProvider _serviceProvider;
    private readonly ILogger<OrderServiceV2> _logger;

    public OrderServiceV2(IServiceProvider serviceProvider, ILogger<OrderServiceV2> logger)
    {
        _serviceProvider = serviceProvider;
        _logger = logger;
    }

    public async Task<OrderResult> ProcessOrder(CreateOrderCommand command)
    {
        _logger.LogInformation("Processing order for customer {CustomerId}", command.CustomerId);

        // Same pattern, different syntax
        using var scope = _serviceProvider.CreateScope();
        var sp = scope.ServiceProvider;

        var validator = sp.GetRequiredService<IOrderValidator>();
        var validationResult = validator.Validate(command);
        if (!validationResult.IsValid)
            return OrderResult.Failed(validationResult.Errors);

        var inventory = sp.GetRequiredService<IInventoryChecker>();
        var stockResult = await inventory.CheckAvailability(
            command.Items.Select(i => new StockQuery(i.Sku, i.Quantity)));
        if (!stockResult.AllAvailable)
            return OrderResult.Failed("Insufficient stock");

        var pricing = sp.GetRequiredService<IPricingEngine>();
        var pricedOrder = pricing.Calculate(command, stockResult.Reservations);

        var reservation = sp.GetRequiredService<IInventoryReserver>();
        var reservationId = await reservation.Reserve(stockResult.Reservations);

        try
        {
            var payment = sp.GetRequiredService<IPaymentProcessor>();
            var paymentResult = await payment.Capture(new PaymentRequest
            {
                Amount = pricedOrder.Total,
                Currency = pricedOrder.Currency,
                CustomerId = command.CustomerId,
                PaymentMethodId = command.PaymentMethodId
            });

            if (!paymentResult.Success)
            {
                await reservation.Release(reservationId);
                return OrderResult.Failed($"Payment failed: {paymentResult.Error}");
            }

            var repository = sp.GetRequiredService<IOrderRepository>();
            var order = Order.Create(command, pricedOrder, paymentResult, reservationId);
            await repository.Save(order);

            var eventBus = sp.GetRequiredService<IEventBus>();
            await eventBus.Publish(new OrderCreatedEvent
            {
                OrderId = order.Id,
                CustomerId = command.CustomerId,
                Total = pricedOrder.Total,
                Items = command.Items.Count
            });

            var notifications = sp.GetRequiredService<INotificationSender>();
            await notifications.Send(new OrderConfirmationNotification
            {
                OrderId = order.Id,
                CustomerEmail = command.CustomerEmail,
                OrderSummary = pricedOrder.Summary
            });

            var audit = sp.GetRequiredService<IAuditLogger>();
            audit.Log("OrderCreated", new { order.Id, command.CustomerId, pricedOrder.Total });

            return OrderResult.Success(order.Id);
        }
        catch
        {
            await reservation.Release(reservationId);
            throw;
        }
    }
}

The constructor now says: OrderServiceV2(IServiceProvider, ILogger<OrderServiceV2>). This looks like proper DI. It is not. The IServiceProvider parameter is the entire DI container — the class can resolve any service in the application. The constructor signature is semantically equivalent to OrderServiceV2(Everything).

[Sequence diagram: OrderServiceV2 resolving the same nine services at runtime, now through the injected IServiceProvider instead of the static ServiceLocator]

Why Anti-Pattern B Is Worse

Anti-Pattern A (static ServiceLocator) is at least honestly bad. It's a static class. Everyone knows it's a code smell. Linters flag it. Code reviews catch it (sometimes).

Anti-Pattern B is dishonest. It passes code review because "it uses DI." Senior developers approve it because the constructor has parameters. Linters don't flag it because IServiceProvider is a valid dependency to inject. But the effect is identical: the class depends on everything, the compiler sees nothing, and the dependency graph is a runtime secret.

| Aspect | Anti-Pattern A (Static) | Anti-Pattern B (Injected SP) |
| --- | --- | --- |
| How dependencies are resolved | ServiceLocator.GetService<T>() | _sp.GetRequiredService<T>() |
| Visible in constructor | No constructor params | IServiceProvider (opaque) |
| Testability | Must initialize static state | Must build full DI container or mock IServiceProvider |
| Code review detection | Easy — "ServiceLocator" is a red flag | Hard — looks like normal DI |
| Linter detection | Some linters flag static service locators | No standard linter flags IServiceProvider injection |
| Compile-time dependency graph | Invisible | Invisible |
| Can resolve any service | Yes | Yes |
| Architectural enforcement | None | None |
| Semantics of constructor | "I need nothing" | "I need everything" |

Both patterns have the same fundamental problem: the compile-time dependency graph is a lie. The compiler thinks OrderService depends on nothing (A) or depends on IServiceProvider (B). The reality is it depends on 9 specific services, each with its own transitive dependencies, spanning 5 different projects. This information is only discoverable by reading every line of every method body.
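For contrast, here is a sketch of the honest signature: the same nine dependencies that ProcessOrder resolves at runtime, promoted to constructor parameters. The interface stubs exist only to make the sketch compile on its own; in the post they stand in for the real MegaCorp interfaces.

```csharp
// Stubs for the interfaces used throughout this post.
public interface IOrderValidator { }
public interface IInventoryChecker { }
public interface IPricingEngine { }
public interface IInventoryReserver { }
public interface IPaymentProcessor { }
public interface IOrderRepository { }
public interface IEventBus { }
public interface INotificationSender { }
public interface IAuditLogger { }

public class OrderServiceV3
{
    private readonly IOrderValidator _validator;
    private readonly IInventoryChecker _inventory;
    private readonly IPricingEngine _pricing;
    private readonly IInventoryReserver _reservation;
    private readonly IPaymentProcessor _payments;
    private readonly IOrderRepository _orders;
    private readonly IEventBus _eventBus;
    private readonly INotificationSender _notifications;
    private readonly IAuditLogger _audit;

    // Nine parameters is ugly, and that ugliness is the point: the class
    // really does depend on nine services, and now everyone can see it.
    public OrderServiceV3(
        IOrderValidator validator,
        IInventoryChecker inventory,
        IPricingEngine pricing,
        IInventoryReserver reservation,
        IPaymentProcessor payments,
        IOrderRepository orders,
        IEventBus eventBus,
        INotificationSender notifications,
        IAuditLogger audit)
    {
        _validator = validator;
        _inventory = inventory;
        _pricing = pricing;
        _reservation = reservation;
        _payments = payments;
        _orders = orders;
        _eventBus = eventBus;
        _notifications = notifications;
        _audit = audit;
    }

    // ProcessOrder body would be unchanged, minus the nine resolution calls.
}
```

The wide constructor is a design pressure gauge: it makes the compiler, DI validation, and every reader see the class's real surface, and it makes over-coupling painful enough to motivate splitting the class.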


The Anatomy of a Physical-Only Monorepo

Let's map the actual dependency structure of MegaCorp's order processing flow — not the project references (physical), but the runtime service resolution (logical):

[Diagram: left, the compile-time ProjectReference graph; right, the runtime service-resolution graph routed through ServiceLocator]

The left side is what the compiler sees: 4 projects with clean <ProjectReference> edges. It looks layered. It looks structured.

The right side is what happens at runtime: OrderService resolves nine services through the ServiceLocator, reaching into 6 different projects that have no compile-time reference to each other. The "layered" architecture is a fiction maintained by project references. The actual dependency graph is a star pattern centered on ServiceLocator.

The Numbers

In a typical industrial monorepo like MegaCorp, a quick audit reveals:

| Metric | Value | What It Means |
| --- | --- | --- |
| Total .csproj files | 50 | Physical separation |
| <ProjectReference> edges | ~120 | Compile-time dependency graph |
| ServiceLocator.GetService<> calls | ~340 | Hidden runtime dependencies |
| IServiceProvider.GetRequiredService<> calls | ~180 | Hidden runtime dependencies (v2) |
| Total hidden dependencies | ~520 | 4.3x the visible graph |
| Classes with ServiceLocator usage | 47 | 47 classes with invisible deps |
| Classes with IServiceProvider injection | 63 | 63 classes pretending to use DI |
| Average hidden deps per class | 4.7 | Each class hides ~5 real dependencies |
| Projects reachable via hidden deps but not via ProjectRef | 8 | 8 phantom cross-project dependencies |

The real dependency graph is 4.3 times larger than the compile-time graph. Eight projects that appear disconnected in the .sln are actually deeply coupled at runtime through ServiceLocator calls. The compiler thinks the architecture is layered. The runtime knows it's a mesh.
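Numbers like these can be gathered with a crude text scan over the repository. A sketch (the regex patterns are assumptions; a Roslyn-based scan would be more precise, and the second pattern will also count legitimate framework call sites, so treat the output as an upper bound):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static class HiddenDependencyAudit
{
    // Call-site patterns for the two anti-patterns discussed above.
    private static readonly Dictionary<string, Regex> Patterns = new()
    {
        ["static locator"]    = new Regex(@"ServiceLocator\s*\.\s*GetService<"),
        ["injected provider"] = new Regex(@"\.\s*GetRequiredService<"),
    };

    // Counts hidden-resolution call sites per pattern across source texts
    // (feed it the contents of every *.cs file in the repo).
    public static Dictionary<string, int> Count(IEnumerable<string> sources) =>
        Patterns.ToDictionary(
            p => p.Key,
            p => sources.Sum(s => p.Value.Matches(s).Count));
}
```

Run over the file contents of a monorepo, the two totals divided by the `<ProjectReference>` edge count give exactly the hidden-to-visible ratio in the table above.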

The DI Registration: Where the Truth Hides

If you want to understand the real architecture of an industrial monorepo, don't read the project references. Read the DI registration. In MegaCorp, this is Program.cs — and it's a 500-line confession:

// MegaCorp.Web/Program.cs — abridged, but representative
var builder = WebApplication.CreateBuilder(args);

// === Section 1: Core services (MegaCorp.Core) ===
builder.Services.AddScoped<IOrderService, OrderServiceV2>();
// TODO: Switch back to OrderService when pricing bug is fixed (2022-03-15)
builder.Services.AddScoped<IOrderValidator, OrderValidator>();
builder.Services.AddScoped<IPricingEngine, PricingEngineV3>();
// V1 deprecated 2020, V2 deprecated 2022, V3 current
builder.Services.AddScoped<IDiscountCalculator, DiscountCalculator>();
builder.Services.AddScoped<ICouponEngine, CouponEngine>();
builder.Services.AddScoped<IBulkDiscountCalculator, BulkDiscountCalculator>();
builder.Services.AddScoped<IOrderEventHandler, OrderEventHandler>();
// builder.Services.AddScoped<IOrderProcessor, OrderProcessor>(); // disabled 2023-09
builder.Services.AddScoped<IUserService, UserService>();
builder.Services.AddScoped<IRoleManager, RoleManager>();
builder.Services.AddScoped<IAuthorizationService, AuthorizationService>();
builder.Services.AddTransient<IPasswordHasher, BCryptPasswordHasher>();

// === Section 2: Data layer (MegaCorp.Data) ===
builder.Services.AddScoped<IOrderRepository, EfOrderRepository>();
builder.Services.AddScoped<IUserRepository, EfUserRepository>();
builder.Services.AddScoped<IInventoryRepository, EfInventoryRepository>();
builder.Services.AddScoped<IPaymentRepository, EfPaymentRepository>();
builder.Services.AddScoped<IAuditRepository, EfAuditRepository>();
builder.Services.AddScoped<INotificationRepository, EfNotificationRepository>();
builder.Services.AddScoped(typeof(IRepository<>), typeof(EfRepository<>));
builder.Services.AddDbContext<MegaCorpDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

// === Section 3: External services ===
builder.Services.AddScoped<IPaymentProcessor, StripePaymentProcessor>();
// builder.Services.AddScoped<IPaymentProcessor, BraintreePaymentProcessor>();
// Switched from Braintree to Stripe in 2023. Braintree code still in repo.
builder.Services.AddScoped<IInventoryChecker, InventoryServiceClient>();
builder.Services.AddScoped<IInventoryReserver, InventoryServiceClient>();
builder.Services.AddScoped<INotificationSender, MultiChannelNotificationSender>();
builder.Services.AddScoped<IEmailSender, SendGridEmailSender>();
builder.Services.AddScoped<ISmsSender, TwilioSmsSender>();
builder.Services.AddScoped<IPushNotificationSender, FirebasePushSender>();

// === Section 4: Infrastructure ===
builder.Services.AddScoped<IEventBus, RabbitMqEventBus>();
builder.Services.AddScoped<ICacheService, RedisCacheService>();
builder.Services.AddScoped<IAuditLogger, DatabaseAuditLogger>();
builder.Services.AddScoped<ISearchService, ElasticsearchService>();
builder.Services.AddSingleton<IFeatureFlagService, LaunchDarklyFeatureFlagService>();

// === Section 5: Background services ===
builder.Services.AddHostedService<OrderFulfillmentWorker>();
builder.Services.AddHostedService<InventorySyncWorker>();
builder.Services.AddHostedService<NotificationRetryWorker>();
builder.Services.AddHostedService<AuditArchiveWorker>();

// === Section 6: The ServiceLocator initialization ===
var app = builder.Build();
ServiceLocator.Initialize(app.Services);
// ^ This one line makes the entire DI container
// accessible from anywhere in the solution.
// Every ServiceLocator.GetService<T>() call
// bypasses the compile-time dependency graph.

This 500-line file is the actual architecture of MegaCorp. Not the project references. Not the solution folders. Not the class diagram on the wiki. This file defines what implements what, which version is active, which alternatives exist, and how everything connects. It's a runtime routing table — and it's the only source of truth.

The problem: this file is checked by the runtime (if a registration is missing, you get a System.InvalidOperationException in production), not by the compiler (there's no compile-time guarantee that all required registrations exist). You can delete a registration, the solution compiles fine, and the application crashes at the first request that hits the missing service.
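There is one stock partial mitigation worth knowing: Microsoft.Extensions.DependencyInjection can validate the constructor chains of registered services when the container is built, turning the production exception into a startup failure. A sketch (service names hypothetical):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IOrderRepository { }
public interface IOrderService { }
public class OrderServiceV2 : IOrderService
{
    public OrderServiceV2(IOrderRepository repository) { } // needs a repository
}

public static class StartupValidationDemo
{
    public static bool MissingRegistrationIsCaughtAtBuild()
    {
        var services = new ServiceCollection();
        services.AddScoped<IOrderService, OrderServiceV2>();
        // IOrderRepository registration "accidentally" deleted.

        try
        {
            services.BuildServiceProvider(new ServiceProviderOptions
            {
                ValidateOnBuild = true,  // walk every registration's ctor chain now
                ValidateScopes = true    // also catch scoped-from-root mistakes
            });
            return false; // would mean the hole went undetected
        }
        catch (AggregateException)
        {
            return true;  // missing IOrderRepository reported at startup
        }
    }
}
```

Note what this does not cover: ValidateOnBuild walks only constructor-injected dependencies of registered services, so every ServiceLocator.GetService<T>() call site in the solution remains invisible to it. The anti-patterns above defeat even this safety net.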

The Configuration Sprawl

The DI registration is just the beginning. Industrial monorepos accumulate configuration in multiple places:

MegaCorp.Web/
├── Program.cs                        ← 500 lines of DI registration
├── appsettings.json                  ← 300 lines of config
├── appsettings.Development.json      ← 200 lines overriding production
├── appsettings.Staging.json          ← 180 lines (subtly different from Dev)
├── appsettings.Production.json       ← 250 lines (the "real" config)
├── Startup/
│   ├── AuthConfiguration.cs          ← 100 lines configuring JWT + OAuth
│   ├── CorsConfiguration.cs          ← 50 lines configuring CORS
│   ├── SwaggerConfiguration.cs       ← 80 lines configuring OpenAPI
│   ├── HealthCheckConfiguration.cs   ← 70 lines configuring health checks
│   ├── CachingConfiguration.cs       ← 60 lines configuring Redis
│   └── LoggingConfiguration.cs       ← 90 lines configuring Serilog
└── Middleware/
    ├── ErrorHandlingMiddleware.cs    ← 80 lines
    ├── AuditMiddleware.cs            ← 60 lines
    ├── CorrelationIdMiddleware.cs    ← 40 lines
    └── FeatureFlagMiddleware.cs      ← 50 lines

None of this configuration is linked to business features. The error handling middleware catches exceptions from all features. The audit middleware logs all operations. The health checks verify infrastructure, not business capability. There's no way to ask: "What configuration is needed for Order Processing?" because the configuration is organized by technical concern, not by feature.
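Stock ASP.NET Core does offer one small counter-move: typed options classes bound to named configuration sections, which at least groups a feature's settings under a single key. A sketch (section and property names are illustrative; in-memory config keeps it self-contained where a real app would read appsettings.json):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

public sealed class OrderProcessingOptions
{
    public int MaxItemsPerOrder { get; set; }
    public int ReservationTtlSeconds { get; set; }
}

public static class FeatureConfigDemo
{
    public static OrderProcessingOptions Load()
    {
        var config = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string?>
            {
                ["OrderProcessing:MaxItemsPerOrder"] = "100",
                ["OrderProcessing:ReservationTtlSeconds"] = "900",
            })
            .Build();

        // Everything Order Processing needs now lives under one section,
        // answerable by key, if not yet enforced by the type system.
        return config.GetSection("OrderProcessing").Get<OrderProcessingOptions>()!;
    }
}
```

This answers "what configuration does Order Processing need?" by convention only; nothing stops the feature's code from reading any other section, which is the same physical-versus-logical gap in miniature.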

What a Roslyn Analyzer Sees

If you run a static analysis tool on MegaCorp, here's what it can tell you:

Project dependency analysis:
  MegaCorp.Web → MegaCorp.Core → MegaCorp.Data → MegaCorp.Infrastructure
  MegaCorp.Web → MegaCorp.Infrastructure
  MegaCorp.Worker → MegaCorp.Core → MegaCorp.Data → MegaCorp.Infrastructure
  ... (120 edges total)

Unused code detection:
  WARNING: MegaCorp.Core/Orders/LegacyOrderHandler.cs — no compile-time references
  WARNING: MegaCorp.Core/Orders/OrderProcessor.cs — no compile-time references
  WARNING: MegaCorp.Contracts/V2/OrderDtoV2.cs — no compile-time references
  ... (but are these truly unused, or referenced via ServiceLocator/reflection?)

Cyclomatic complexity:
  HIGH: MegaCorp.Core/Orders/OrderService.cs — ProcessOrder method (42)
  HIGH: MegaCorp.Web/Program.cs — ConfigureServices method (38)
  ...

Code duplication:
  DUPLICATE: OrderService.ProcessOrder and OrderServiceV2.ProcessOrder (87% similar)
  DUPLICATE: OrderValidator.Validate and OrderValidatorNew.Validate (72% similar)
  ...

And here's what it cannot tell you:

Feature analysis:
  ??? Which features exist in this codebase?
  ??? Which code implements "Order Processing"?
  ??? Which acceptance criteria are tested?
  ??? Which features have no implementation?
  ??? Which features have no tests?
  ??? What percentage of acceptance criteria are covered?

The analyzer can see physical structure (projects, references, classes, methods). It cannot see logical structure (features, acceptance criteria, requirement-to-code links) because that information doesn't exist in the codebase. It's in Jira. It's in people's heads. It's in the comment on MEGA-4521. It's not in the type system.


The Build System's Perspective

From MSBuild's point of view, the 50-project monorepo is a directed acyclic graph of compilation units. Let's look at what the build system knows and doesn't know:

What MSBuild Knows

<!-- MegaCorp.OrderService/MegaCorp.OrderService.csproj -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <RootNamespace>MegaCorp.OrderService</RootNamespace>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\MegaCorp.Core\MegaCorp.Core.csproj" />
    <ProjectReference Include="..\MegaCorp.Data\MegaCorp.Data.csproj" />
    <ProjectReference Include="..\MegaCorp.Contracts\MegaCorp.Contracts.csproj" />
    <ProjectReference Include="..\MegaCorp.Infrastructure\MegaCorp.Infrastructure.csproj" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="8.0.0" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="8.0.0" />
    <PackageReference Include="Polly" Version="8.2.0" />
  </ItemGroup>
</Project>

MSBuild knows:

  • This project depends on 4 other projects
  • It uses 3 NuGet packages
  • It targets .NET 8.0
  • It produces MegaCorp.OrderService.dll

MSBuild does NOT know:

  • This project implements the "Order Processing" feature
  • This project should implement 6 acceptance criteria
  • This project is owned by the "order-team"
  • This project has 3 untested acceptance criteria
  • Changing MegaCorp.Core/Orders/PricingEngine.cs affects this project's behavior (through ServiceLocator, invisible to build)

Build Order vs Feature Order

MSBuild compiles projects in dependency order:

Build order (MSBuild):
  1. MegaCorp.Infrastructure       (no project deps)
  2. MegaCorp.Common.Utils         (no project deps)
  3. MegaCorp.Contracts            (→ Infrastructure, Utils)
  4. MegaCorp.Core.Abstractions    (→ Contracts, Utils)
  5. MegaCorp.Data                 (→ Core.Abstractions, Contracts, Utils)
  6. MegaCorp.Core                 (→ Core.Abstractions, Data, Contracts, Utils, EventBus)
  7. MegaCorp.EventBus             (→ Contracts, Utils)
  8. MegaCorp.PaymentGateway       (→ Core, Contracts, EventBus, Utils)
  9. ... (42 more projects)

This order is determined by the <ProjectReference> DAG. It has nothing to do with business features. "Order Processing" doesn't have a build order because "Order Processing" doesn't exist as a build concept. The code that implements order processing is scattered across projects at build levels 5, 6, 7, 8, and beyond.

Incremental Builds — The Hidden Cost

MSBuild supports incremental builds: if a project hasn't changed, it's not recompiled. But incremental builds follow the <ProjectReference> graph, not the feature graph:

Developer changes: MegaCorp.Core/Orders/PricingEngine.cs

MSBuild incremental rebuild:
  ✓ MegaCorp.Core                  ← changed
  ✓ MegaCorp.Web                   ← depends on Core
  ✓ MegaCorp.Worker                ← depends on Core
  ✓ MegaCorp.PaymentGateway        ← depends on Core
  ✓ MegaCorp.InventoryService      ← depends on Core
  ✓ MegaCorp.NotificationService   ← depends on Core
  ✓ MegaCorp.BillingService        ← depends on Core (via Payment)
  ✓ MegaCorp.ReportingService      ← depends on Core
  ✓ MegaCorp.AuditService          ← depends on Core
  ✓ MegaCorp.SearchService         ← depends on Core
  ✓ MegaCorp.UserService           ← depends on Core
  = 11 projects rebuilt

What actually changed from a feature perspective:
  ✓ Order Processing pricing logic  ← PricingEngine is part of this feature
  ✗ User Management                 ← not affected
  ✗ Inventory Management            ← not affected (stock checking is separate)
  ✗ Reporting                       ← not affected
  = 1 feature affected

The build system rebuilds 11 projects because MegaCorp.Core is the gravity well — everything depends on it. But only 1 feature is actually affected. The 10 extra rebuilds are wasted work caused by the physical boundary structure. If the pricing engine lived in a feature-scoped project (say, MegaCorp.OrderProcessing), only that project and its direct dependents would rebuild.

This is another cost of physical-only boundaries: build time scales with the physical dependency graph, not the logical change scope. A single-feature change triggers a solution-wide rebuild because the physical boundaries don't align with feature boundaries.
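The rebuild set MSBuild computes is simply the transitive dependents of the changed project. Here is a toy computation over a slice of the graph from the listing above (the graph data is an assumption, hand-copied from the example, not read from real .csproj files):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class RebuildScope
{
    // A slice of the physical reference graph (dependent → dependencies).
    static readonly Dictionary<string, string[]> DependsOn = new()
    {
        ["MegaCorp.Web"]            = new[] { "MegaCorp.Core" },
        ["MegaCorp.Worker"]         = new[] { "MegaCorp.Core" },
        ["MegaCorp.PaymentGateway"] = new[] { "MegaCorp.Core" },
        ["MegaCorp.BillingService"] = new[] { "MegaCorp.PaymentGateway" },
        ["MegaCorp.Core"]           = Array.Empty<string>(),
    };

    // Everything that must rebuild = the changed project + all transitive dependents.
    public static HashSet<string> Compute(string changed)
    {
        var rebuilt = new HashSet<string> { changed };
        bool grew = true;
        while (grew)  // fixed point over reverse edges
        {
            grew = false;
            foreach (var (project, deps) in DependsOn)
                if (!rebuilt.Contains(project) && deps.Any(rebuilt.Contains))
                {
                    rebuilt.Add(project);
                    grew = true;
                }
        }
        return rebuilt;
    }

    public static void Main() =>
        // Prints the full 5-project rebuild closure for a Core change.
        Console.WriteLine(string.Join(", ", Compute("MegaCorp.Core").OrderBy(p => p)));
}
```

Note that the closure is driven entirely by project edges: the computation has no input that could express "only the pricing feature changed."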

What If the Build System Knew About Features?

Imagine MSBuild could say:

Developer changes: MegaCorp.Core/Orders/PricingEngine.cs

Feature-aware incremental rebuild:
  Feature affected: OrderProcessingFeature
  Projects implementing OrderProcessingFeature:
    ✓ MegaCorp.OrderService          ← : IOrderProcessingSpec
    ✓ MegaCorp.PaymentGateway        ← : IPaymentIntegrationSpec (for OrderProcessing)
  Tests for OrderProcessingFeature:
    ✓ MegaCorp.OrderService.Tests    ← [TestsFor(typeof(OrderProcessingFeature))]
  = 3 projects rebuilt (not 11)

Acceptance criteria affected:
    ✓ AC-1: OrderTotalMustBePositive  ← PricingEngine calculates totals
    ✗ AC-2: InventoryReservedBeforePayment  ← not affected
    ...

This is what logical boundaries enable. The build system doesn't just know "what depends on what" — it knows "what feature changed, what code implements it, and what tests verify it." The rebuild is scoped. The test run is scoped. The code review is scoped.

This isn't hypothetical — it's what the Roslyn analyzer in the Requirements as Code architecture provides. The [ForRequirement] attributes and the TraceabilityMatrix.g.cs give the build system exactly this information.
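To make the mechanism concrete, here is a minimal, self-contained sketch — all names are assumptions modeled on this series, not the actual Requirements as Code implementation — of how an attribute like [ForRequirement] can be turned into a criterion-to-code map. The real architecture does this at compile time with a Roslyn analyzer; this sketch uses runtime reflection, but the principle is identical:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical attribute linking code to an acceptance criterion.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Interface | AttributeTargets.Method)]
public sealed class ForRequirementAttribute : Attribute
{
    public Type Feature { get; }
    public string Criterion { get; }
    public ForRequirementAttribute(Type feature, string criterion)
        => (Feature, Criterion) = (feature, criterion);
}

// Hypothetical requirement model: the feature is a type, ACs are named members,
// so references survive rename refactorings via nameof().
public static class OrderProcessingFeature
{
    public const string OrderTotalMustBePositive =
        "Orders with non-positive totals must be rejected";
}

[ForRequirement(typeof(OrderProcessingFeature),
    nameof(OrderProcessingFeature.OrderTotalMustBePositive))]
public class PricingEngine { /* implementation elided */ }

public static class TraceabilityScan
{
    // Group every attributed type under the criterion it claims to implement.
    public static ILookup<string, string> Build(Assembly asm) =>
        asm.GetTypes()
           .SelectMany(t => t.GetCustomAttributes<ForRequirementAttribute>()
                             .Select(a => (a.Criterion, t.Name)))
           .ToLookup(x => x.Criterion, x => x.Name);

    public static void Main()
    {
        foreach (var g in Build(typeof(TraceabilityScan).Assembly))
            Console.WriteLine($"{g.Key} <- {string.Join(", ", g)}");
    }
}
```

Once this map exists, "which projects implement OrderProcessingFeature?" and "which criteria have no implementor?" become lookups instead of archaeology.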


What Physical Boundaries Actually Enforce

Let's be precise about what physical boundaries do and do not enforce:

What a .csproj Boundary DOES Enforce

1. Compilation isolation:

// In MegaCorp.OrderService project
// This project does NOT have <ProjectReference> to MegaCorp.PaymentGateway.

using MegaCorp.PaymentGateway; // ← COMPILE ERROR: CS0246
// The type or namespace 'PaymentGateway' does not exist
// in the namespace 'MegaCorp'

If you don't add the <ProjectReference>, the code cannot see the other project's types. This is real enforcement.

2. Internal access modifier:

// In MegaCorp.Core
internal class PricingEngineHelper  // Only visible within MegaCorp.Core
{
    internal decimal ApplyDiscount(decimal total, decimal discount) { ... }
}

// In MegaCorp.Web — even with <ProjectReference> to MegaCorp.Core
var helper = new PricingEngineHelper(); // ← COMPILE ERROR
// 'PricingEngineHelper' is inaccessible due to its protection level

The internal keyword limits visibility to the assembly. This is useful for encapsulation.

3. Build parallelism:

The build system can compile independent projects in parallel, so more, smaller projects can mean faster builds (up to a point — per-project compilation overhead eventually dominates). This is a tooling concern, not an architecture concern.

What a .csproj Boundary Does NOT Enforce

1. Feature ownership:

There is no way to declare "this project implements the Order Processing feature." The project is a container of source files. It has no semantic metadata about business capabilities.

2. Specification contracts:

A <ProjectReference> says "I can see your public types." It does not say "I implement your interface contract" or "I satisfy these acceptance criteria." The reference is a visibility grant, not a behavioral promise.

3. Dependency direction:

Project references form a DAG (directed acyclic graph), but the DAG direction is about compilation order, not architectural layering. Nothing prevents an "infrastructure" project from referencing a "domain" project — the compiler doesn't know what "infrastructure" or "domain" mean.

<!-- In MegaCorp.Data.csproj (infrastructure), referencing the domain-heavy Core.
     This compiles fine. The compiler doesn't enforce architectural layers. -->
<ProjectReference Include="..\MegaCorp.Core\MegaCorp.Core.csproj" />

4. Completeness:

There is no way to ask: "Does this project implement all the methods required by a specification?" unless you use interface implementation (: ISpec). But in ServiceLocator-heavy codebases, the interfaces are resolved at runtime, not enforced at compile time. A class can implement half an interface through separate methods called via ServiceLocator, and the compiler won't notice.

5. Traceability:

There is no compile-time mechanism to trace from a business requirement to the code that implements it. No attribute, no type reference, no compiler diagnostic says: "This class implements Feature X, Acceptance Criterion 3."
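The completeness gap (point 4) is easy to demonstrate. The following is a self-contained toy — a dictionary-backed locator, not MegaCorp's actual ServiceLocator — showing a class with the right method names that never declares the contract. Everything compiles; the contract check is deferred to the first runtime resolution:

```csharp
using System;
using System.Collections.Generic;

public interface IOrderService
{
    string ProcessOrder(string command);
    Guid CancelOrder(Guid orderId);  // never implemented below
}

// Right method name, no ": IOrderService" declaration.
// The compiler never checks this class against the contract.
public class OrderServiceV2
{
    public string ProcessOrder(string command) => $"processed {command}";
    // CancelOrder is simply missing — no compile error anywhere.
}

// Minimal dictionary-backed locator (illustrative only).
public static class ServiceLocator
{
    static readonly Dictionary<Type, object> Services = new();
    public static void Register<T>(object impl) => Services[typeof(T)] = impl;
    public static T GetService<T>() => (T)Services[typeof(T)];  // cast happens at RUNTIME
}

public static class Demo
{
    public static void Main()
    {
        ServiceLocator.Register<IOrderService>(new OrderServiceV2());
        try
        {
            // Compiles fine; fails here with InvalidCastException.
            var svc = ServiceLocator.GetService<IOrderService>();
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("Contract violation found at runtime, not compile time.");
        }
    }
}
```

With a declared `: IOrderService`, the missing CancelOrder would be compile error CS0535. Without it, the hole ships.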


The Contracts Grab-Bag: When Interfaces Fail to Be Boundaries

In many industrial monorepos, someone tried to introduce boundaries by creating a "Contracts" project — a grab-bag of interfaces:

// MegaCorp.Contracts/IOrderService.cs
namespace MegaCorp.Contracts;

public interface IOrderService
{
    Task<OrderResult> ProcessOrder(CreateOrderCommand command);
    Task<OrderResult> CancelOrder(Guid orderId);
    Task<OrderResult> RefundOrder(Guid orderId, decimal amount);
    Task<IReadOnlyList<Order>> GetOrdersByCustomer(Guid customerId);
    Task<Order?> GetOrderById(Guid orderId);
    Task UpdateOrderStatus(Guid orderId, OrderStatus status);
    Task<decimal> CalculateOrderTotal(IReadOnlyList<OrderItem> items);
    Task ApplyDiscount(Guid orderId, string couponCode);
    Task<bool> ValidateOrder(CreateOrderCommand command);
    Task<ShippingEstimate> EstimateShipping(Guid orderId);
}

This is a 10-method interface that mixes:

  • Commands: ProcessOrder, CancelOrder, RefundOrder, UpdateOrderStatus, ApplyDiscount
  • Queries: GetOrdersByCustomer, GetOrderById, CalculateOrderTotal, EstimateShipping
  • Validation: ValidateOrder

It's not a specification — it's a remote procedure call surface. It says "here are the things you can ask the order system to do." It doesn't say:

  • What the business requires ("orders with negative totals must be rejected")
  • Which acceptance criteria exist
  • Which methods satisfy which requirement
  • What the PM considers "done"

The interface IS the implementation surface repackaged as a contract. It evolves when the implementation changes, not when the requirements change. When someone adds a new method to OrderService, they add a matching method to IOrderService. The interface follows the code, not the requirements.

This is the fundamental difference between a contract grab-bag and a specification:

// Contract grab-bag: mirrors the implementation
public interface IOrderService
{
    Task<OrderResult> ProcessOrder(CreateOrderCommand command);
    // 9 more methods that describe WHAT THE CODE DOES
}

// Specification: mirrors the requirement
[ForRequirement(typeof(OrderProcessingFeature))]
public interface IOrderProcessingSpec
{
    [ForRequirement(typeof(OrderProcessingFeature),
        nameof(OrderProcessingFeature.OrderTotalMustBePositive))]
    Result ValidateOrderTotal(Order order);

    [ForRequirement(typeof(OrderProcessingFeature),
        nameof(OrderProcessingFeature.InventoryReservedBeforePayment))]
    Result ReserveInventory(Order order);

    [ForRequirement(typeof(OrderProcessingFeature),
        nameof(OrderProcessingFeature.PaymentCapturedAfterReservation))]
    Result CapturePayment(Order order, InventoryReservation reservation);

    // Methods that describe WHAT THE BUSINESS REQUIRES
}

The grab-bag has 10 methods because the implementation has 10 operations. The specification has 3 methods because the feature has 3 acceptance criteria. The grab-bag evolves with the code. The specification evolves with the requirements. The grab-bag is a physical artifact (it mirrors the implementation surface). The specification is a logical artifact (it mirrors the business requirement).

Most industrial monorepos have the grab-bag. None of them have the specification. That's the gap.


The "Shared Library" Trap

Another common pattern: the shared library that starts small and becomes everything.

MegaCorp.Common.Utils/
├── Extensions/
│   ├── StringExtensions.cs          ← 47 extension methods
│   ├── DateTimeExtensions.cs        ← 23 extension methods
│   ├── EnumerableExtensions.cs      ← 31 extension methods
│   ├── GuidExtensions.cs            ← 8 extension methods
│   └── TaskExtensions.cs            ← 12 extension methods
├── Helpers/
│   ├── JsonHelper.cs                ← Wraps System.Text.Json
│   ├── CryptoHelper.cs             ← Wraps BCrypt
│   ├── EmailHelper.cs              ← Email validation regex
│   ├── PhoneHelper.cs              ← Phone formatting
│   ├── CurrencyHelper.cs           ← Currency conversion
│   ├── FileHelper.cs               ← File I/O wrappers
│   └── RetryHelper.cs              ← Polly retry wrapper
├── Constants/
│   ├── AppConstants.cs              ← 200+ const strings
│   ├── ErrorCodes.cs                ← 150+ error code strings
│   └── RegexPatterns.cs             ← 40+ regex patterns
├── Attributes/
│   ├── AuditableAttribute.cs
│   ├── CacheableAttribute.cs
│   └── LoggableAttribute.cs
├── Exceptions/
│   ├── BusinessException.cs
│   ├── ValidationException.cs
│   ├── NotFoundException.cs
│   ├── ConflictException.cs
│   └── UnauthorizedException.cs
└── Models/
    ├── PagedResult.cs
    ├── SortDirection.cs
    ├── ApiResponse.cs
    └── ErrorResponse.cs

Every project in the solution references MegaCorp.Common.Utils. It's the bottom of the dependency graph. Changing anything in it triggers a rebuild of the entire solution. But nobody can split it because:

  1. StringExtensions is used in 48 out of 50 projects
  2. AppConstants is used in 42 projects
  3. BusinessException is used in 38 projects

The shared library is a gravity well. Code that doesn't belong anywhere gets put here. Code that belongs somewhere specific gets put here because "it might be useful elsewhere." The library grows monotonically. It never shrinks.

This is a physical boundary problem: the library exists because it's a convenient compilation unit, not because it represents a coherent domain concept. A logical boundary would split this into:

  • Domain value types in a SharedKernel project (defined by what the business needs, not by what the code uses)
  • Infrastructure concerns in their own projects (retry, caching, logging — technical, not business)
  • Feature-specific utilities in the feature projects that need them

But without a requirements model to define what "the business needs" means, there's no principled way to split the shared library. So it stays. And grows.
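For illustration, here is what "domain value type in a SharedKernel project" could look like — a hypothetical Money type replacing CurrencyHelper's loose decimals. The name and design are assumptions, a sketch of the principle rather than a prescribed implementation:

```csharp
using System;

// A SharedKernel value type: the business concept "an amount in a currency",
// carrying its own invariants, instead of helper methods on bare decimals.
public readonly record struct Money(decimal Amount, string Currency)
{
    public static Money Of(decimal amount, string currency)
    {
        if (string.IsNullOrWhiteSpace(currency) || currency.Length != 3)
            throw new ArgumentException("ISO 4217 code expected", nameof(currency));
        return new Money(amount, currency.ToUpperInvariant());
    }

    // The invariant "never add euros to dollars" lives in the type,
    // not in a CurrencyHelper every caller must remember to use.
    public Money Add(Money other) =>
        Currency == other.Currency
            ? this with { Amount = Amount + other.Amount }
            : throw new InvalidOperationException(
                  $"Cannot add {other.Currency} to {Currency}");
}

public static class Demo
{
    public static void Main()
    {
        var subtotal = Money.Of(100m, "eur").Add(Money.Of(20m, "EUR"));
        Console.WriteLine($"{subtotal.Amount} {subtotal.Currency}");  // 120 EUR
    }
}
```

The difference from the grab-bag is ownership: this type exists because the business handles money, not because several projects happened to need the same regex.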


The Testing Illusion

Physical-only monorepos have tests. Sometimes lots of tests. But the tests have the same structural problem as the code: they're organized by technical layer, not by business feature.

MegaCorp.Core.Tests/
├── Orders/
│   ├── OrderServiceTests.cs           ← Tests OrderService
│   ├── OrderServiceV2Tests.cs         ← Tests OrderServiceV2 (which is active?)
│   ├── OrderValidatorTests.cs         ← Tests OrderValidator
│   └── OrderPricingEngineTests.cs     ← Tests PricingEngine
├── Payments/
│   ├── PaymentProcessorTests.cs
│   └── RefundServiceTests.cs
└── Users/
    ├── UserServiceTests.cs
    └── AuthorizationServiceTests.cs

The tests mirror the implementation structure. OrderServiceTests tests OrderService. That's all the test knows. It doesn't know:

  • Which feature OrderService implements
  • Which acceptance criteria the tests cover
  • Whether all ACs for "Order Processing" are tested somewhere
  • Whether the test for OrderPricingEngine overlaps with the test for BulkDiscountCalculator in a different test project

Here is what that looks like in practice:

// MegaCorp.Core.Tests/Orders/OrderServiceTests.cs
[TestFixture]
public class OrderServiceTests
{
    [Test]
    public async Task ProcessOrder_ValidInput_ReturnsSuccess()
    {
        // This test verifies... what exactly?
        // Which business requirement?
        // Which acceptance criterion?
        // If this test passes, what can we tell the PM?
        // "Your order processing works" — but which PART works?
    }

    [Test]
    public async Task ProcessOrder_NegativeTotal_ReturnsFailed()
    {
        // Is this AC-1? AC-3? Is there even a numbered AC?
        // The PM said "orders with negative totals must be rejected"
        // This test does that. But there's no compile-time link.
        // If the PM removes this AC, this test keeps running.
        // If the PM adds AC-4, no test is created.
        // The tests and requirements drift apart silently.
    }

    [Test]
    public async Task ProcessOrder_InsufficientStock_ReturnsFailed()
    {
        // Is this the same AC as the inventory test in
        // MegaCorp.InventoryService.Tests? Or a different one?
        // Nobody knows. The tests reference classes, not requirements.
    }
}

The testing structure mirrors the physical structure. Tests are in projects that match implementation projects. Test methods test implementation methods. There is no mapping from tests to requirements, no coverage analysis at the feature level, no way to ask: "Are all acceptance criteria for Order Processing tested?"

The testing is physically complete (high code coverage) but logically incomplete (no requirement coverage). You can have 95% line coverage and 0% acceptance criteria coverage. The metrics look great. The requirements are unverified.

The Mocking Trap

The ServiceLocator problem compounds in tests. When you test OrderService, you need to mock all 9 dependencies. But the mocking setup doesn't validate that the mocks match reality:

// MegaCorp.Core.Tests/Orders/OrderServiceTests.cs
[TestFixture]
public class OrderServiceTests
{
    private Mock<IServiceProvider> _mockServiceProvider;
    private Mock<IOrderValidator> _mockValidator;
    private Mock<IInventoryChecker> _mockInventory;
    private Mock<IPricingEngine> _mockPricing;
    private Mock<IInventoryReserver> _mockReserver;
    private Mock<IPaymentProcessor> _mockPayment;
    private Mock<IOrderRepository> _mockRepository;
    private Mock<IEventBus> _mockEventBus;
    private Mock<INotificationSender> _mockNotifier;
    private Mock<IAuditLogger> _mockAudit;

    [SetUp]
    public void Setup()
    {
        _mockValidator = new Mock<IOrderValidator>();
        _mockInventory = new Mock<IInventoryChecker>();
        _mockPricing = new Mock<IPricingEngine>();
        _mockReserver = new Mock<IInventoryReserver>();
        _mockPayment = new Mock<IPaymentProcessor>();
        _mockRepository = new Mock<IOrderRepository>();
        _mockEventBus = new Mock<IEventBus>();
        _mockNotifier = new Mock<INotificationSender>();
        _mockAudit = new Mock<IAuditLogger>();

        // Build a mock IServiceProvider that returns each mock
        _mockServiceProvider = new Mock<IServiceProvider>();
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IOrderValidator)))
            .Returns(_mockValidator.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IInventoryChecker)))
            .Returns(_mockInventory.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IPricingEngine)))
            .Returns(_mockPricing.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IInventoryReserver)))
            .Returns(_mockReserver.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IPaymentProcessor)))
            .Returns(_mockPayment.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IOrderRepository)))
            .Returns(_mockRepository.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IEventBus)))
            .Returns(_mockEventBus.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(INotificationSender)))
            .Returns(_mockNotifier.Object);
        _mockServiceProvider
            .Setup(sp => sp.GetService(typeof(IAuditLogger)))
            .Returns(_mockAudit.Object);

        // Initialize the static ServiceLocator with the mock provider
        ServiceLocator.Reset();
        ServiceLocator.Initialize(_mockServiceProvider.Object);
    }

    [TearDown]
    public void TearDown()
    {
        ServiceLocator.Reset(); // Clean up static state between tests
    }

    [Test]
    public async Task ProcessOrder_ValidInput_ReturnsSuccess()
    {
        // Arrange: configure 9 mocks to return happy-path values
        _mockValidator.Setup(v => v.Validate(It.IsAny<CreateOrderCommand>()))
            .Returns(ValidationResult.Success());
        _mockInventory.Setup(i => i.CheckAvailability(It.IsAny<IEnumerable<StockQuery>>()))
            .ReturnsAsync(StockResult.AllAvailable(/* mock reservations */));
        _mockPricing.Setup(p => p.Calculate(It.IsAny<CreateOrderCommand>(), It.IsAny<object>()))
            .Returns(new PricedOrder { Total = 100m, Currency = "EUR" });
        _mockReserver.Setup(r => r.Reserve(It.IsAny<object>()))
            .ReturnsAsync(Guid.NewGuid());
        _mockPayment.Setup(p => p.Capture(It.IsAny<PaymentRequest>()))
            .ReturnsAsync(PaymentResult.Success("txn_123"));
        _mockRepository.Setup(r => r.Save(It.IsAny<Order>()))
            .Returns(Task.CompletedTask);
        _mockEventBus.Setup(e => e.Publish(It.IsAny<OrderCreatedEvent>()))
            .Returns(Task.CompletedTask);
        _mockNotifier.Setup(n => n.Send(It.IsAny<OrderConfirmationNotification>()))
            .Returns(Task.CompletedTask);

        var service = new OrderService();
        var command = CreateValidOrderCommand();

        // Act
        var result = await service.ProcessOrder(command);

        // Assert
        Assert.That(result.IsSuccess, Is.True);
    }
}

90 lines of setup to test a single method. And this test has several problems:

  1. The mock setup IS the specification — but it's informal, scattered across test methods, and not linked to any requirement. The mock for _mockPayment.Setup(p => p.Capture(...)).ReturnsAsync(PaymentResult.Success("txn_123")) implicitly encodes AC-3 ("payment is captured after reservation"), but there's no explicit declaration of this.

  2. Mocks can drift from reality. If IPaymentProcessor.Capture changes its behavior in production (e.g., it now returns a different error format), the mock still returns the old format. The test passes. Production breaks.

  3. Static state leaks. The ServiceLocator.Reset() in TearDown is essential — without it, test parallelism breaks because all tests share the same static ServiceLocator. This is a maintenance landmine.

  4. No coverage mapping. This test covers ProcessOrder — but which of the 6 acceptance criteria does it verify? The happy path? All of them? Some of them? The test framework doesn't know. The developer doesn't know. The PM definitely doesn't know.

Compare this to a test with logical boundaries:

// With Requirements as Code — the test is 15 lines, not 90
[Test]
[Verifies(typeof(OrderProcessingFeature),
    nameof(OrderProcessingFeature.PaymentCapturedAfterReservation))]
public async Task Payment_captured_only_after_reservation()
{
    // Arrange: use the real spec interface, not 9 mocks
    var spec = new OrderService(inventoryRepo, paymentGateway, orderRepo);
    var order = CreateValidOrder();
    var reservation = await spec.ReserveInventory(order);

    // Act
    var result = await spec.CapturePayment(order, reservation, /* payment context */);

    // Assert
    Assert.That(result.IsSuccess, Is.True);
}

The [Verifies] attribute creates a compile-time link from this test to AC-3. If the AC is renamed, the test gets a compile error. If the AC is deleted, the test gets a compile error. The Roslyn analyzer knows that AC-3 has exactly this test covering it. The traceability matrix reports it. The PM can ask "is AC-3 tested?" and get a compiler-verified answer.


The Runtime Discovery Problem

In a physical-only monorepo, understanding the system requires runtime discovery — you must run the application (or at least the DI container) to understand which concrete types implement which interfaces. This creates a class of bugs and developer experiences that don't exist when boundaries are logical.

Registration Errors

The most common runtime discovery failure: a missing DI registration.

// A developer adds a new service
public class ShippingEstimator : IShippingEstimator
{
    private readonly IShippingRateProvider _rateProvider;
    public ShippingEstimator(IShippingRateProvider rateProvider)
        => _rateProvider = rateProvider;

    public async Task<ShippingEstimate> Estimate(Order order) { ... }
}

// They register it in Program.cs
services.AddScoped<IShippingEstimator, ShippingEstimator>();

// But they forget to register the dependency
// services.AddScoped<IShippingRateProvider, FedExRateProvider>();

// The solution compiles. All tests pass (they mock IShippingRateProvider).
// In production, the first request to estimate shipping:
//
// System.InvalidOperationException:
//   Unable to resolve service for type 'IShippingRateProvider'
//   while attempting to activate 'ShippingEstimator'.

This error is discovered at runtime, in production, on a customer-facing request. The compiler could not catch it because the DI registration is a runtime configuration, not a compile-time contract.

With logical boundaries, the Roslyn analyzer scans the DI registrations and the interface implementations at compile time. If IShippingEstimator is part of a specification interface, the analyzer verifies that all dependencies are resolvable.

Lifetime Mismatches

Another runtime discovery failure: a scoped service injected into a singleton.

// Program.cs
services.AddSingleton<IBackgroundOrderProcessor, BackgroundOrderProcessor>();
services.AddScoped<IOrderRepository, EfOrderRepository>();

// BackgroundOrderProcessor receives IServiceProvider (Pattern B)
// and resolves IOrderRepository from the root scope.
// EfOrderRepository holds an EF DbContext — which is scoped.
// The DbContext is disposed after the first request scope ends.
// The singleton BackgroundOrderProcessor holds a reference to a disposed DbContext.
// Every subsequent order processing call throws ObjectDisposedException.

This bug is invisible at compile time, invisible in tests (which create fresh scopes), and surfaces only under production load when the singleton is reused across multiple requests. The physical boundaries offer no protection — the DI container doesn't validate lifetime compatibility at registration time.
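Both failures can at least be surfaced at startup rather than on a customer request. Microsoft.Extensions.DependencyInjection offers ValidateOnBuild (walks every registration's constructor graph when the provider is built) and ValidateScopes (rejects resolving scoped services from the root scope). This is a partial mitigation, not a logical boundary — it sees constructor-injected graphs, not ServiceLocator hops. A sketch using the hypothetical shipping types from above:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IShippingRateProvider { }
public interface IShippingEstimator { }

public class ShippingEstimator : IShippingEstimator
{
    public ShippingEstimator(IShippingRateProvider rateProvider) { }
}

public static class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<IShippingEstimator, ShippingEstimator>();
        // IShippingRateProvider registration forgotten, as in the example above.

        try
        {
            // ValidateOnBuild checks every registration NOW, at startup.
            services.BuildServiceProvider(new ServiceProviderOptions
            {
                ValidateOnBuild = true,
                ValidateScopes = true
            });
        }
        catch (AggregateException ex)
        {
            // Fails at startup, not on the first customer-facing request:
            // "Unable to resolve service for type 'IShippingRateProvider' ..."
            Console.WriteLine(ex.InnerException?.Message);
        }
    }
}
```

ASP.NET Core enables scope validation by default in the Development environment only, which is one reason the disposed-DbContext bug so often survives into production.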

The Phantom Dependency Chain

The most insidious runtime discovery problem: a dependency chain that crosses project boundaries through ServiceLocator, creating a coupling that no static analysis tool can find.

// MegaCorp.OrderService/OrderService.cs
public class OrderService
{
    public async Task<OrderResult> ProcessOrder(CreateOrderCommand cmd)
    {
        var pricing = ServiceLocator.GetService<IPricingEngine>();
        var pricedOrder = pricing.Calculate(cmd);
        // ...
    }
}

// MegaCorp.Core/PricingEngine.cs
public class PricingEngine : IPricingEngine
{
    public PricedOrder Calculate(CreateOrderCommand cmd)
    {
        // Pricing needs tax rates — resolved at runtime
        var taxService = ServiceLocator.GetService<ITaxRateService>();
        var taxRate = taxService.GetRate(cmd.ShippingAddress.Country);
        // ...
    }
}

// MegaCorp.BillingService/TaxRateService.cs
public class TaxRateService : ITaxRateService
{
    public decimal GetRate(string country)
    {
        // Tax rates come from a database table
        var repo = ServiceLocator.GetService<ITaxRateRepository>();
        return repo.GetCurrentRate(country);
        // ...
    }
}

// MegaCorp.Data/TaxRateRepository.cs
public class TaxRateRepository : ITaxRateRepository
{
    public decimal GetCurrentRate(string country)
    {
        // Queries the tax_rates table
        // ...
    }
}

The dependency chain is:

OrderService → PricingEngine → TaxRateService → TaxRateRepository → Database

This chain spans 4 projects. But the compile-time dependency graph shows:

MegaCorp.OrderService → (nothing related to tax)
MegaCorp.Core → (nothing related to billing)
MegaCorp.BillingService → (nothing related to orders)

The chain is invisible. A developer changing the tax_rates table schema has no compile-time signal that this affects order pricing. A developer removing TaxRateService sees no compile error in OrderService or PricingEngine. The dependency exists only at runtime, through three ServiceLocator.GetService<> hops.

This is not an edge case. In a 50-project monorepo with 520 hidden dependencies, phantom chains like this are the norm, not the exception. Every ServiceLocator call creates a potential phantom chain. Every phantom chain is a potential production incident waiting for someone to change a table, remove a class, or modify an interface that "nothing depends on."
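There is no compile-time fix without logical boundaries, but the phantom edges can at least be inventoried textually. A crude sketch — the pattern and repo layout are assumptions; adjust for your codebase:

```shell
# Count ServiceLocator resolution sites per top-level project directory.
# Each hit is a hidden dependency edge that no <ProjectReference> records.
grep -rn --include='*.cs' 'ServiceLocator.GetService<' . \
  | cut -d/ -f2 \
  | sort | uniq -c | sort -rn
```

A one-time run of this in a gravity-well monorepo is usually sobering: the counts approximate how many phantom edges each project contributes to the invisible graph.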


The Industry's Attempted Solutions

The industry has tried to solve the physical-only boundary problem:

Solution 1: Microservices

"If the monorepo is unmaintainable, split it into microservices."

This doesn't solve the boundary problem — it moves it. Instead of ServiceLocator calls within a process, you have HTTP/gRPC calls between processes. The hidden dependencies are still hidden — they're just network calls now. The phantom chains still exist — they cross service boundaries instead of project boundaries.

And you've added: network latency, distributed transactions, eventual consistency, service discovery, circuit breakers, deployment orchestration, and 15 Kubernetes manifests. The boundary problem is the same. The operational complexity is 10x.

Solution 2: Architecture Tests (ArchUnit, NetArchTest)

// Using NetArchTest
var result = Types.InAssembly(typeof(OrderService).Assembly)
    .That().ResideInNamespace("MegaCorp.Core.Orders")
    .ShouldNot().HaveDependencyOn("MegaCorp.PaymentGateway")
    .GetResult();

Architecture tests verify that certain dependency rules hold. They're useful for preventing layering violations. But they answer "does this code follow the dependency rules?" — not "does this code implement the Order Processing feature?" They enforce physical constraints, not logical ones.

Solution 3: Domain-Driven Design (Strategic)

DDD advocates for bounded contexts — logical boundaries around domain concepts. This is the right idea. But in practice, DDD bounded contexts in .NET are implemented as... projects. Physical boundaries. The logical concept (bounded context) is mapped to a physical artifact (DLL), and the mapping is informal — it exists in the developer's mental model, not in the compiler.

The Requirements as Code approach makes the bounded context formal: a Feature type IS the bounded context. The compiler enforces it. The IDE navigates it. The mapping from logical concept to physical artifact is explicit, typed, and compiler-checked.

Solution 4: Module Systems (Java JPMS, .NET InternalsVisibleTo)

Module systems provide finer-grained access control than projects. InternalsVisibleTo lets specific assemblies access internal types. Java's JPMS declares explicit module boundaries with exports and requires.

These are better physical boundaries. They still don't provide logical boundaries. A module can declare "I export these types" — but not "I implement these business requirements" or "I satisfy these acceptance criteria."


A Taxonomy of Boundaries

To summarize everything we've seen, here is a taxonomy of boundary types in software systems:

Boundary Type Example Enforced By Answers Present in Industrial Monorepo?
File OrderService.cs File system "Which file has this code?" Yes
Folder Orders/ File system (convention) "Which folder has this file?" Yes
Namespace MegaCorp.Core.Orders C# name resolution "What's the fully qualified name?" Yes
Project/DLL MegaCorp.Core.csproj MSBuild / compiler "Which compilation unit?" Yes
Solution MegaCorp.sln IDE "Which projects are grouped?" Yes
Package MegaCorp.Core.nupkg NuGet / versioning "Which distributable unit?" Sometimes
Container Docker image OS / runtime "Which process isolation?" Sometimes
Feature OrderProcessingFeature type C# type system "Which business capability?" NO
Specification IOrderProcessingSpec interface C# compiler "What contract must be satisfied?" NO
AC → Test [Verifies(typeof, nameof)] Roslyn analyzer "Which test covers which criterion?" NO
Requirement → Code [ForRequirement(typeof)] Roslyn analyzer "Which code implements which requirement?" NO

The bottom four rows are logical boundaries. They exist in the Requirements as Code architecture. They do not exist in any typical industrial monorepo. That's the gap. That's what's missing. That's what the rest of this series is about.


The Missing Layer: Logical Boundaries

A logical boundary is a separation enforced by the type system, not the file system. It answers the question: what business capability does this code implement, and what contract does it satisfy?

Here is what a logical boundary looks like compared to a physical one:

Aspect Physical Boundary (.csproj) Logical Boundary (type system)
Enforced by MSBuild / compiler visibility C# type system / interface contracts
Granularity Per project (coarse) Per feature / per acceptance criterion (fine)
Answers "Where does this code live?" "What does this code implement?"
Refactor-safe Project rename breaks paths typeof() / nameof() survives refactoring
IDE support Solution Explorer tree "Find All References" on a Feature type
Compile-time enforcement Visibility (public/internal) Contract satisfaction (: ISpec)
Traceability Grep for project name [ForRequirement(typeof(Feature))] → type chain
Cross-project <ProjectReference> Interface implementation across projects
Discoverable Read .sln file "Find All References" on requirement type

A logical boundary is not a folder, not a namespace, not a project. It is a type — a class, an interface, an abstract record — that declares: "This is what the business requires, and the compiler will refuse to build until it's satisfied."
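A minimal sketch of such a type, using the hypothetical names from this series (a toy: the real architecture layers a Roslyn analyzer on top, but the core enforcement below is plain C#):

```csharp
using System;

// The feature as a type: acceptance criteria are named members,
// so requirement references survive rename refactorings via nameof().
public static class OrderProcessingFeature
{
    public const string OrderTotalMustBePositive =
        "AC-1: orders with non-positive totals are rejected";
    public const string InventoryReservedBeforePayment =
        "AC-2: inventory is reserved before payment is captured";
}

// The specification interface is the logical boundary.
public interface IOrderProcessingSpec
{
    bool ValidateOrderTotal(decimal total);  // ← AC-1
    bool ReserveInventory(Guid orderId);     // ← AC-2
}

// Delete ReserveInventory below and this file stops compiling:
// CS0535: 'OrderService' does not implement interface member
// 'IOrderProcessingSpec.ReserveInventory(Guid)'.
public class OrderService : IOrderProcessingSpec
{
    public bool ValidateOrderTotal(decimal total) => total > 0m;
    public bool ReserveInventory(Guid orderId) => orderId != Guid.Empty;
}

public static class Demo
{
    public static void Main()
    {
        IOrderProcessingSpec spec = new OrderService();
        Console.WriteLine(spec.ValidateOrderTotal(100m));  // True
    }
}
```

The enforcement is ordinary interface implementation — but pointed at a requirement-shaped contract instead of an implementation-shaped one.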

Diagram: the same dependency graph drawn twice — with physical boundaries only (top), and with physical plus logical boundaries (bottom).

The top graph has physical boundaries. The compiler enforces visibility — who can see whom. But it enforces nothing about what the code does or which business requirement it satisfies.

The bottom graph has physical AND logical boundaries. The compiler enforces both visibility and contracts. MegaCorp.OrderService doesn't just "reference" MegaCorp.Specifications — it implements IOrderProcessingSpec, and the compiler refuses to build until every method on that interface is satisfied. The specification interface is the logical boundary. The <ProjectReference> is the physical one. Together, they create real architecture.


The Cost of Physical-Only Boundaries

What does it cost, concretely, to have 50 DLLs and no logical boundaries?

Cost 1: The "Which Code?" Problem

New developer: "I need to add a discount validation to order processing. Where does that go?"

Senior developer: "It depends. The order validation is in MegaCorp.Core/Orders/OrderValidator.cs, but pricing including discounts is in MegaCorp.Core/Orders/OrderPricingEngine.cs, except for coupon discounts which are in MegaCorp.Core/Promotions/CouponEngine.cs, and bulk discounts are calculated in MegaCorp.BillingService/BulkDiscountCalculator.cs because someone put them there during the billing refactor."

New developer: "How do I know if my change is correct? What are the acceptance criteria?"

Senior developer: "Check Jira. I think it's MEGA-4521. Or maybe MEGA-5102. The PM changed the requirements in a comment on the original ticket."

This conversation happens every sprint. It costs 30 minutes to 2 hours per instance. In a team of 15, it happens 3-5 times per week. That's 5-10 hours per week of "where does this code live?" — time that produces zero business value.

Cost 2: The "What Broke?" Problem

A developer changes OrderPricingEngine.cs to fix a rounding issue. The change is correct for the order processing flow. But BulkDiscountCalculator.cs in MegaCorp.BillingService also calls OrderPricingEngine through the ServiceLocator — and the rounding change breaks bulk invoice calculations. The developer didn't know about this dependency because:

  1. MegaCorp.BillingService does not have a <ProjectReference> to MegaCorp.Core. The call goes through ServiceLocator.GetService<IPricingEngine>().
  2. There is no compile-time link between "order pricing" and "billing pricing." They share a class through runtime resolution.
  3. The tests for BillingService mock IPricingEngine, so the test suite passes. The bug appears in production.

This is not hypothetical. This is the Tuesday afternoon incident report in every industrial monorepo.
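The mechanics of the hidden dependency can be sketched in a few lines. The ServiceLocator here is a toy static registry, and the IPricingEngine and BulkDiscountCalculator bodies are illustrative; only the coupling pattern comes from the scenario above.

```csharp
using System;
using System.Collections.Generic;

// Toy static locator in the style the post describes.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Registry = new();
    public static void Register<T>(T service) => Registry[typeof(T)] = service!;
    public static T GetService<T>() => (T)Registry[typeof(T)];
}

public interface IPricingEngine { decimal Price(decimal subtotal); }

public class PricingEngine : IPricingEngine
{
    // The "rounding fix" lives here; every runtime caller inherits it.
    public decimal Price(decimal subtotal) => Math.Round(subtotal, 2);
}

// Hidden coupling: no constructor parameter, no <ProjectReference>,
// nothing in this type's surface reveals the IPricingEngine dependency.
public class BulkDiscountCalculator
{
    public decimal Total(decimal subtotal)
        => ServiceLocator.GetService<IPricingEngine>().Price(subtotal) * 0.9m;
}

public static class Program
{
    public static void Main()
    {
        ServiceLocator.Register<IPricingEngine>(new PricingEngine());
        Console.WriteLine(new BulkDiscountCalculator().Total(100m));
    }
}
```

Nothing in BulkDiscountCalculator's compile-time signature changes if PricingEngine changes, which is exactly why the rounding fix ships without anyone noticing the second caller.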

Cost 3: The "Dead Code?" Problem

After 10 years, MegaCorp has:

  • OrderService.cs and OrderServiceV2.cs — which one is active?
  • LegacyOrderHandler.cs — is this still called? By what?
  • OrderProcessor.cs — is this the new one or the old one?
  • IOrderService, IOrderProcessor, IOrderHandler, IOrderCommandHandler — four interfaces for the same concept?

Nobody deletes anything because nobody knows what's still in use. The ServiceLocator makes it impossible to determine — a ServiceLocator.GetService<IOrderHandler>() call could be hiding in any file in any project. Removing IOrderHandler might break a runtime resolution in a background worker that only runs on the first Monday of each month.

The codebase grows. The dead code accumulates. The cognitive load increases. The build time increases. Nobody can clean up because nobody can prove what's unused.

Cost 4: The "Onboarding Wall" Problem

A new developer's first week at MegaCorp:

Day Activity Value Produced
Day 1 Open solution, wait 40s. Read the README (last updated 2021). Zero
Day 2 Ask "what's the architecture?" Get a whiteboard session. Whiteboard doesn't match code. Zero
Day 3 Try to understand order processing. Find 4 OrderService variants. Ask which is active. Zero
Day 4 Start reading OrderService.ProcessOrder. Discover ServiceLocator calls. Trace each one manually. Zero
Day 5 Finally find the Jira ticket. Read the requirements. Realize they don't match the code. Start coding. Marginal

Five days before a new developer can make a meaningful contribution. With logical boundaries (requirements as types, specifications as interfaces), the same onboarding looks like:

Day Activity Value Produced
Day 1 Open solution. Navigate to MegaCorp.Requirements. See all features as types. Understanding
Day 2 Ctrl+Click OrderProcessingFeature. See the ACs. "Find All References" → see which services implement it, which tests verify it. Full context
Day 3 Start coding. The compiler tells them if they break a spec. Code shipped

Two days of ramp-up instead of five, with code shipped on day three. That's the difference between physical and logical boundaries.


The DLL Boundary Mismatch

Here is the fundamental mismatch, visualized:

Diagram

The PM thinks there are 5 features. The build system thinks there are 9 DLLs. The features and the DLLs have a many-to-many relationship. "Order Processing" spans 8 DLLs. MegaCorp.Core.dll contains parts of all 5 features. There is no 1:1 mapping between business features and physical artifacts.

This is the mismatch. Physical boundaries cut along technical layers (Web, Core, Data, Infrastructure). Business features cut across those layers. A feature like "Order Processing" has a controller (Web), business logic (Core), persistence (Data), payment integration (PaymentGateway), stock management (InventoryService), email confirmation (NotificationService), async fulfillment (Worker), and invoice generation (BillingService). The DLL structure doesn't express this. It can't express this. That's not what DLLs are for.

DLLs are packaging. Features are architecture. They're orthogonal concepts. Confusing them — treating DLL boundaries as feature boundaries — is the structural error that makes industrial monorepos unmaintainable.


The Cross-Cutting Feature Problem

The mismatch isn't just academic. It creates a specific class of bugs and organizational failures that are unique to physical-only monorepos.

Scenario: The Cross-Team Feature Change

The PM requests a change to Order Processing: "When an order is placed, the customer's loyalty points should be credited."

This is a new acceptance criterion. In a requirements-as-code system, the developer adds an abstract method to OrderProcessingFeature:

/// AC-7: Loyalty points are credited after successful order.
public abstract AcceptanceCriterionResult LoyaltyPointsCredited(
    Order order, CustomerId customer, int pointsEarned);

The build breaks in every project that implements IOrderProcessingSpec until the new AC is satisfied. Teams are notified by the compiler. The change is coordinated structurally.
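A minimal compile-time version of that coordination, with the post's AcceptanceCriterionResult, Order, and CustomerId replaced by simplified stand-ins:

```csharp
using System;

// Simplified stand-ins for the post's requirement types.
public record Order(decimal Total);
public record CustomerId(Guid Value);
public record AcceptanceCriterionResult(bool Satisfied, string Detail);

public abstract class OrderProcessingFeature
{
    /// AC-7: Loyalty points are credited after successful order.
    public abstract AcceptanceCriterionResult LoyaltyPointsCredited(
        Order order, CustomerId customer, int pointsEarned);
}

// Every concrete implementation must override the new AC, or the build
// fails with CS0534: the type does not implement the inherited abstract member.
public class OrderTeamImplementation : OrderProcessingFeature
{
    public override AcceptanceCriterionResult LoyaltyPointsCredited(
        Order order, CustomerId customer, int pointsEarned)
        => new(pointsEarned >= 0, $"Credited {pointsEarned} points");
}

public static class Program
{
    public static void Main()
    {
        var result = new OrderTeamImplementation().LoyaltyPointsCredited(
            new Order(99.90m), new CustomerId(Guid.NewGuid()), 10);
        Console.WriteLine(result.Satisfied); // True
    }
}
```

Adding the abstract method is the entire coordination mechanism: no ticket routing, no Slack thread, just a red build in every project that hasn't implemented the new AC yet.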

In the physical-only monorepo, here's what happens:

  1. PM creates Jira ticket MEGA-6001: "Credit loyalty points after order placement."
  2. Team A (owns MegaCorp.Core) adds loyalty point calculation to OrderService.ProcessOrder().
  3. Team A resolves ILoyaltyService via ServiceLocator. They don't know who owns ILoyaltyService.
  4. Team B (owns MegaCorp.UserService) implements LoyaltyService : ILoyaltyService but puts it in MegaCorp.UserService because "loyalty is a user concept."
  5. Team C (owns MegaCorp.BillingService) needs loyalty point adjustments for refunds. They don't know Team B already created LoyaltyService. They create BillingLoyaltyCalculator in MegaCorp.BillingService.
  6. Team D (owns MegaCorp.NotificationService) needs to send "You earned X points!" emails. They reference ILoyaltyService via ServiceLocator, not knowing there are now two implementations.
  7. QA writes tests for the happy path. Nobody tests the refund path because nobody knows BillingLoyaltyCalculator exists.

Three months later, a customer reports: "I placed an order, got refunded, but my loyalty points weren't deducted." The bug exists because:

  • Team A, B, C, and D each implemented their part correctly
  • Nobody knew about the other teams' implementations
  • The physical boundaries gave no signal that "loyalty points" was a cross-cutting concern touching 4 teams
  • The Jira ticket was assigned to Team A. Teams B, C, and D added their changes as "related work" without linking to the original ticket.

With logical boundaries: The AC LoyaltyPointsCredited is an abstract method on OrderProcessingFeature. The IOrderProcessingSpec interface includes a method for it. Every team that implements a spec touching loyalty points is forced by the compiler to acknowledge and implement this AC. The Roslyn analyzer reports which teams have implemented it and which haven't. The build doesn't pass until everyone has.

Scenario: The Silent Contract Change

A developer in Team A changes the return type of IPricingEngine.Calculate:

// Before:
public interface IPricingEngine
{
    PricedOrder Calculate(CreateOrderCommand command, IReadOnlyList<Reservation> reservations);
}

// After (Team A change):
public interface IPricingEngine
{
    PricedOrder Calculate(CreateOrderCommand command, IReadOnlyList<Reservation> reservations,
        DiscountContext? discountContext = null);  // New optional parameter
}

The optional parameter means existing callers compile fine, so from their perspective nothing changed (Team A updates the one implementation in the same commit). But BulkDiscountCalculator in MegaCorp.BillingService calls IPricingEngine.Calculate and passes a DiscountContext that was previously handled separately. Now there are two code paths for discount application: the old one in BulkDiscountCalculator and the new one in PricingEngine. For some orders, discounts are applied twice.

The physical boundary didn't help: the interface change was source-compatible for every caller, so the compiler saw no error. The tests passed because they use mocks. The bug appeared in production when a specific combination of bulk order + coupon code triggered both discount paths.
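The double-application bug is easy to reproduce with a numeric sketch. All names and method bodies here are illustrative; only the optional-parameter shape comes from the interface above.

```csharp
using System;

public static class DoubleDiscountDemo
{
    // After the change: the engine itself applies the discount
    // when a context (here just a rate) is passed.
    public static decimal Calculate(decimal subtotal, decimal? discountRate = null)
        => discountRate is null ? subtotal : subtotal * (1 - discountRate.Value);

    // Old path in BulkDiscountCalculator: discount applied by the caller.
    public static decimal OldBulkTotal(decimal subtotal, decimal rate)
        => Calculate(subtotal) * (1 - rate);

    // Buggy path: the caller now forwards the context, but the old
    // caller-side discount was never removed, so it applies twice.
    public static decimal BuggyBulkTotal(decimal subtotal, decimal rate)
        => Calculate(subtotal, rate) * (1 - rate);

    public static void Main()
    {
        Console.WriteLine(OldBulkTotal(100m, 0.10m));   // 90.00
        Console.WriteLine(BuggyBulkTotal(100m, 0.10m)); // 81.0000: discounted twice
    }
}
```

Both paths are individually correct; the defect exists only in their combination, which no single team can see.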

With logical boundaries: The acceptance criterion OrderTotalCalculatedCorrectly has a specific signature. The specification interface IOrderProcessingSpec.CalculateTotal is the only contract. Changes to pricing go through the spec interface, not through an internal IPricingEngine that multiple teams happen to resolve via ServiceLocator. The specification is the contract. The contract is reviewed when it changes. The phantom dependency chain doesn't exist because the chain is explicit in the type system.

Scenario: The Compliance Audit

An auditor asks: "Show me all the code that handles payment processing, and prove that each acceptance criterion is tested."

In the physical-only monorepo:

Auditor: "Which code handles payment processing?"
Team: *spends 3 days compiling a spreadsheet*

The spreadsheet:
| File | Project | Description | Last Modified |
|------|---------|-------------|---------------|
| PaymentProcessor.cs | PaymentGateway | Main payment flow | 2024-03-15 |
| StripePaymentProcessor.cs | PaymentGateway | Stripe integration | 2024-02-28 |
| PaymentValidator.cs | Core | Payment validation | 2023-11-20 |
| OrderPaymentCapture.cs | PaymentGateway | Order payment capture | 2024-01-10 |
| RefundService.cs | Core | Refund processing | 2024-03-01 |
| BillingPaymentHandler.cs | BillingService | Invoice payments | 2023-09-15 |
| PaymentEventHandler.cs | EventBus | Payment events | 2023-12-01 |
| ... | ... | ... | ... |

Auditor: "Are all acceptance criteria tested?"
Team: "We have 87% code coverage."
Auditor: "I didn't ask about code coverage. I asked about acceptance criteria."
Team: *silence*

The team cannot answer the auditor's question because the acceptance criteria exist in Jira, not in the code. There is no compile-time mapping from "AC: Payment amount must match order total" to the code that enforces it or the test that verifies it.
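A sketch of what such a compile-time mapping could look like. The [Verifies] attribute shape is inferred from the post's typeof + nameof notation; the runtime existence check here stands in for what a Roslyn analyzer would report at build time.

```csharp
using System;
using System.Reflection;

// Hypothetical attribute: typeof keeps the feature reference refactor-safe,
// nameof keeps the AC name refactor-safe.
[AttributeUsage(AttributeTargets.Method)]
public sealed class VerifiesAttribute : Attribute
{
    public Type Feature { get; }
    public string AcceptanceCriterion { get; }

    public VerifiesAttribute(Type feature, string acceptanceCriterion)
    {
        // An analyzer would emit a diagnostic instead of throwing,
        // but the check is the same: the referenced AC must exist.
        if (feature.GetMethod(acceptanceCriterion) is null)
            throw new ArgumentException(
                $"{feature.Name} has no acceptance criterion '{acceptanceCriterion}'");
        (Feature, AcceptanceCriterion) = (feature, acceptanceCriterion);
    }
}

public abstract class PaymentProcessingFeature
{
    /// AC: Payment amount must match order total.
    public abstract bool PaymentAmountMatchesOrderTotal(decimal payment, decimal total);
}

public class PaymentTests
{
    [Verifies(typeof(PaymentProcessingFeature),
        nameof(PaymentProcessingFeature.PaymentAmountMatchesOrderTotal))]
    public void Amount_matches_order_total() { /* test body */ }
}

public static class Program
{
    public static void Main()
    {
        // Enumerate the links, the way a traceability report would.
        foreach (var m in typeof(PaymentTests).GetMethods())
            foreach (var v in m.GetCustomAttributes<VerifiesAttribute>())
                Console.WriteLine($"{m.Name} verifies {v.Feature.Name}.{v.AcceptanceCriterion}");
    }
}
```

Rename the AC and nameof forces the test annotation to follow; delete the AC and the link fails, which is exactly the property the auditor's question requires.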

With logical boundaries:

Auditor: "Which code handles payment processing?"
Team: *runs dotnet build → TraceabilityMatrix.g.cs*

Generated traceability:
  PaymentProcessingFeature (6 ACs)
  ├── AC-1: PaymentAmountMatchesOrderTotal
  │   ├── Spec: IPaymentProcessingSpec.ValidateAmount
  │   ├── Impl: StripePaymentProcessor.ValidateAmount [MegaCorp.PaymentGateway]
  │   └── Test: PaymentTests.Amount_matches_order_total [Verifies]
  ├── AC-2: RefundDoesNotExceedOriginal
  │   ├── Spec: IPaymentProcessingSpec.ValidateRefund
  │   ├── Impl: RefundService.ValidateRefund [MegaCorp.Core]
  │   └── Test: RefundTests.Refund_cannot_exceed_original [Verifies]
  ├── AC-3: PaymentFailureRollsBackOrder
  │   ├── Spec: IPaymentProcessingSpec.HandleFailure
  │   ├── Impl: OrderPaymentCapture.HandleFailure [MegaCorp.PaymentGateway]
  │   └── Test: PaymentTests.Failed_payment_rolls_back_order [Verifies]
  └── ... (3 more ACs, all with Spec → Impl → Test chain)

Coverage: 6/6 ACs implemented, 6/6 ACs tested, 100% requirement coverage.

Auditor: "Show me the test for AC-2."
Team: *Ctrl+Click on RefundTests.Refund_cannot_exceed_original*
Auditor: "Satisfactory."

The audit takes 5 minutes instead of 3 days. The answer is compiler-verified, not manually compiled. This is not a luxury — for regulated industries (medical devices under IEC 62304, automotive under ISO 26262, aviation under DO-178C), this level of traceability is a certification requirement.


What Needs to Change

The industrial monorepo doesn't need more DLLs. It doesn't need microservices. It doesn't need a rewrite. It needs a new kind of boundary — one that says:

  1. "This is a feature." — A type that represents the business capability, with acceptance criteria as abstract methods.

  2. "This is the contract." — An interface that specifies what the domain must implement for this feature. The compiler enforces it.

  3. "This code implements this feature." — An attribute or interface implementation that creates a compile-time link from code to requirement. Ctrl+Click navigable. Refactor-safe.

  4. "This test verifies this acceptance criterion." — A typed reference from test to requirement. The compiler checks that the referenced AC exists.

That's what a .Requirements project and a .Specifications project provide. They create logical boundaries — compiler-enforced, IDE-navigable, refactor-safe semantic boundaries that tell every developer, every tool, and every build pipeline exactly which code belongs to which business feature.

Physical boundaries stay. DLLs still separate compilation. <ProjectReference> still controls visibility. Docker containers still isolate processes. But they stop being the architecture. They become what they always were: packaging.

The architecture lives in the type system.


Case Study: Tracing "Order Processing" — Before and After

Let's make this concrete. A developer needs to understand the "Order Processing" feature. Here's the experience with physical-only boundaries versus physical + logical boundaries.

With Physical Boundaries Only

Step 1: Find the code.

# Developer searches for "Order" across the solution
grep -r "Order" --include="*.cs" -l | wc -l
# Result: 247 files mention "Order"

247 files. Most are irrelevant (OrderBy LINQ clauses, OrderDirection enums, reorder methods). The developer filters:

grep -r "class.*Order" --include="*.cs" -l
# Result: 34 files define classes with "Order" in the name

34 classes. Which ones are relevant to "Order Processing"? The developer must open each one and read it:

MegaCorp.Core/Orders/OrderService.cs              ← Probably relevant
MegaCorp.Core/Orders/OrderServiceV2.cs             ← Probably relevant? Which is active?
MegaCorp.Core/Orders/OrderValidator.cs             ← Relevant
MegaCorp.Core/Orders/OrderPricingEngine.cs         ← Relevant
MegaCorp.Core/Orders/LegacyOrderHandler.cs         ← Dead code? Active?
MegaCorp.Core/Orders/OrderEventHandler.cs          ← Relevant
MegaCorp.Core/Orders/OrderProcessor.cs             ← What's this? Different from OrderService?
MegaCorp.Data/Repositories/OrderRepository.cs      ← Relevant
MegaCorp.Data/ReadModels/OrderReadModel.cs         ← Maybe relevant
MegaCorp.Contracts/IOrderService.cs                ← The grab-bag interface
MegaCorp.Contracts/OrderDto.cs                     ← Data transfer
MegaCorp.Contracts/CreateOrderCommand.cs           ← Input model
MegaCorp.Contracts/OrderResult.cs                  ← Output model
MegaCorp.Contracts/OrderStatus.cs                  ← Enum
MegaCorp.Contracts/V2/OrderDtoV2.cs                ← New version? Old?
MegaCorp.Web/Controllers/OrderController.cs        ← API endpoint
MegaCorp.Web/Controllers/OrderAdminController.cs   ← Admin endpoint
MegaCorp.Worker/Handlers/OrderFulfillmentHandler.cs ← Background processing
MegaCorp.PaymentGateway/OrderPaymentCapture.cs     ← Payment integration
MegaCorp.InventoryService/OrderStockReserver.cs    ← Inventory integration
MegaCorp.NotificationService/OrderNotifier.cs      ← Email integration
MegaCorp.BillingService/OrderInvoiceGenerator.cs   ← Billing integration
MegaCorp.AuditService/Handlers/OrderAuditHandler.cs ← Audit logging
... and 11 more

Time spent: 45 minutes. The developer has a list of files but no understanding of which ones are critical, which are dead code, and which acceptance criteria they satisfy.

Step 2: Find the requirements.

The developer asks a team lead. The team lead says "check Jira." The developer searches Jira for "order processing":

  • MEGA-1205: "As a user, I want to place an order" (2019, Closed)
  • MEGA-1842: "Order processing improvements" (2020, Closed)
  • MEGA-2501: "Refactor order processing to V2" (2021, In Progress — abandoned)
  • MEGA-3209: "Order processing — add discount support" (2022, Closed)
  • MEGA-4521: "Order processing — negative total bug" (2023, Closed)
  • MEGA-5102: "Update order processing for new payment provider" (2024, In Review)

Six tickets over five years. The requirements evolved through comments, sub-tasks, and linked issues. The "acceptance criteria" field on MEGA-1205 says:

Given a valid order with positive total
When the user submits the order
Then the order is created and a confirmation email is sent

Given an order with negative total
When the user submits the order
Then the order is rejected with an error message

These criteria were written in 2019. Since then, inventory reservation, payment capture, bulk discounts, and audit logging were added. The ACs were never updated. They describe a fraction of what "order processing" actually does today.

Time spent: 30 minutes. The developer has outdated requirements and no link from requirements to code.

Step 3: Understand the call chain.

The developer opens OrderService.ProcessOrder() and traces the ServiceLocator calls. They draw a diagram on paper:

OrderController
  → OrderService.ProcessOrder()
    → ServiceLocator → IOrderValidator (MegaCorp.Core)
    → ServiceLocator → IInventoryChecker (MegaCorp.InventoryService)
    → ServiceLocator → IPricingEngine (MegaCorp.Core)
    → ServiceLocator → IInventoryReserver (MegaCorp.InventoryService)
    → ServiceLocator → IPaymentProcessor (MegaCorp.PaymentGateway)
    → ServiceLocator → IOrderRepository (MegaCorp.Data)
    → ServiceLocator → IEventBus (MegaCorp.EventBus.RabbitMQ)
    → ServiceLocator → INotificationSender (MegaCorp.NotificationService)
    → ServiceLocator → IAuditLogger (MegaCorp.AuditService)

Then they discover OrderServiceV2 has a similar but different call chain. And OrderProcessor has yet another one. Three implementations, overlapping but not identical. Which one is the production path?

// MegaCorp.Web/Controllers/OrderController.cs
public class OrderController : ControllerBase
{
    private readonly IServiceProvider _sp;

    public OrderController(IServiceProvider sp) => _sp = sp;

    [HttpPost]
    public async Task<IActionResult> CreateOrder([FromBody] CreateOrderCommand cmd)
    {
        // Which OrderService is resolved here?
        // The developer must check Startup.cs / Program.cs
        // to find the DI registration.
        var service = _sp.GetRequiredService<IOrderService>();
        var result = await service.ProcessOrder(cmd);
        return result.IsSuccess ? Ok(result) : BadRequest(result);
    }
}

The developer checks Program.cs:

// Line 247 of a 500-line Program.cs
services.AddScoped<IOrderService, OrderServiceV2>();
// Wait — why V2? Was V1 deprecated? Is OrderProcessor something else?
// A comment three lines above says:
// TODO: Switch back to OrderService when pricing bug is fixed (2022-03-15)
// It's now 2026. The TODO is 4 years old.

Time spent: 1.5 hours. The developer now understands the production call chain but still doesn't know which acceptance criteria exist or which tests cover them.

Total time to understand "Order Processing": ~2.5 hours. And this understanding is ephemeral — it exists only in the developer's head and on a paper diagram that will be thrown away.

With Physical + Logical Boundaries

Step 1: Find the feature.

// The developer opens MegaCorp.Requirements and types "Order"
// IDE autocomplete shows: OrderProcessingFeature
// Ctrl+Click → jumps to:

namespace MegaCorp.Requirements.Features;

public abstract record OrderProcessingFeature : Feature<ECommerceEpic>
{
    public override string Title => "Order Processing";
    public override RequirementPriority Priority => RequirementPriority.Critical;
    public override string Owner => "order-team";

    /// AC-1: Orders with negative or zero total are rejected before payment.
    public abstract AcceptanceCriterionResult OrderTotalMustBePositive(
        Order order);

    /// AC-2: Inventory is reserved before payment is captured.
    public abstract AcceptanceCriterionResult InventoryReservedBeforePayment(
        Order order, IReadOnlyList<StockReservation> reservations);

    /// AC-3: Payment is captured only after successful inventory reservation.
    public abstract AcceptanceCriterionResult PaymentCapturedAfterReservation(
        Order order, InventoryReservation reservation, PaymentResult payment);

    /// AC-4: Order confirmation is sent after successful payment.
    public abstract AcceptanceCriterionResult ConfirmationSentAfterPayment(
        Order order, PaymentResult payment, NotificationResult notification);

    /// AC-5: All order operations are recorded in the audit log.
    public abstract AcceptanceCriterionResult AllOperationsAudited(
        Order order, IReadOnlyList<AuditEntry> auditEntries);

    /// AC-6: Failed payments release inventory reservations.
    public abstract AcceptanceCriterionResult FailedPaymentReleasesInventory(
        Order order, PaymentResult failedPayment, InventoryReservation reservation);
}

Time spent: 30 seconds. The developer knows exactly what "Order Processing" means: 6 acceptance criteria, each with a typed signature that describes the inputs.

Step 2: Find the implementations.

Right-click OrderProcessingFeature → "Find All References":

OrderProcessingFeature                                    ← Definition
├── IOrderProcessingSpec                                  ← Specification interface
│   ├── .ValidateOrderTotal[ForRequirement(..., AC-1)]
│   ├── .ReserveInventory[ForRequirement(..., AC-2)]
│   ├── .CapturePayment[ForRequirement(..., AC-3)]
│   ├── .SendConfirmation[ForRequirement(..., AC-4)]
│   ├── .RecordAudit[ForRequirement(..., AC-5)]
│   └── .ReleaseOnFailure[ForRequirement(..., AC-6)]
├── OrderService : IOrderProcessingSpec                   ← MegaCorp.OrderService
├── PaymentCaptureService : IPaymentIntegrationSpec       ← MegaCorp.PaymentGateway
├── InventoryReserver : IInventoryIntegrationSpec          ← MegaCorp.InventoryService
├── OrderNotifier : INotificationIntegrationSpec           ← MegaCorp.NotificationService
└── OrderProcessingTests                                  ← [TestsFor(typeof(OrderProcessingFeature))]
    ├── .Negative_total_is_rejected[Verifies(..., AC-1)]
    ├── .Inventory_reserved_before_payment[Verifies(..., AC-2)]
    ├── .Payment_captured_after_reservation[Verifies(..., AC-3)]
    ├── .Confirmation_sent_after_payment[Verifies(..., AC-4)]
    ├── .All_operations_audited[Verifies(..., AC-5)]
    └── .Failed_payment_releases_inventory[Verifies(..., AC-6)]

Time spent: 10 seconds. One "Find All References" click. The developer sees every specification method, every implementation, every test — and which acceptance criterion each one covers.

Step 3: Understand the chain.

The chain IS the type system. The developer doesn't need to trace ServiceLocator calls, read Program.cs, or grep for class names. The compiler has already verified that:

  • IOrderProcessingSpec has 6 methods (one per AC)
  • OrderService implements IOrderProcessingSpec (compiler-enforced)
  • OrderProcessingTests has [Verifies] for all 6 ACs (analyzer-verified)

Total time to understand "Order Processing": ~1 minute. And this understanding is permanent — it's in the type system, verified by the compiler, and navigable in the IDE.

Aspect Physical Only Physical + Logical
Find the code 45 min (grep + read 34 files) 30 sec (Find All References)
Find the requirements 30 min (search Jira, read 6 tickets) 30 sec (read Feature type)
Understand the chain 1.5 hours (trace ServiceLocator calls) 10 sec (type hierarchy IS the chain)
Total ~2.5 hours ~1 minute
Confidence Low (outdated Jira, dead code, unclear call chain) High (compiler-verified, IDE-navigable)
Persistence In developer's head (lost when they switch context) In the type system (permanent)

The Fundamental Insight

Physical boundaries answer: "Where?"

  • Where does this code live? (In MegaCorp.Core.dll)
  • Where is this class? (In MegaCorp.Core/Orders/OrderService.cs)
  • Where is this project? (In src/MegaCorp.Core/)

Logical boundaries answer: "What?" and "Why?"

  • What business feature does this code implement? (Order Processing)
  • What acceptance criteria exist? (6, each with a typed signature)
  • What tests verify each criterion? ([Verifies] with typeof + nameof)
  • Why does this code exist? (To satisfy AC-3: PaymentCapturedAfterReservation)

Industrial monorepos have answers to "where?" everywhere. They have answers to "what?" and "why?" nowhere — except in the heads of developers who happen to remember, and in Jira tickets that nobody updates.

The Requirements as Code architecture puts the "what" and "why" into the type system. The compiler enforces them. The IDE navigates them. They survive developer turnover, team reorganizations, and the passage of time.

That's not a nice-to-have. That's the difference between a 50-project monorepo that works and a 50-project monorepo that nobody can maintain.


The Complete Picture: Physical vs Logical at Every Level

Here is every dimension of the comparison, from developer experience to organizational impact:

Developer Experience

Activity Physical Only Physical + Logical
"Where does this feature live?" Grep across 50 projects typeof(Feature) → Find All References
"What are the acceptance criteria?" Search Jira (maybe outdated) Read the Feature type (compiler-verified)
"Who implements this?" Trace ServiceLocator calls : ISpec implementations (compiler-enforced)
"What tests cover AC-3?" Unknown (tests reference classes, not ACs) [Verifies(typeof, nameof)] chain
"Is this code dead?" Unknown (might be resolved via ServiceLocator) No [ForRequirement] → compiler diagnostic
"Can I safely refactor this?" Run and pray Compiler tells you what breaks
"What's the impact of my change?" Unknown until runtime TraceabilityMatrix shows affected features

Build and CI

Aspect Physical Only Physical + Logical
Incremental build scope Physical dependency graph (over-broad) Feature-aware (precise)
Test selection "Test everything that compiled" "Test features affected by this change"
Coverage metric Line coverage (87% says nothing about requirements) Requirement coverage (6/6 ACs tested)
Build failure signal "Compilation error in file X" "Feature Y, AC-3 unsatisfied"
Quality gate "All tests pass" "All ACs implemented, specified, and tested"

Team and Organization

Aspect Physical Only Physical + Logical
Feature ownership Informal ("Team A owns the Orders folder") Explicit (owner property on Feature type)
Cross-team coordination Jira tickets and Slack messages Compiler errors when shared AC changes
Onboarding 3-5 days to understand one feature 1 minute: read Feature → Find All References
Knowledge retention In developers' heads (lost on turnover) In the type system (permanent)
Audit and compliance 3-day manual spreadsheet exercise dotnet build → TraceabilityMatrix.g.cs
PM visibility "Is Feature X done?" → "Let me check with the team" "Is Feature X done?" → check build report

Architecture and Maintenance

Aspect Physical Only Physical + Logical
Boundaries Physical (DLL, project, folder) Physical + Logical (Feature types, spec interfaces)
Coupling detection Static analysis on project references Static analysis on project references + requirement references
Dead code detection Unreliable (ServiceLocator hides references) Reliable (logical boundaries are compile-time)
Refactoring safety Low (hidden dependencies break at runtime) High (all dependencies are typed)
Documentation External (wiki, Jira, Confluence) In the code (Feature types ARE the documentation)
Architecture drift Invisible until production incident Visible as compiler errors

The Path Forward

This post has been diagnostic. We've dissected the industrial monorepo — its physical-only boundaries, its ServiceProvider god-objects, its hidden dependency chains, its phantom couplings. We've measured the cost: developer hours lost, bugs shipped, audits failed, onboarding slowed.

The next post is prescriptive. Part 3: Requirements and Specifications ARE Projects shows exactly how to add logical boundaries to a 50-project monorepo:

  • A .Requirements project where features are types and acceptance criteria are abstract methods
  • A .Specifications project where contracts are interfaces that the compiler enforces
  • The complete OrderProcessingFeature with 6+ ACs spanning 5 services
  • The typed chain from requirement to specification to implementation to test
  • The Roslyn analyzer that generates the traceability matrix, the compliance report, and the compiler diagnostics

Physical boundaries don't go away. DLLs still exist. Projects still exist. <ProjectReference> still controls visibility. But they stop pretending to be architecture. They become what they always were: packaging.

The architecture moves to where it belongs: the type system.


Summary

What We Covered Key Takeaway
Physical boundaries (DLLs, projects, folders) Packaging, not architecture. They answer "where?" not "what?"
ServiceLocator (static) The original god-object. Hides all dependencies behind a static class.
IServiceProvider injection The same god-object with a constructor parameter. Looks like DI, acts like ServiceLocator.
Hidden dependency graph 4.3x larger than the compile-time graph. 520 runtime resolutions vs. 120 project references.
DI registration (Program.cs) The real architecture. 500 lines of runtime routing. No compile-time verification.
Contract grab-bags Interfaces that mirror implementations, not requirements. They evolve with the code, not the business.
Testing illusion 95% code coverage, 0% requirement coverage. Tests reference classes, not acceptance criteria.
Runtime discovery Missing registrations, lifetime mismatches, phantom dependency chains — all invisible at compile time.
Industry solutions Microservices (move the problem), architecture tests (enforce physical rules), DDD (right idea, no compiler enforcement).
The DLL mismatch Features and DLLs have a many-to-many relationship. Features cut across layers. DLLs cut along layers.
Cross-cutting scenarios Cross-team features, silent contract changes, compliance audits — all broken by physical-only boundaries.
The gap Logical boundaries — Feature types, spec interfaces, requirement-to-code links — are missing from every industrial monorepo.

Previous: Part 1 — The Industrial Monorepo Nobody Planned

Next: Part 3 — Requirements and Specifications ARE Projects