
Performance, Caching & Read Models

A code generator's reputation lives or dies on the runtime characteristics of the code it emits. Generated code that compiles fast and runs slow is worse than hand-written code that compiles slow and runs fast — at least the hand-written version can be optimized in the spots that matter. The CMF takes the position that the generator must produce code whose default performance is acceptable, with explicit knobs for the cases where defaults are not enough. This part documents the knobs.

The four mechanisms covered: eager-load strategies for the generated repositories, EF query profiling baked into the build, [Cacheable] queries with domain-event invalidation, and materialized read models as separate aggregates fed by Stage 4 projection generators. Search indexing rounds out the chapter because it follows the same event-driven invalidation pattern.

The N+1 Problem, by Default

The first thing any generated repository must avoid is the lazy-loading trap. By default, the CMF generates repositories that disable EF Core lazy loading and instead emit explicit Include chains derived from the [Composition] and [Association] declarations. For an Order aggregate with Lines and ShippingAddress as compositions, the generated OrderRepository.GetAsync looks like this:

public async Task<Order?> GetAsync(OrderId id, CancellationToken ct = default)
{
    return await _ctx.Orders
        .AsNoTracking()                              // generated default for Get
        .Include(o => o.Lines)                       // [Composition] → eager
        .Include(o => o.ShippingAddress)             // [Composition] → eager
        .Where(o => EF.Property<OrderId>(o, "_id") == id)
        .FirstOrDefaultAsync(ct);
}

[Association] properties (e.g. CustomerId) do not get an Include because the foreign-key value is sufficient — the customer is loaded by a separate query if and when the caller asks for it. This is the right default for DDD: an aggregate boundary is a transactional boundary, not a query boundary, and crossing it should be explicit.
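Crossing the boundary then takes the form of two explicit queries. A sketch of what a hand-written handler might look like (the handler, DTO, and repository fields here are illustrative, not part of the generated surface):

```csharp
// Illustrative only: an [Association] target is loaded by a second, explicit query.
// _orders and _customers stand in for the generated repositories.
public async Task<OrderWithCustomerDto> HandleAsync(
    GetOrderWithCustomerQuery q, CancellationToken ct)
{
    var order    = await _orders.GetAsync(q.OrderId, ct);           // aggregate + compositions
    var customer = await _customers.GetAsync(order.CustomerId, ct); // explicit boundary crossing
    return new OrderWithCustomerDto(order, customer);
}
```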

Two analyzers police the result. CMF601 flags any hand-written LINQ over a [Composition] collection that does not include the parent's owned types (because that path would re-introduce lazy loading). CMF602 flags any Include chain that crosses an aggregate boundary, on the theory that aggregate-spanning includes are a smell that should become a separate query handler.
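For concreteness, this is the kind of hand-written chain CMF602 would flag (a hypothetical offender; Customer is a separate aggregate):

```csharp
// CMF602: aggregate-spanning Include — should become a separate query handler.
var orders = await _ctx.Orders
    .Include(o => o.Lines)       // fine: [Composition] inside the Order aggregate
    .Include(o => o.Customer)    // flagged: reaches into the Customer aggregate
    .ToListAsync(ct);
```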

EF Query Profiling, Baked Into the Build

The CMF ships with a build-time interceptor that runs every generated repository method against a throwaway Postgres instance (spun up via Testcontainers) and captures the executed SQL. The output lands in artifacts/reports/queries.md:

$ cmf report queries
  ✓ artifacts/reports/queries.md

  ## OrderRepository.GetAsync
  Plan cost:        2.34
  Estimated rows:   1
  Indexes used:     orders_pkey, order_lines_order_id_idx, shipping_addresses_order_id_idx
  Joins:            2 (LEFT OUTER JOIN order_lines, LEFT OUTER JOIN shipping_addresses)
  Status:           ✓ within budget (target < 10)

  ## OrderRepository.ListByCustomerAsync
  Plan cost:       142.7
  Estimated rows:  4,200
  Indexes used:    orders_customer_id_idx
  Joins:           2 (the unbounded Includes start to bite)
  Status:          ⚠ above budget (target < 50) — consider a [ProjectionFor] read model

The "budget" is configurable per repository method via [QueryBudget(MaxCost = 10)] on the partial method declaration. A query exceeding its budget emits a warning at build time, so a regression in query plan cost shows up in the same place as a unit-test failure. Importantly, this profiling runs against the generated SQL — there is no manual benchmarking of hand-tuned queries because the generator produces them all.
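Declaring a budget is a one-line attribute on the partial repository method. A sketch matching the ListByCustomerAsync report above (the partial-method shape follows the repository examples in this chapter):

```csharp
public partial class OrderRepository
{
    // Checked by `cmf report queries`; exceeding MaxCost is a build-time warning.
    [QueryBudget(MaxCost = 50)]
    public partial Task<IReadOnlyList<Order>> ListByCustomerAsync(
        CustomerId customerId, CancellationToken ct = default);
}
```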

[Cacheable] Query Handlers

Caching is opt-in at the query handler level. The CMF emits a [Cacheable] attribute that wraps the handler in a generated cache decorator:

[QueryHandler]
[Cacheable(Region = "products", AbsoluteExpirationSeconds = 300, Tags = new[] { "product" })]
public partial class GetProductBySlugQueryHandler
    : IQueryHandler<GetProductBySlugQuery, Result<ProductDto>>
{
    public partial Task<Result<ProductDto>> HandleAsync(GetProductBySlugQuery q, CancellationToken ct);
}

The generator emits two files:

// GetProductBySlugQueryHandler.Cached.g.cs
public sealed class GetProductBySlugQueryHandler_Cached
    : IQueryHandler<GetProductBySlugQuery, Result<ProductDto>>
{
    private readonly GetProductBySlugQueryHandler _inner;
    private readonly IDistributedCache _cache;
    private readonly ICacheKeyBuilder _keys;

    public GetProductBySlugQueryHandler_Cached(
        GetProductBySlugQueryHandler inner, IDistributedCache cache, ICacheKeyBuilder keys)
    {
        _inner = inner;
        _cache = cache;
        _keys  = keys;
    }

    public async Task<Result<ProductDto>> HandleAsync(GetProductBySlugQuery q, CancellationToken ct)
    {
        var key = _keys.Build("products", q);
        var cached = await _cache.GetAsync(key, ct);
        if (cached is not null) return Cbor.Deserialize<Result<ProductDto>>(cached);

        var fresh = await _inner.HandleAsync(q, ct);
        if (fresh.IsSuccess)
            await _cache.SetAsync(key, Cbor.Serialize(fresh),
                new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(300) },
                ct);
        return fresh;
    }
}

// CacheRegistration.g.cs
public static partial class CacheRegistration
{
    public static void RegisterCacheableHandlers(IServiceCollection s)
    {
        s.Decorate<IQueryHandler<GetProductBySlugQuery, Result<ProductDto>>,
                  GetProductBySlugQueryHandler_Cached>();
        // ...one Decorate per [Cacheable] handler
    }
}

The decorator pattern means the underlying handler is unchanged — caching is purely additive. Removing the [Cacheable] attribute removes the decorator on the next build with no other code changes.

Event-Driven Invalidation

Cache invalidation is the harder problem. The CMF solves it by tying invalidation to domain events, which the DDD generator already emits. Each [Cacheable] declares the tags it invalidates on, and the generator wires up an IDomainEventHandler that purges those tags when relevant events fire:

// GeneratedCacheInvalidators.g.cs
public sealed class ProductChangedCacheInvalidator
    : IDomainEventHandler<ProductPublishedEvent>,
      IDomainEventHandler<ProductUpdatedEvent>,
      IDomainEventHandler<ProductDeletedEvent>
{
    private readonly IDistributedCache _cache;
    private readonly ICacheTagIndex _tags;

    public ProductChangedCacheInvalidator(IDistributedCache cache, ICacheTagIndex tags)
    {
        _cache = cache;
        _tags = tags;
    }

    public async Task HandleAsync(ProductPublishedEvent e, CancellationToken ct)
        => await InvalidateAsync(e.ProductId, ct);
    public async Task HandleAsync(ProductUpdatedEvent e, CancellationToken ct)
        => await InvalidateAsync(e.ProductId, ct);
    public async Task HandleAsync(ProductDeletedEvent e, CancellationToken ct)
        => await InvalidateAsync(e.ProductId, ct);

    private async Task InvalidateAsync(ProductId id, CancellationToken ct)
    {
        // Tag-level purge: coarser than a per-id purge (id is unused here),
        // but it safely over-invalidates rather than under-invalidates.
        var keys = await _tags.GetKeysAsync("product", ct);
        foreach (var key in keys) await _cache.RemoveAsync(key, ct);
    }
}

The generator decides which events trigger invalidation by walking the event hierarchy: any event whose name starts with the aggregate's name (ProductPublishedEvent, ProductUpdatedEvent, ...) and any event explicitly tagged [InvalidatesTag("product")]. This is conservative — it occasionally over-invalidates — but it is correct, which matters more for cache invalidation than minimal churn.
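An event whose name does not start with the aggregate's name opts in explicitly via the attribute. A sketch (CategoryRenamedEvent is invented for illustration; renaming a category changes the denormalized data cached under the product tag):

```csharp
// The name does not start with "Product", so the naming convention misses it;
// [InvalidatesTag] forces product-tag invalidation anyway.
[InvalidatesTag("product")]
public sealed record CategoryRenamedEvent(CategoryId CategoryId, string NewName);
```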

Materialized Read Models

Some queries are too expensive to cache because their inputs change too often, or their cache key would have unbounded cardinality. The CMF supports CQRS-style materialized read models as separate aggregates that are updated by domain-event projections.

A read model is declared like any other aggregate, but with [ProjectionFor]:

[AggregateRoot("ProductCatalogView", BoundedContext = "ReadModel")]
[ProjectionFor(typeof(Product), typeof(Category), typeof(Inventory))]
public partial class ProductCatalogView
{
    [EntityId] public partial ProductCatalogViewId Id { get; }
    [Property("Slug",        Required = true)] public partial string Slug { get; }
    [Property("Title",       Required = true)] public partial string Title { get; }
    [Property("CategoryName")]                  public partial string CategoryName { get; }
    [Property("PriceCents",  Required = true)] public partial long PriceCents { get; }
    [Property("InStock",     Required = true)] public partial bool InStock { get; }
    [Property("StockLevel")]                    public partial int StockLevel { get; }

    [ProjectionHandler(typeof(ProductPublishedEvent))]
    private partial Task ApplyAsync(ProductPublishedEvent e);

    [ProjectionHandler(typeof(InventoryAdjustedEvent))]
    private partial Task ApplyAsync(InventoryAdjustedEvent e);
}

The generator emits:

  1. A separate read_model.product_catalog_view table with the projected columns and an index on slug. This table is never written to by command handlers — only by the projection.
  2. A ProductCatalogViewProjection.g.cs host that subscribes to ProductPublishedEvent and InventoryAdjustedEvent from the event bus, looks up the matching row, and applies the developer's hand-written ApplyAsync body.
  3. A ProductCatalogViewRepository.g.cs that exposes only Get and Query methods — no Save, because the read model is written only by its projections.
  4. A ProductCatalogViewRebuildJob.g.cs background job for cold rebuilds. The first time the projection is deployed, this job replays every relevant event from the event store to populate the read model.

The hand-written part is small — the body of ApplyAsync for each event:

private async partial Task ApplyAsync(ProductPublishedEvent e)
{
    var src = await _ctx.Products.AsNoTracking()
        .Include(p => p.Category)
        .FirstAsync(p => p.Id == e.ProductId);

    Slug         = src.Slug;
    Title        = src.Name;
    CategoryName = src.Category.Name;
    PriceCents   = src.Price.Cents;
    // InStock left to InventoryAdjustedEvent
}

Read models give the CMF its scalability headroom. The public storefront's /api/products?q=... endpoint queries read_model.product_catalog_view, not catalog.products, which means it never joins across aggregates and never blocks behind a write transaction. The trade-off is eventual consistency: there is a sub-second delay between a ProductPublishedEvent firing and the catalog view reflecting it. For storefront use cases this is fine; for cases that require strict consistency the application reads from the canonical aggregate instead.

Search Indexing Follows the Same Pattern

A [HasPart("Searchable")] declaration on an aggregate hooks into the same event-projection mechanism. The Stage 4 generator emits an IProductSearchIndexer that subscribes to relevant events and updates a Lucene or Elasticsearch index:

public sealed class ProductSearchIndexer
    : IDomainEventHandler<ProductPublishedEvent>,
      IDomainEventHandler<ProductUpdatedEvent>,
      IDomainEventHandler<ProductDeletedEvent>
{
    private readonly ISearchIndex _index;
    private readonly IProductRepository _repo;   // generated repository for Product

    public ProductSearchIndexer(ISearchIndex index, IProductRepository repo)
    {
        _index = index;
        _repo = repo;
    }

    public async Task HandleAsync(ProductPublishedEvent e, CancellationToken ct)
    {
        var src = await _repo.GetAsync(e.ProductId, ct);
        await _index.UpsertAsync(new SearchDocument {
            Id = src.Id.ToString(),
            Title = src.Name,                                  // from [Property]
            Body = src.Description.AsPlainText(),              // [SearchField(Boost = 1.0)]
            Tags = src.Tags.Select(t => t.Name).ToArray(),     // [SearchField(Boost = 0.5)]
            Score = src.Name.StartsWith("Featured") ? 2.0 : 1.0
        });
    }

    // HandleAsync for ProductUpdatedEvent (re-upsert) and ProductDeletedEvent
    // (index removal) follow the same shape.
}

The boost weights come from [SearchField(Boost = 2.0)] on the property declarations. The result is that a developer who renames Description to Details doesn't have to remember to update the search indexer — the next build regenerates it from the current [SearchField] declarations.
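On the aggregate side, the declarations that drive the indexer might look like this (a sketch in the property style used throughout this chapter; RichText is assumed from the AsPlainText() call above):

```csharp
public partial class Product
{
    [Property("Description")]
    [SearchField(Boost = 1.0)]   // indexed as Body in the search document
    public partial RichText Description { get; }

    [Property("Tags")]
    [SearchField(Boost = 0.5)]   // lower-weighted tag matches
    public partial IReadOnlyList<Tag> Tags { get; }
}
```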

The full search flow has three more pieces (/api/search endpoint, Blazor <SearchBox> component, query suggestion API), all generated from the [HasPart("Searchable")] declaration. They are documented in the search section of Part 6.

Hot Paths: ValueTask, Pooled Buffers, AOT-Friendliness

The generators produce code that respects the standard .NET 10 performance idioms by default:

  1. Async hot paths: handlers whose synchronous path is common (e.g. cache hits, validation failures) return ValueTask<T> instead of Task<T>.
  2. String allocations: built-up keys use string.Create; no string.Format in hot loops.
  3. JSON serialization: a source-generated JsonSerializerContext per bounded context, so the WASM bundle carries zero reflection-based JSON.
  4. Trim safety: all generated code is free of [RequiresUnreferencedCode]; the <IsTrimmable>true</IsTrimmable> setting on MyStore.Shared is honored.
  5. AOT compatibility: generated controllers use AOT-friendly RouteHandlerBuilder extension methods; no MethodInfo.Invoke.
  6. Logging: source-generated ILogger extensions (LoggerMessage-style) for every command and query handler, so logging allocates nothing on the hot path.

These choices are not optional; the CMF analyzers enforce them. CMF610 flags any generated method that uses string.Format on a code path that could plausibly be hot. CMF611 flags any DI registration that prevents AOT compilation. The defaults are tuned so that turning off the analyzers is the only way to ship un-optimized code.
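The generated logging follows the standard .NET LoggerMessage source-generator pattern, roughly like this (the attribute is the real Microsoft.Extensions.Logging API; the class and message are illustrative):

```csharp
using Microsoft.Extensions.Logging;

public static partial class CommandHandlerLog
{
    // Body is source-generated at compile time: no boxing, no string.Format at runtime.
    [LoggerMessage(EventId = 1001, Level = LogLevel.Information,
        Message = "Handled {CommandName} in {ElapsedMs} ms")]
    public static partial void CommandHandled(
        this ILogger logger, string commandName, double elapsedMs);
}
```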

Performance Budgets in CI

The same cmf report queries table that surfaces query plan costs is consumed by a CI gate:

- name: Enforce query budgets
  run: cmf report queries --fail-on-warning

A pull request that introduces a query exceeding its budget is rejected until either the query is rewritten, the budget is raised (with reviewer approval), or the query is moved to a [Cacheable] or [ProjectionFor] read model. The point is not that any single query is sacred — it is that no query degrades silently. The CMF treats performance regressions the same way it treats type errors: visible at build time, not at 3 a.m.

When Defaults Are Not Enough

Some applications need more than the generators can express. The CMF provides three escape hatches, in increasing order of risk:

  1. Hand-written query handlers. A developer can write a [QueryHandler] partial class and provide the body manually. The generator skips emitting the body but still wires the handler into DI and the cache decorator. This is the most common escape hatch and is used routinely for analytics queries.
  2. Compiled queries. EF Core's EF.CompileAsyncQuery can be applied to any generated query handler by adding [CompileQuery]. The generator then emits the compiled-query call instead of the standard LINQ form, which trades flexibility for sub-millisecond latency.
  3. Raw SQL views. For genuinely complex reporting queries, a developer can declare a [ViewBacked("vw_top_customers")] aggregate that maps to a database view rather than a table. The generator emits a read-only repository with no projection logic; the view itself is maintained by a migration script.
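A [ViewBacked] declaration is the thinnest of the three. A sketch (the property set is invented; only the attribute shape follows the text above):

```csharp
[AggregateRoot("TopCustomersView", BoundedContext = "Reporting")]
[ViewBacked("vw_top_customers")]   // maps to a DB view maintained by a migration script
public partial class TopCustomersView
{
    [EntityId] public partial TopCustomersViewId Id { get; }
    [Property("CustomerName",  Required = true)] public partial string CustomerName { get; }
    [Property("LifetimeCents", Required = true)] public partial long LifetimeCents { get; }
}
```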

All three are escape hatches, not defaults. The expectation is that 95% of an application's data access flows through generated handlers with generated SQL, and the 5% that needs custom tuning is concentrated in known, named files that are easy to review.

What This Buys

The combination of enforced eager loading, build-time query profiling, event-driven cache invalidation, and read-model projections is enough to make the CMF's default runtime performance comparable to a hand-tuned line-of-business .NET application — and superior to a typical CMS like WordPress or Drupal in the cases that matter (read latency, page TTFB, search response time). The price is build-time complexity: every commit triggers query profiling, which adds 30–60 seconds to CI. The return is that no slow query reaches production unannounced.

The CMF's stance on performance is the same as its stance on security and correctness: declare the constraint, let the generator produce code that satisfies it, and let the analyzer fail the build if anything drifts. A CMS is a long-lived asset; the cost of a slow query discovered six months after release is not the engineer-hour to fix it but the customer trust spent in the interim. The generators are designed to make that interim impossible.
