
Part 12: Ops.Dsl as the Substrate — M3, Shared Primitives, the Fixed Point

"You cannot type operations without a vocabulary for operations. HomeLab does not invent that vocabulary. It uses the one Ops.Dsl provides."


Why

HomeLab is, fundamentally, a tool that takes a typed declaration of an environment and produces a running environment. The "typed declaration" is not just HomeLabConfig — that is the YAML user surface. Behind it, the meaning of every field is expressed in a shared vocabulary that other parts of the FrenchExDev ecosystem also use: a ContainerSpec is a ContainerSpec whether it appears in a HomeLab compose contributor or in a Kubernetes manifest emitted by a future cloud-side tool. A BackupPolicy is a BackupPolicy whether it appears in HomeLab's restic provider or in an Ops.DataGovernance audit report.

Inventing this vocabulary inside HomeLab would be a mistake. It would couple HomeLab's lifetime to its own definitions of "deployment" and "alert rule" and "backup retention", which are domain concepts that exist far beyond HomeLab. Worse, it would prevent HomeLab from being composed with other Ops-aware tools that might emerge later — a future K8s.Dsl, a cloud-side Ops.Cloud, an audit-only Ops.Compliance analyzer.

The right move is to use the vocabulary that already exists: Ops.Dsl. Ops.Dsl is described in detail in the Ops DSL Ecosystem series; it is itself a spec-only design at the time of writing, just like HomeLab. The two specs are co-designed. They are written to compile against each other. HomeLab is the first consumer of Ops.Dsl. Ops.Dsl is the substrate that lets HomeLab be more than a YAML emitter.

This part explains how the substrate works: what Ops.Dsl provides, what HomeLab consumes, how the M3 fixed point makes it all hang together, and why HomeLab plugins can extend the same vocabulary without forking either library.


The four-layer model

Ops.Dsl is built on FrenchExDev.Net.Dsl, the M3 meta-metamodel framework. The model has four layers, lifted directly from OMG MOF:

M3   FrenchExDev.Net.Dsl     5 primitives: MetaConcept, MetaProperty, MetaReference,
                                           MetaConstraint, MetaInherits
       (the meta-metamodel)
                ▲
                │ defines
                │
M2   Ops.Dsl              [DeploymentOrchestrator], [HealthCheck], [BackupPolicy],
                          [CircuitBreaker], [SecretRotation], ... (~80 attributes)
       (the DSL)
                ▲
                │ uses
                │
M1   HomeLab user code    [DeploymentOrchestrator("devlab")] class DevLabDeployment
                          [HealthCheck("/api/v4/version")] class GitLabHealth
       (the model)
                ▲
                │ produces
                │
M0   Runtime             devLabDeployment.IsHealthy = true
                         gitLabHealth.LastChecked = 2026-04-08T12:00:00Z
       (the instances)

M3 is the fixed point. The 5 primitives in FrenchExDev.Net.Dsl are sufficient to describe any DSL, including themselves. MetaConceptAttribute is itself decorated with [MetaConcept(typeof(MetaConceptConcept))]. There is no M4. The chain stops.
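
The self-application can be sketched in a few lines. This is an illustration of the shape the spec describes, not its actual source; the attribute's members are assumptions.

```csharp
using System;

// Sketch of the M3 fixed point. MetaConceptAttribute and MetaConceptConcept
// are names from this part; their exact members are assumptions.

// The concept that describes the [MetaConcept] marker itself. Nothing
// describes MetaConceptConcept in turn: the chain stops here, so no M4.
public sealed class MetaConceptConcept { }

// The M3 primitive that marks a class as a DSL concept. It is decorated
// with itself, closing the loop.
[AttributeUsage(AttributeTargets.Class)]
[MetaConcept(typeof(MetaConceptConcept))]
public sealed class MetaConceptAttribute : Attribute
{
    public Type ConceptType { get; }
    public MetaConceptAttribute(Type conceptType) => ConceptType = conceptType;
}
```

Self-application of an attribute is ordinary C#; the BCL does the same thing when [AttributeUsage] is applied to AttributeUsageAttribute.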

M2 is Ops.Dsl. Every Ops.Dsl attribute ([Deployment], [HealthCheck], [BackupPolicy], etc.) is a class with [MetaConcept] on it. The M2 author writes attributes; the M3 framework knows how to discover, validate, and reason about them.

M1 is HomeLab user code. HomeLab itself contributes M1 declarations: when a user runs homelab vos up, HomeLab generates an in-memory M1 graph that says "this lab has these aggregates, these services, these health checks, these backup policies".

M0 is the runtime state of the actual lab: which VMs are up, which services are healthy, which certs expire when, which backups passed last night.

This is exactly the same chain Diem, Ddd, Workflow, Content, and Requirements already use. HomeLab is the first operational M2 user, and it uses the same M3 framework everything else does.


The 8 shared primitives Ops.Dsl exports

Per the Ops DSL Ecosystem — Part 4, Ops.Dsl exports a small kernel of primitives that every sub-DSL builds on. These eight types are the vocabulary HomeLab consumes. Memorise them; they show up in every subsequent part of the series.

OpsTarget

A reference to a thing the system operates on: a service, a host, a container, a database, a queue. Every other primitive is about an OpsTarget.

public sealed record OpsTarget(string Kind, string Identifier, IReadOnlyDictionary<string, string> Tags);

In HomeLab: every Service aggregate produces an OpsTarget with Kind = "service", Identifier = "gitlab", Tags = { topology=multi, vm=platform }.
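
Concretely, that projection constructs something like the following (the record is repeated so the sketch compiles on its own; the values come from this paragraph):

```csharp
using System.Collections.Generic;

// Record shape from above, repeated so this sketch is self-contained.
public sealed record OpsTarget(string Kind, string Identifier,
    IReadOnlyDictionary<string, string> Tags);

public static class Example
{
    // The gitlab service in a multi-VM topology, projected to an OpsTarget.
    public static OpsTarget GitLab() => new(
        Kind: "service",
        Identifier: "gitlab",
        Tags: new Dictionary<string, string>
        {
            ["topology"] = "multi",
            ["vm"]       = "platform",
        });
}
```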

OpsProbe

A way to check if a target is healthy. Probes have a kind (http, tcp, script, dotnet), a definition, and a cadence.

public sealed record OpsProbe(string Kind, string Definition, TimeSpan Interval, TimeSpan Timeout);

In HomeLab: every [HealthCheck] declaration becomes an OpsProbe. The Verify stage of the pipeline runs them.

OpsThreshold

A numeric or boolean condition that triggers something. Thresholds have a metric, a comparator, a value, and a window.

public sealed record OpsThreshold(string Metric, string Comparator, double Value, TimeSpan Window);

In HomeLab: cert-expiry warnings, backup-age warnings, observability alert rules.
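
For instance, a cert-expiry warning might project to a threshold like this (the metric name, values, and window are hypothetical, not spec-defined):

```csharp
using System;

// Record shape from above, repeated so this sketch is self-contained.
public sealed record OpsThreshold(string Metric, string Comparator, double Value, TimeSpan Window);

public static class Example
{
    // Fire when a cert is under 14 days from expiry, evaluated over a
    // 24-hour window. All concrete values here are illustrative.
    public static OpsThreshold CertExpiryWarning() => new(
        Metric: "cert.days_until_expiry",
        Comparator: "<",
        Value: 14,
        Window: TimeSpan.FromHours(24));
}
```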

OpsPolicy

A rule that governs how something happens. Polly retry policies, circuit breakers, rate limits, deployment strategies. Polymorphic over kind.

public abstract record OpsPolicy(string Kind);
public sealed record RetryPolicy(int MaxAttempts, TimeSpan Backoff) : OpsPolicy("retry");
public sealed record CircuitBreakerPolicy(int FailureThreshold, TimeSpan Cooldown) : OpsPolicy("circuit-breaker");

In HomeLab: applied to every binary-wrapper call (packer build, vagrant up, docker compose up) via [InjectableDecorator].
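
The spec routes this through [InjectableDecorator]; as a minimal sketch of what "a RetryPolicy applied to a binary-wrapper call" means operationally (the RunWithRetryAsync helper and its Func signature are assumptions, not spec API):

```csharp
using System;
using System.Threading.Tasks;

// Policy records from above, repeated so this sketch is self-contained.
public abstract record OpsPolicy(string Kind);
public sealed record RetryPolicy(int MaxAttempts, TimeSpan Backoff) : OpsPolicy("retry");

public static class RetryRunner
{
    // Re-runs a wrapper call (e.g. "vagrant up") until it exits 0 or the
    // policy's attempt budget is spent, sleeping Backoff between tries.
    public static async Task<int> RunWithRetryAsync(RetryPolicy policy, Func<Task<int>> runBinary)
    {
        for (var attempt = 1; ; attempt++)
        {
            var exitCode = await runBinary();
            if (exitCode == 0 || attempt >= policy.MaxAttempts)
                return exitCode;
            await Task.Delay(policy.Backoff);
        }
    }
}
```

The real decorator would wrap IProcessRunner-style calls transparently; the loop above is only the interpretation of the policy's two fields.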

OpsEnvironment

A named environment that targets live in. dev, prod, ha-stage. HomeLab's three topologies map directly here.

public sealed record OpsEnvironment(string Name, OpsExecutionTier Tier, IReadOnlyDictionary<string, string> Properties);

OpsSchedule

A cron-like declaration of when something happens. Backups, cert rotations, periodic restore tests.

public sealed record OpsSchedule(string Cron, string Timezone);

OpsExecutionTier

The 3-tier model from the Ops DSL Ecosystem series: InProcess, Container, Cloud. HomeLab is exclusively a Container-tier consumer (everything happens via Vagrant + Docker), but the vocabulary is the same one a future cloud-side tool would use.

public enum OpsExecutionTier { InProcess, Container, Cloud }

OpsRequirementLink

A typed link from an operational concept back to a Requirements DSL Feature. HomeLab features are traced via [ForRequirement]; the link is OpsRequirementLink.

public sealed record OpsRequirementLink(Type FeatureType, string AcceptanceCriterionMethod);
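
Taken together, a few of these primitives already describe a complete operational statement: an environment, a nightly schedule, and a trace back to a requirement. A sketch; GitLabBackupFeature and the concrete values are hypothetical:

```csharp
using System;
using System.Collections.Generic;

// Record shapes from above, repeated so this sketch is self-contained.
public enum OpsExecutionTier { InProcess, Container, Cloud }
public sealed record OpsEnvironment(string Name, OpsExecutionTier Tier,
    IReadOnlyDictionary<string, string> Properties);
public sealed record OpsSchedule(string Cron, string Timezone);
public sealed record OpsRequirementLink(Type FeatureType, string AcceptanceCriterionMethod);

// Hypothetical Requirements DSL feature, for the link below.
public sealed class GitLabBackupFeature { public void BackupIsRestorable() { } }

public static class Example
{
    public static (OpsEnvironment Env, OpsSchedule Schedule, OpsRequirementLink Link) NightlyGitLabBackup() =>
        (new OpsEnvironment("dev", OpsExecutionTier.Container, new Dictionary<string, string>()),
         new OpsSchedule("0 3 * * *", "UTC"),                 // every night at 03:00 UTC
         new OpsRequirementLink(typeof(GitLabBackupFeature),
             nameof(GitLabBackupFeature.BackupIsRestorable)));
}
```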

How HomeLab projects to Ops.Dsl

Stage 2 of the pipeline (Plan) is the projection step. It takes the typed HomeLabConfig and produces an intermediate representation (IR) expressed entirely in Ops.Dsl primitives:

[Injectable(ServiceLifetime.Singleton)]
public sealed class HomeLabPlanProjector : IPlanProjector
{
    private readonly IEnumerable<IComposeFileContributor> _composeContributors;
    // ...

    public OpsDslIR Project(HomeLabConfig config)
    {
        var targets = new List<OpsTarget>();
        var probes = new List<(OpsTarget, OpsProbe)>();
        var policies = new List<(OpsTarget, OpsPolicy)>();
        var schedules = new List<(OpsTarget, OpsSchedule)>();

        // 1. Each compose service becomes an OpsTarget
        var compose = new ComposeFile();
        _composeContributors.Apply(compose); // assumed extension: runs each contributor against the file in order
        foreach (var svc in compose.Services)
        {
            var target = new OpsTarget(
                Kind: "service",
                Identifier: svc.Name,
                Tags: new Dictionary<string, string>
                {
                    ["topology"] = config.Topology,
                    ["engine"]   = config.Engine,
                });
            targets.Add(target);

            // 2. The service's healthcheck becomes an OpsProbe
            if (svc.HealthCheck is { } hc)
            {
                probes.Add((target, new OpsProbe(
                    Kind: "http",
                    Definition: hc.Test.First(),
                    Interval: hc.Interval ?? TimeSpan.FromSeconds(30),
                    Timeout: hc.Timeout ?? TimeSpan.FromSeconds(5))));
            }

            // 3. Restart policies become OpsPolicy
            if (svc.Restart is { } restart)
            {
                policies.Add((target, restart switch
                {
                    "always" => new RetryPolicy(MaxAttempts: int.MaxValue, Backoff: TimeSpan.FromSeconds(5)),
                    "on-failure" => new RetryPolicy(MaxAttempts: 5, Backoff: TimeSpan.FromSeconds(10)),
                    _ => throw new InvalidOperationException($"Unknown restart policy: {restart}")
                }));
            }
        }

        // 4. Backup configurations become OpsSchedules
        foreach (var backup in config.Backup.Policies)
        {
            var target = targets.Single(t => t.Identifier == backup.TargetService);
            schedules.Add((target, new OpsSchedule(backup.Cron, backup.Timezone)));
        }

        return new OpsDslIR(targets, probes, policies, schedules);
    }
}

The output of this projector is an OpsDslIR — a fully typed, fully validated Ops.Dsl representation of the entire lab. Every subsequent stage operates on this IR. The Generate stage walks it to emit artifacts. The Apply stage walks it to call binaries. The Verify stage walks the probes to check the live state.

This is the moment HomeLab stops being "a YAML emitter with extra steps" and starts being a typed operational tool. Everything downstream of Plan works on a structure that the compiler understands. Renaming a service is one find/replace in a [Service] attribute, and the IR follows. Adding a probe to a service is one new attribute, and the IR picks it up. Adding a backup policy is one Ops.Dsl [BackupPolicy] attribute, and the IR projects it into a schedule.
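
As an illustration of "the Verify stage walks the probes", a minimal sketch. The IProbeRunner abstraction and the VerifyStage shape are assumptions; only the walk over (target, probe) pairs is from this part:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Record shapes from above, repeated so this sketch is self-contained.
public sealed record OpsTarget(string Kind, string Identifier,
    IReadOnlyDictionary<string, string> Tags);
public sealed record OpsProbe(string Kind, string Definition,
    TimeSpan Interval, TimeSpan Timeout);

// Assumed abstraction: something that can execute one probe against one target.
public interface IProbeRunner
{
    Task<bool> RunAsync(OpsTarget target, OpsProbe probe);
}

public sealed class VerifyStage
{
    private readonly IProbeRunner _runner;
    public VerifyStage(IProbeRunner runner) => _runner = runner;

    // Walks every (target, probe) pair in the IR and reports the
    // identifiers of targets whose probe failed.
    public async Task<IReadOnlyList<string>> RunAsync(
        IEnumerable<(OpsTarget Target, OpsProbe Probe)> probes)
    {
        var failures = new List<string>();
        foreach (var (target, probe) in probes)
            if (!await _runner.RunAsync(target, probe))
                failures.Add(target.Identifier);
        return failures;
    }
}
```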


Why M3 matters: the metamodel registry

The 5 M3 primitives let FrenchExDev.Net.Dsl build a global MetamodelRegistry. Every [MetaConcept] attribute discovered in any loaded assembly is registered in this dictionary. This includes:

  • Built-in Ops.Dsl concepts (DeploymentOrchestratorConcept, HealthCheckConcept, etc.)
  • HomeLab-specific concepts (LabConcept, MachineConcept, CertConcept, DnsEntryConcept)
  • Plugin-specific concepts (CloudflareDnsProviderConcept, BitwardenSecretStoreConcept)

The registry is built once at startup, after all plugins have loaded. After that, any code that wants to reason about the metamodel — analyzers, validators, code generators, documentation tools — can query it:

var registry = sp.GetRequiredService<IMetamodelRegistry>();

// "Show me every concept that has a [BackupPolicy]"
var backupTargets = registry.Concepts
    .Where(c => c.HasReferenceTo(typeof(BackupPolicyConcept)))
    .ToList();

// "Validate every M1 instance against its concept's [MetaConstraint]s"
foreach (var instance in lab.AllAggregates())
{
    var concept = registry.GetConcept(instance.GetType());
    var validation = concept.Validate(instance);
    if (validation.IsFailure) /* ... */
}

This is what makes plugin extensibility meaningful. A plugin that adds a new sub-DSL (e.g. Ops.HomeRouter for managing the team's home router config) registers its concepts in the same registry. HomeLab then validates those concepts the same way it validates built-in ones. The plugin does not need to ship its own validator — the M3 framework does the work.
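
What a plugin's concept contribution might look like, sketched with a stub of the M3 marker so the snippet compiles standalone (the real marker ships in FrenchExDev.Net.Dsl; all the HomeRouter names are hypothetical, echoing the Ops.HomeRouter example above):

```csharp
using System;

// Stub of the M3 marker for this sketch only.
[AttributeUsage(AttributeTargets.Class)]
public sealed class MetaConceptAttribute : Attribute
{
    public Type ConceptType { get; }
    public MetaConceptAttribute(Type conceptType) => ConceptType = conceptType;
}

// A plugin's M2 contribution: one new concept.
public sealed class HomeRouterConcept { }

[MetaConcept(typeof(HomeRouterConcept))]
[AttributeUsage(AttributeTargets.Class)]
public sealed class HomeRouterAttribute : Attribute
{
    public string Hostname { get; }
    public HomeRouterAttribute(string hostname) => Hostname = hostname;
}

// An M1 declaration written by the plugin's user. Because the concept lands
// in the same registry, the M3 framework can validate it like any built-in one.
[HomeRouter("router.lan")]
public sealed class TeamRouter { }
```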


The wiring

Ops.Dsl is loaded the same way the toolbelt is loaded: as a NuGet dependency, registered via [Injectable]. HomeLab's lib has:

<PackageReference Include="FrenchExDev.Net.Dsl" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Infrastructure" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Deployment" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Observability" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Configuration" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Resilience" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Security" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.Networking" Version="1.*" />
<PackageReference Include="FrenchExDev.Net.Ops.Dsl.DataGovernance" Version="1.*" />

Eight Ops.Dsl sub-DSL packages, the Ops.Dsl kernel, and the FrenchExDev.Net.Dsl M3 framework underneath them. Each sub-DSL self-registers via [Injectable]. The metamodel registry picks them up automatically. HomeLab's IPlanProjector (which we saw above) uses them to project the YAML config into a typed IR.

Note that the version constraint is deliberately loose (1.*) rather than pinned. Both libraries are co-designed and will reach v1 together. Until then, the spec-only nature of both means the dependency is conceptual, not literal.


The test

[Fact]
public void plan_projector_emits_ops_dsl_target_for_every_service()
{
    var config = new HomeLabConfig
    {
        Name = "test",
        Topology = "single",
        Engine = "docker",
        // ... compose with gitlab + postgres + traefik
    };
    var contributors = new IComposeFileContributor[]
    {
        new GitLabComposeContributor(),
        new PostgresComposeContributor(),
        new TraefikComposeContributor()
    };
    var projector = new HomeLabPlanProjector(contributors);

    var ir = projector.Project(config);

    ir.Targets.Should().HaveCount(3);
    ir.Targets.Select(t => t.Identifier).Should().Contain(new[] { "gitlab", "postgres", "traefik" });
    ir.Targets.Should().OnlyContain(t => t.Tags["topology"] == "single");
    ir.Targets.Should().OnlyContain(t => t.Tags["engine"] == "docker");
}

[Fact]
public void plan_projector_emits_ops_probe_for_service_with_healthcheck()
{
    var config = new HomeLabConfig { Name = "x", Topology = "single", Engine = "docker" };
    var contributors = new[] { new GitLabComposeContributor() }; // emits a healthcheck
    var projector = new HomeLabPlanProjector(contributors);

    var ir = projector.Project(config);

    ir.Probes.Should().HaveCount(1);
    var probe = ir.Probes.Single();
    probe.Item2.Kind.Should().Be("http");
    probe.Item2.Interval.Should().Be(TimeSpan.FromSeconds(30));
}

[Fact]
public void metamodel_registry_includes_homelab_concepts_after_startup()
{
    var sp = new ServiceCollection().AddHomeLab().BuildServiceProvider();
    var registry = sp.GetRequiredService<IMetamodelRegistry>();

    registry.Concepts.Should().Contain(c => c.Name == "Lab");
    registry.Concepts.Should().Contain(c => c.Name == "Machine");
    registry.Concepts.Should().Contain(c => c.Name == "Cert");
    registry.Concepts.Should().Contain(c => c.Name == "DeploymentOrchestrator"); // from Ops.Dsl
}

What this gives you that bash doesn't

Bash has no metamodel. Every script is its own world. When you write BACKUP_TARGET=gitlab in one script and target=gitlab in another and service=gitlab-omnibus in a third, they are all strings, and there is no way to ask "are these the same concept". There is no registry. There is no type. There is no analyzer. There is the convention you tried to remember and the comment you wrote a year ago.

Ops.Dsl as the substrate gives you, for the same surface area:

  • A shared vocabulary (the 8 primitives) used by HomeLab, by future K8s.Dsl, by every Ops.Dsl sub-DSL, by every plugin
  • A typed projection from YAML config → Ops.Dsl IR, computed once and consumed by every subsequent stage
  • A metamodel registry that lets analyzers, validators, doc generators, and plugins reason about concepts uniformly
  • A fixed point at M3 that prevents the framework from drifting into M4 territory (which is where DSL ecosystems go to die)
  • Plugin extensibility at the concept level, not just at the contributor level

The bargain pays back the moment you write a sub-DSL plugin and it just works with every analyzer that already exists. The metamodel does the work: you describe the concept, and the framework knows what to do with it.

