Why
Part 03 said every verb is one method call into the lib. That method call is _pipeline.RunAsync(request). This part is about what RunAsync actually does.
The naive implementation of a tool like HomeLab would have one fat method per verb. VosUpHandler.HandleAsync would parse the config, validate it, load the plugins, generate the Vagrantfile, write it to disk, call vagrant up, parse the output, return a Result. That works. It also produces six different shapes of fat handler — one per verb — that all share the same five problems:
- Stages are duplicated. Every handler has to validate the config. Every handler has to load plugins. Every handler has to write generated artifacts somewhere. Copy-paste accumulates. The third time you write "deserialise the YAML, run cross-field validation, then run the Ops.Dsl constraints", you have a bug somewhere.
- Stages are entangled. Validation and generation and apply all live in the same method. You cannot test "did the YAML produce the right Vagrantfile" without also calling `vagrant up`. You cannot test "would this plan actually work" without committing to it.
- Stages cannot be replaced. Want to dry-run? Want to generate without applying? Want to skip validation in a hot-fix? You have to add flags to the fat handler and if-else around them. Each flag is a new test surface.
- Stages cannot publish events without coupling. If logging, audit, and progress reporting all live inside the handler, every handler has to remember to call them. Forgetting is silent.
- Stages cannot be plugged into. Plugins want to inject behaviour at specific points — "after validation but before generation, run my extra check". With a fat handler, plugins cannot do this without monkey-patching.
The thesis of this part is: the pipeline is the spine. Every verb runs the same six stages in the same order. Each stage is one interface, one method, one Result<T>. Stages are individually replaceable, individually testable, individually subscribable. The pipeline itself is ~80 lines of code; the value comes from the discipline it enforces.
The shape
The six stages are:
| # | Stage | Responsibility | Reads | Writes |
|---|-------|----------------|-------|--------|
| 0 | Validate | Schema + cross-field + Ops.Dsl `[MetaConstraint]` | YAML | nothing |
| 1 | Resolve | Load plugins, merge local overrides, hydrate the typed HomeLabConfig | YAML, plugins | config |
| 2 | Plan | Project the config to an Ops.Dsl IR, build the action DAG | config | plan |
| 3 | Generate | Emit Packer HCL, Vagrantfile, compose YAML, Traefik YAML, certs | plan | files in ./out |
| 4 | Apply | Call the binary wrappers in DAG order | files | side effects |
| 5 | Verify | Run Ops.Observability probes against the live state | side effects | report |
The pipeline runs them in order. Every stage receives the output of the previous stage and adds its own. If any stage returns a failure, the pipeline short-circuits and returns that failure. The pipeline does not catch exceptions — exceptions are bugs and bubble up uncaught. Errors that are not bugs (the YAML is invalid, the binary returned non-zero, the probe failed) flow as Result<T> failures and are first-class.
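Result<T> itself is defined elsewhere in the lib; the pipeline code in this part only touches a small surface of it. The following is a minimal sketch of that surface, not the real implementation — the member names (IsFailure, Errors, Value, Map, the Success/Failure factories) mirror the call sites below, and the semantics of Map on a success are an assumption:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the Result<T> surface this part's code relies on.
// The real type lives in the lib and is richer than this.
public sealed class Result<T>
{
    public bool IsSuccess { get; }
    public bool IsFailure => !IsSuccess;
    public T Value { get; }                      // only meaningful when IsSuccess
    public IReadOnlyList<string> Errors { get; } // empty when IsSuccess

    private Result(bool ok, T value, IReadOnlyList<string> errors)
    {
        IsSuccess = ok;
        Value = value;
        Errors = errors;
    }

    public static Result<T> Success(T value) => new(true, value, Array.Empty<string>());
    public static Result<T> Failure(params string[] errors) => new(false, default!, errors);

    // Re-type a failure without touching its errors. Stages use this to
    // propagate an inner failure as Result<HomeLabContext>. Throwing on a
    // success is this sketch's assumption, since the call sites only use it
    // behind an IsFailure check.
    public Result<TOther> Map<TOther>() =>
        IsFailure
            ? Result<TOther>.Failure(Errors.ToArray())
            : throw new InvalidOperationException("Map<TOther>() re-types failures only.");
}

// Non-generic factory so call sites can write Result.Success(ctx).
public static class Result
{
    public static Result<T> Success<T>(T value) => Result<T>.Success(value);
    public static Result<T> Failure<T>(params string[] errors) => Result<T>.Failure(errors);
}
```

Because errors travel as values, the pipeline's `if (result.IsFailure)` check and the `Map<HomeLabContext>()` calls in the stages compose without try/catch.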
```csharp
public interface IHomeLabStage
{
    string Name { get; }
    int Order { get; }
    Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct);
}

public sealed record HomeLabContext(
    HomeLabRequest Request,
    HomeLabConfig? Config = null,
    HomeLabPlan? Plan = null,
    GeneratedArtifacts? Artifacts = null,
    AppliedActions? Applied = null,
    VerificationReport? Verification = null,
    IReadOnlyList<string>? Warnings = null);
```

The context is immutable (a record). Each stage receives a context and returns a new one with one more field populated. By the time the pipeline finishes, the context has a value in every slot. By the time the pipeline finishes successfully, the lab is up.
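That threading discipline is just C# records and `with` expressions. A miniature, self-contained version of the trick (MiniContext and the stage functions are invented for illustration):

```csharp
// Miniature version of the context-threading trick: a record plus `with` gives
// each stage a new context value without mutating the one it received.
public sealed record MiniContext(string Request, string? Config = null, string? Plan = null);

public static class MiniStages
{
    // Each "stage" fills exactly one slot and leaves the rest alone.
    public static MiniContext AfterValidate(MiniContext ctx) => ctx with { Config = "parsed" };
    public static MiniContext AfterPlan(MiniContext ctx) => ctx with { Plan = "dag" };
}
```

The original context is untouched at every step, which is what makes "re-run from stage N with the same inputs" a meaningful operation.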
The pipeline itself:
```csharp
[Injectable(ServiceLifetime.Singleton)]
public sealed class HomeLabPipeline : IHomeLabPipeline
{
    private readonly IReadOnlyList<IHomeLabStage> _stages;
    private readonly IHomeLabEventBus _events;
    private readonly IClock _clock;

    public HomeLabPipeline(
        IEnumerable<IHomeLabStage> stages,
        IHomeLabEventBus events,
        IClock clock)
    {
        _stages = stages.OrderBy(s => s.Order).ToList();
        _events = events;
        _clock = clock;
    }

    public async Task<Result<HomeLabContext>> RunAsync(
        HomeLabRequest request,
        CancellationToken ct = default)
    {
        var ctx = new HomeLabContext(request);
        await _events.PublishAsync(new PipelineStarted(request, _clock.UtcNow), ct);

        foreach (var stage in _stages)
        {
            await _events.PublishAsync(new StageStarted(stage.Name, _clock.UtcNow), ct);
            var stopwatch = Stopwatch.StartNew();

            var result = await stage.RunAsync(ctx, ct);
            stopwatch.Stop();

            if (result.IsFailure)
            {
                await _events.PublishAsync(new StageFailed(stage.Name, result.Errors, stopwatch.Elapsed, _clock.UtcNow), ct);
                await _events.PublishAsync(new PipelineFailed(request, stage.Name, result.Errors, _clock.UtcNow), ct);
                return result;
            }

            ctx = result.Value;
            await _events.PublishAsync(new StageCompleted(stage.Name, stopwatch.Elapsed, _clock.UtcNow), ct);
        }

        await _events.PublishAsync(new PipelineCompleted(request, _clock.UtcNow), ct);
        return Result.Success(ctx);
    }
}
```

Eighty lines, give or take. It does exactly four things: orders the stages, runs them in sequence, publishes events around each one, and short-circuits on failure. The stage list comes from DI — every stage is [Injectable] and self-registers. Adding a new stage is one new class with [Injectable] and an Order value.
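To make that concrete, here is what a hypothetical plugin-contributed stage might look like. SnapshotStage is invented for illustration, not part of the lib, and SnapshotSaveAsync is an assumed wrapper method (mirroring the real `vagrant snapshot save` subcommand); the point is that one [Injectable] class with an Order value is all the wiring a plugin needs:

```csharp
// Hypothetical example, not part of the lib: a plugin-contributed stage that
// snapshots every machine after Verify (Order 5). [Injectable] self-registration
// gets it into the stage list; the pipeline's OrderBy slots it into sequence.
[Injectable(ServiceLifetime.Singleton)]
public sealed class SnapshotStage : IHomeLabStage
{
    public string Name => "snapshot";
    public int Order => 6; // runs after the built-in stages 0..5

    private readonly IVagrantClient _vagrant;

    public SnapshotStage(IVagrantClient vagrant) => _vagrant = vagrant;

    public async Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        // SnapshotSaveAsync and vm.Name are assumptions about the wrapper/plan shapes.
        foreach (var vm in ctx.Plan!.Machines)
        {
            var r = await _vagrant.SnapshotSaveAsync(vm.Name, "post-verify", ct);
            if (r.IsFailure) return r.Map<HomeLabContext>();
        }
        return Result.Success(ctx);
    }
}
```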
The wiring
Each stage is one class. Here is what Stage 0: Validate looks like:
```csharp
[Injectable(ServiceLifetime.Singleton)]
public sealed class ValidateStage : IHomeLabStage
{
    public string Name => "validate";
    public int Order => 0;

    private readonly IHomeLabConfigLoader _loader;
    private readonly IEnumerable<IHomeLabConfigValidator> _validators;
    private readonly IMetaConstraintRunner _opsDslConstraints;

    public ValidateStage(
        IHomeLabConfigLoader loader,
        IEnumerable<IHomeLabConfigValidator> validators,
        IMetaConstraintRunner opsDslConstraints)
    {
        _loader = loader;
        _validators = validators;
        _opsDslConstraints = opsDslConstraints;
    }

    public async Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        var configResult = await _loader.LoadAsync(ctx.Request.ConfigPath, ct);
        if (configResult.IsFailure) return configResult.Map<HomeLabContext>();

        var config = configResult.Value;
        var errors = new List<string>();

        foreach (var v in _validators)
        {
            var r = v.Validate(config);
            if (r.IsFailure) errors.AddRange(r.Errors);
        }

        errors.AddRange(_opsDslConstraints.Validate(config));

        if (errors.Count > 0)
            return Result.Failure<HomeLabContext>(string.Join("\n", errors));

        return Result.Success(ctx with { Config = config });
    }
}
```

Stage 2: Plan is where Ops.Dsl really earns its keep:
```csharp
[Injectable(ServiceLifetime.Singleton)]
public sealed class PlanStage : IHomeLabStage
{
    public string Name => "plan";
    public int Order => 2;

    private readonly IPlanProjector _projector;
    private readonly ITopologyResolver _topology;
    private readonly IDagBuilder _dagBuilder;

    // ...

    public async Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        var config = ctx.Config!;

        // 1. Project HomeLab config → Ops.Dsl IR
        var ir = _projector.Project(config);

        // 2. Resolve topology (single | multi | ha) into a concrete VM list
        var vms = _topology.Resolve(config.Topology, config);

        // 3. Build the action DAG (which actions depend on which)
        var dagResult = _dagBuilder.Build(ir, vms);
        if (dagResult.IsFailure) return dagResult.Map<HomeLabContext>();

        var plan = new HomeLabPlan(IR: ir, Machines: vms, Dag: dagResult.Value);
        return Result.Success(ctx with { Plan = plan });
    }
}
```

The IPlanProjector, ITopologyResolver, and IDagBuilder are themselves [Injectable] services. The plan stage coordinates them; it does not contain their logic. This is the SOLID rule from Part 08: every class is small, every class has one responsibility, every class is composable.
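The Dag.TopologicalOrder that Apply walks later implies the DAG builder computes a dependency ordering and reports cycles as a failure rather than an exception. A self-contained sketch of that core (Kahn's algorithm over plain action-name strings; the real IDagBuilder works on typed actions, and the error message here is invented):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the ordering problem IDagBuilder solves: given "action -> prerequisites"
// edges, produce a topological order, or report a cycle as a plain error string.
// Assumes every prerequisite is itself a key in the map.
public static class TopoSort
{
    public static (IReadOnlyList<string>? Order, string? Error) Sort(
        IReadOnlyDictionary<string, string[]> dependsOn)
    {
        // in-degree = number of unmet prerequisites per action
        var inDegree = dependsOn.ToDictionary(kv => kv.Key, kv => kv.Value.Length);

        // reverse adjacency: who is unblocked when this action completes
        var dependents = dependsOn.Keys.ToDictionary(k => k, _ => new List<string>());
        foreach (var (action, deps) in dependsOn)
            foreach (var dep in deps)
                dependents[dep].Add(action);

        var ready = new Queue<string>(inDegree.Where(kv => kv.Value == 0).Select(kv => kv.Key));
        var order = new List<string>();

        while (ready.Count > 0)
        {
            var next = ready.Dequeue();
            order.Add(next);
            foreach (var dependent in dependents[next])
                if (--inDegree[dependent] == 0)
                    ready.Enqueue(dependent);
        }

        // Anything never scheduled sits on a cycle.
        return order.Count == dependsOn.Count
            ? (order, null)
            : (null, "cycle detected among: " + string.Join(", ", dependsOn.Keys.Except(order)));
    }
}
```

Returning the cycle as an error value instead of throwing is what lets the plan stage surface it through the same Result<T> channel as every other failure.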
Stage 3: Generate walks the plan and asks each contributor to emit its files:
```csharp
[Injectable(ServiceLifetime.Singleton)]
public sealed class GenerateStage : IHomeLabStage
{
    public string Name => "generate";
    public int Order => 3;

    private readonly IEnumerable<IPackerBundleContributor> _packerContributors;
    private readonly IEnumerable<IComposeFileContributor> _composeContributors;
    private readonly IEnumerable<ITraefikContributor> _traefikContributors;
    private readonly IBundleWriter _writer;

    public GenerateStage(
        IEnumerable<IPackerBundleContributor> packerContributors,
        IEnumerable<IComposeFileContributor> composeContributors,
        IEnumerable<ITraefikContributor> traefikContributors,
        IBundleWriter writer)
    {
        _packerContributors = packerContributors;
        _composeContributors = composeContributors;
        _traefikContributors = traefikContributors;
        _writer = writer;
    }

    public async Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        var plan = ctx.Plan!;

        // Each contributor adds its piece to a shared bundle.
        var packer = new PackerBundle();
        _packerContributors.Apply(packer);

        var compose = new ComposeFile();
        _composeContributors.Apply(compose);

        var traefik = new TraefikDynamicConfig();
        _traefikContributors.Apply(traefik);

        // The writer renders the bundles to disk via the typed serializers.
        var packerOut = await _writer.WritePackerAsync(packer, ctx.Request.OutputDir, ct);
        var composeOut = await _writer.WriteComposeAsync(compose, ctx.Request.OutputDir, ct);
        var traefikOut = await _writer.WriteTraefikAsync(traefik, ctx.Request.OutputDir, ct);

        var artifacts = new GeneratedArtifacts(packerOut, composeOut, traefikOut);
        return Result.Success(ctx with { Artifacts = artifacts });
    }
}
```

Notice the contributor pattern: `_packerContributors.Apply(packer)` walks every registered contributor and lets each one mutate the shared bundle. This is the same pattern Vos and Packer.Bundle already use. Part 32 is the deep dive on the compose contributor specifically; the principle is the same for all of them.
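The Apply helper is not shown in this part; presumably it is a one-line fold over the registered contributors. A self-contained sketch under that assumption — the single-method IContributor<T> shape and the ListContributor demo are invented here, whereas the lib's IPackerBundleContributor and friends are separate per-bundle interfaces:

```csharp
using System.Collections.Generic;

// Invented single-method contributor shape; the real contributor interfaces
// are per-bundle, but the fold over them is the same.
public interface IContributor<in TBundle>
{
    void Contribute(TBundle bundle);
}

public static class ContributorExtensions
{
    // Walk every registered contributor and let each one mutate the shared bundle.
    public static void Apply<TBundle>(this IEnumerable<IContributor<TBundle>> contributors, TBundle bundle)
    {
        foreach (var c in contributors)
            c.Contribute(bundle);
    }
}

// Tiny demo contributor for illustration: appends one item to a list bundle.
public sealed class ListContributor : IContributor<List<string>>
{
    private readonly string _item;
    public ListContributor(string item) => _item = item;
    public void Contribute(List<string> bundle) => bundle.Add(_item);
}
```

Each contributor sees the work of the ones before it, which is what lets a plugin append services to a compose file it did not create.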
Stage 4: Apply is where binary wrappers run. Every wrapper is [Injectable], every wrapper returns Result<T>, every wrapper is generated from --help output via [BinaryWrapper("packer")]:
```csharp
[Injectable(ServiceLifetime.Singleton)]
public sealed class ApplyStage : IHomeLabStage
{
    public string Name => "apply";
    public int Order => 4;

    private readonly IPackerClient _packer;
    private readonly IVagrantClient _vagrant;
    private readonly IDockerComposeClient _compose;
    private readonly IHomeLabEventBus _events;

    public ApplyStage(
        IPackerClient packer,
        IVagrantClient vagrant,
        IDockerComposeClient compose,
        IHomeLabEventBus events)
    {
        _packer = packer;
        _vagrant = vagrant;
        _compose = compose;
        _events = events;
    }

    public async Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        var applied = new List<AppliedAction>();

        foreach (var action in ctx.Plan!.Dag.TopologicalOrder)
        {
            var result = action.Kind switch
            {
                ActionKind.PackerBuild => await _packer.BuildAsync(action.WorkingDir, ct),
                ActionKind.VagrantUp   => await _vagrant.UpAsync(action.MachineName, ct),
                ActionKind.ComposeUp   => await _compose.UpAsync(action.ComposeFile, ct),
                _ => Result.Failure<AppliedAction>($"unknown action {action.Kind}")
            };

            if (result.IsFailure) return result.Map<HomeLabContext>();

            applied.Add(result.Value);
            await _events.PublishAsync(new ActionApplied(action, result.Value), ct);
        }

        return Result.Success(ctx with { Applied = new AppliedActions(applied) });
    }
}
```

Stage 5: Verify runs Ops.Observability probes against the live lab. We will see this in detail in Part 13.
The test
Each stage is unit-testable in isolation:
```csharp
[Fact]
public async Task validate_stage_returns_failure_when_topology_is_unknown()
{
    var loader = new FakeConfigLoader(new HomeLabConfig { Name = "x", Topology = "k8s" });
    var validators = new[] { new TopologyValidator() };
    var ops = new NoopMetaConstraintRunner();
    var stage = new ValidateStage(loader, validators, ops);

    var ctx = new HomeLabContext(new TestRequest());
    var result = await stage.RunAsync(ctx, CancellationToken.None);

    result.IsFailure.Should().BeTrue();
    result.Errors.Should().Contain(e => e.Contains("k8s"));
}

[Fact]
public async Task pipeline_short_circuits_at_first_failure()
{
    var failingStage = new FakeStage("fail-me", order: 1, returnFailure: true);
    var laterStage = new FakeStage("never-runs", order: 2, returnFailure: false);
    var bus = new RecordingEventBus();

    var pipeline = new HomeLabPipeline(
        new IHomeLabStage[] { failingStage, laterStage },
        bus,
        new FakeClock(DateTimeOffset.UtcNow));

    var result = await pipeline.RunAsync(new TestRequest(), CancellationToken.None);

    result.IsFailure.Should().BeTrue();
    laterStage.WasInvoked.Should().BeFalse();
    bus.Recorded.Should().Contain(e => e is StageFailed { StageName: "fail-me" });
    bus.Recorded.Should().Contain(e => e is PipelineFailed);
}

[Fact]
public async Task generate_stage_writes_packer_compose_and_traefik_in_one_pass()
{
    var fs = new MockFileSystem();
    var ctx = new HomeLabContext(new TestRequest(), Plan: new HomeLabPlan(/* ... */));

    var stage = new GenerateStage(
        packerContributors: new[] { new FakePackerContributor() },
        composeContributors: new[] { new FakeComposeContributor() },
        traefikContributors: new[] { new FakeTraefikContributor() },
        writer: new BundleWriter(fs));

    var result = await stage.RunAsync(ctx, CancellationToken.None);

    result.IsSuccess.Should().BeTrue();
    fs.AllFiles.Should().Contain(f => f.EndsWith(".pkr.hcl"));
    fs.AllFiles.Should().Contain(f => f.EndsWith("docker-compose.yaml"));
    fs.AllFiles.Should().Contain(f => f.EndsWith("dynamic.yaml"));
}
```

The pipeline test (the second one) is the most important. It proves the spine works: order is respected, failure short-circuits, events are published. Once that test passes, every new stage you add gets the same behaviour for free.
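The fakes those tests lean on are a few lines each. Here is one plausible shape for FakeStage — an assumption about the repo's actual helper, built only from how the tests use it (constructor arguments, the WasInvoked flag, and a canned failure):

```csharp
// Sketch of the FakeStage test double, inferred from its usage above:
// records invocation and returns either a canned failure or the context unchanged.
public sealed class FakeStage : IHomeLabStage
{
    private readonly bool _returnFailure;

    public FakeStage(string name, int order, bool returnFailure)
    {
        Name = name;
        Order = order;
        _returnFailure = returnFailure;
    }

    public string Name { get; }
    public int Order { get; }
    public bool WasInvoked { get; private set; }

    public Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        WasInvoked = true;
        return Task.FromResult(_returnFailure
            ? Result.Failure<HomeLabContext>($"{Name} failed")
            : Result.Success(ctx));
    }
}
```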
What this gives you that bash doesn't
A bash script is a top-down sequence of commands. There is no separation between deciding what to do and doing it. There is no separation between generating a file and applying its effects. There is no place to plug in "do this extra thing between step 3 and step 4" without editing the script. There is no event surface — `set -x` is not events.
A six-stage pipeline with Result<T> and [Injectable] stages gives you, for the same surface area:
- Dry-run for free: stop the pipeline after Stage 3 (Generate) and you have every artifact on disk without any side effects. `homelab plan` is one CLI verb.
- Replay for free: run Stage 5 (Verify) against an existing lab without re-applying. `homelab verify` is one CLI verb.
- Plug-in points: a plugin can register its own stage at any `Order` value. The pipeline picks it up automatically.
- Event-driven progress: subscribers see `StageStarted`/`StageCompleted` and render whatever they want. The pipeline does not care.
- Per-stage tests: each stage can be unit-tested in 5 milliseconds with fakes, instead of E2E-tested in 5 minutes with real binaries.
- Failure isolation: when something breaks, the event log tells you exactly which stage failed and how long it took.
The bargain pays back the first time you want to dry-run, the first time you want to replay verification, and the first time a plugin needs to inject behaviour into the middle of the flow.
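Dry-run and replay only need the pipeline to run a subset of the ordered stage list. One way that could be expressed — a sketch, not the lib's actual API: the MinOrder/MaxOrder cut-offs on the request are invented for illustration — is a filter before the loop in RunAsync:

```csharp
// Sketch only: MinOrder/MaxOrder on the request are invented.
// `homelab plan` would set MaxOrder = 3 (stop after Generate, nothing applied);
// `homelab verify` would set MinOrder = 5 (run only Verify against a live lab).
var stagesToRun = _stages
    .Where(s => s.Order >= request.MinOrder && s.Order <= request.MaxOrder)
    .ToList();

foreach (var stage in stagesToRun)
{
    // ... identical loop body: events, stopwatch, short-circuit on failure
}
```

Because every stage already reads its inputs from the context and the generated files, the subset runs need no special-case code inside the stages themselves.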