
Part 08: SOLID and DRY in Practice — Interfaces, Not Folklore

"SOLID is not folklore. It is five constraints, each of which has a concrete shape, each of which has a concrete cost, each of which has a concrete payoff. If you cannot point at the line of code where the constraint is enforced, you are not doing SOLID — you are repeating slogans."


Why

SOLID is the most cited and least applied set of principles in object-oriented programming. Most developers can recite the acronym. Most developers cannot point at a single class in their codebase that demonstrably satisfies all five letters at once. Worse, most discussions of SOLID are abstract: a Shape class, a Rectangle, an Animal with a MakeSound() method. None of those examples come from real software, and none of them survive contact with a real test suite.

This part is the opposite. It walks through the HomeLab lib and points at the exact lines where each SOLID letter is enforced. Then it does the same for DRY. The goal is not to convince you that HomeLab is SOLID — the goal is to give you a concrete vocabulary you can apply to your own code, so that "is this SRP" stops being a vibe and starts being a checkbox.


S — Single Responsibility Principle

"A class should have one, and only one, reason to change."

In HomeLab, the unit of single responsibility is the stage. Each pipeline stage has exactly one job:

  • ValidateStage validates. It does not load files (that's IHomeLabConfigLoader). It does not project to IR (that's PlanStage). It does not call binaries (that's ApplyStage).
  • PlanStage plans. It does not validate (already done). It does not write files (that's GenerateStage).
  • GenerateStage generates. It does not call binaries.
  • ApplyStage applies. It does not generate.

Concretely, the line where SRP is enforced in HomeLab is Order { get; } on IHomeLabStage. By giving every stage a unique order, the pipeline makes it impossible to bundle two responsibilities into one stage without explicitly making them sequential. If you want validation and planning to happen together, you have to write two stages and number them 0 and 1. There is no ValidateAndPlanStage. The architecture rejects it.
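The stage contract itself is not printed in this part, but its assumed shape follows from the text: an Order and a RunAsync. Here is a minimal, self-contained sketch, where HomeLabContext and Result<T> are simplified stand-ins rather than the real HomeLab types:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Simplified stand-ins so the sketch compiles; the real types are richer.
public sealed record HomeLabContext(List<string> Log);
public sealed record Result<T>(bool IsSuccess, T Value);

// Assumed shape of the stage contract: one job per stage, ordered explicitly.
public interface IHomeLabStage
{
    int Order { get; }  // unique per stage: 0 = validate, 1 = plan, 2 = generate, 3 = apply
    Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct);
}

public sealed class ValidateStage : IHomeLabStage
{
    public int Order => 0;
    public Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        ctx.Log.Add("validate");
        return Task.FromResult(new Result<HomeLabContext>(true, ctx));
    }
}

public sealed class PlanStage : IHomeLabStage
{
    public int Order => 1;
    public Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
    {
        ctx.Log.Add("plan");
        return Task.FromResult(new Result<HomeLabContext>(true, ctx));
    }
}

public static class Program
{
    public static async Task Main()
    {
        // The pipeline sorts by Order, so registration order never matters.
        IHomeLabStage[] stages = { new PlanStage(), new ValidateStage() };
        var ctx = new HomeLabContext(new List<string>());
        foreach (var stage in stages.OrderBy(s => s.Order))
            await stage.RunAsync(ctx, CancellationToken.None);
        Console.WriteLine(string.Join(",", ctx.Log)); // validate,plan
    }
}
```

Because the pipeline sorts by Order, the numbers are the only sequencing mechanism: to merge two responsibilities you would have to merge two classes, and the architecture makes you feel that cost.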

What does an SRP violation look like? Here is the bad version:

// BAD: two responsibilities in one class
public sealed class ValidateAndLoadStage : IHomeLabStage
{
    public async Task<Result<HomeLabContext>> RunAsync(...)
    {
        // Responsibility 1: load the file
        var yaml = await File.ReadAllTextAsync(ctx.Request.ConfigPath);
        var config = _serializer.Deserialize<HomeLabConfig>(yaml);

        // Responsibility 2: validate the result
        var errors = _validator.Validate(config);
        // ...
    }
}

That class has two reasons to change: (a) the file format changes (now we accept TOML too), or (b) the validation rules change (we add a new [MetaConstraint]). Either change forces you to touch this file. The fix is to split it: IHomeLabConfigLoader does (a), IHomeLabConfigValidator does (b), and ValidateStage coordinates them. Coordination is its single responsibility.

The architecture test that enforces this:

[Fact]
public void no_stage_class_may_directly_call_file_io()
{
    var stages = typeof(HomeLabPipeline).Assembly.GetTypes()
        .Where(t => typeof(IHomeLabStage).IsAssignableFrom(t) && !t.IsInterface);

    foreach (var stage in stages)
    {
        var calls = MethodCallScanner.Scan(stage, new[] { typeof(File), typeof(Directory) });
        calls.Should().BeEmpty(
            $"{stage.Name} must coordinate, not perform I/O — use IHomeLabConfigLoader / IBundleWriter / etc.");
    }
}

That test runs in milliseconds and prevents the most common SRP slippage: "oh, I'll just read the file here".


O — Open/Closed Principle

"Software entities should be open for extension, but closed for modification."

In HomeLab, OCP is the plugin system. The lib is closed: nobody modifies HomeLab itself to add a new feature. The lib is open: anyone can ship a NuGet that adds a new contributor, a new container engine, a new TLS provider, a new DNS provider, a new Ops.Dsl sub-DSL.

The line where OCP is enforced is IEnumerable<IPackerBundleContributor> (and its sisters) in the constructor of GenerateStage. The stage does not know how many contributors there are, or which ones, or in what order they were registered. The DI container injects them all. Adding a new contributor is one new class with [Injectable]; the stage code does not change.
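A minimal sketch of the open shape, with simplified stand-in types (the real GenerateStage takes several contributor collections and returns a Result):

```csharp
using System;
using System.Collections.Generic;

// Stand-ins for the real HomeLab types; only the shape matters here.
public sealed class PackerBundle { public List<string> Blocks { get; } = new(); }

public interface IPackerBundleContributor { void Contribute(PackerBundle bundle); }

public sealed class AlpineBaseContributor : IPackerBundleContributor
{
    public void Contribute(PackerBundle bundle) => bundle.Blocks.Add("alpine-base");
}

public sealed class DockerHostContributor : IPackerBundleContributor
{
    public void Contribute(PackerBundle bundle) => bundle.Blocks.Add("docker-host");
}

// Open for extension: the stage iterates whatever the container injected.
// Adding a new contributor never touches this class.
public sealed class GenerateStage
{
    private readonly IEnumerable<IPackerBundleContributor> _contributors;

    public GenerateStage(IEnumerable<IPackerBundleContributor> contributors)
        => _contributors = contributors;

    public PackerBundle Run()
    {
        var bundle = new PackerBundle();
        foreach (var c in _contributors)
            c.Contribute(bundle);
        return bundle;
    }
}

public static class Program
{
    public static void Main()
    {
        // In HomeLab the DI container builds this list; here we do it by hand.
        var stage = new GenerateStage(new IPackerBundleContributor[]
        {
            new AlpineBaseContributor(),
            new DockerHostContributor(),
        });
        Console.WriteLine(string.Join(",", stage.Run().Blocks)); // alpine-base,docker-host
    }
}
```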

The bad version:

// BAD: closed for extension
public sealed class GenerateStage
{
    public Task RunAsync(...)
    {
        new AlpineBaseContributor().Apply(packer);
        new DockerHostContributor().Apply(packer);
        new PodmanHostContributor().Apply(packer);  // ← every new kind requires editing this method
        return Task.CompletedTask;
    }
}

That class has the contributor list hard-coded. Adding a FreeBSD jail host contributor requires editing this file, recompiling the lib, and shipping a new HomeLab version. That is the opposite of open for extension.

OCP and DI are the same principle viewed from two angles. If your dependencies come from IEnumerable<T> constructor parameters, you are open for extension by construction. If your dependencies come from new calls, you are closed.


L — Liskov Substitution Principle

"Subtypes must be substitutable for their base types."

LSP is the trickiest of the five because it is the one most often misread as "derived classes can be used wherever base classes can", which is true but not the point. The actual point is "derived classes must honor the contract of the base class, including invisible contracts like exception behavior, thread safety, and performance characteristics".

In HomeLab, LSP is enforced by the Result<T> discipline. Every method that can fail returns Result<T>. No method throws an exception for control-flow reasons. The type system forces every implementation to return Result<T>; the one thing it cannot force, not throwing before the return, is exactly where LSP violations hide, and exactly what the discipline forbids.

The line where LSP is enforced is the return type of every interface method:

public interface IHomeLabConfigValidator
{
    Result Validate(HomeLabConfig config);
}

public interface ITlsCertificateProvider
{
    Task<Result<TlsCertificateBundle>> GenerateCaAsync(string caName, CancellationToken ct);
    Task<Result<TlsCertificateBundle>> GenerateCertAsync(TlsCertificateBundle ca, string domain, string[] sans, CancellationToken ct);
}

public interface IPackerClient
{
    Task<Result<PackerBuildOutput>> BuildAsync(string workingDir, CancellationToken ct);
}

Every implementation of ITlsCertificateProvider (the native one, the mkcert one, a future Vault one) returns Result<TlsCertificateBundle>. None of them throw InvalidProviderException to signal "unknown algorithm". None of them return null to signal "file not found". They all honour the same contract: success carries a TlsCertificateBundle, failure carries a list of errors. A consumer that uses ITlsCertificateProvider does not need to know which implementation it is talking to. It calls the method, branches on IsSuccess, and moves on.

This is LSP in practice: you can substitute a fake provider for the real one in tests, a Vault provider for the native one in production, an mkcert provider for the native one on macOS, and the calling code never changes.
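Here is a sketch of that substitution: a hypothetical fake provider behind the same interface, and a consumer that only ever branches on IsSuccess. Result<T> is a simplified stand-in here, and the real type carries a list of errors rather than a single string:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Simplified stand-ins; the real Result<T> carries a list of errors.
public sealed record TlsCertificateBundle(string CaName);
public sealed record Result<T>(bool IsSuccess, T Value, string Error);

public interface ITlsCertificateProvider
{
    Task<Result<TlsCertificateBundle>> GenerateCaAsync(string caName, CancellationToken ct);
}

// A hypothetical fake provider for tests: same contract, no crypto.
public sealed class FakeTlsProvider : ITlsCertificateProvider
{
    public Task<Result<TlsCertificateBundle>> GenerateCaAsync(string caName, CancellationToken ct) =>
        Task.FromResult(caName.Length <= 64
            ? new Result<TlsCertificateBundle>(true, new TlsCertificateBundle(caName), null)
            : new Result<TlsCertificateBundle>(false, null, "CA name too long"));
}

public static class Program
{
    // The consumer never knows which implementation it holds.
    public static async Task<string> Describe(ITlsCertificateProvider provider, string caName)
    {
        var result = await provider.GenerateCaAsync(caName, CancellationToken.None);
        return result.IsSuccess ? $"ok:{result.Value.CaName}" : $"error:{result.Error}";
    }

    public static async Task Main()
    {
        var provider = new FakeTlsProvider();
        Console.WriteLine(await Describe(provider, "homelab-ca"));        // ok:homelab-ca
        Console.WriteLine(await Describe(provider, new string('x', 70))); // error:CA name too long
    }
}
```

Swap FakeTlsProvider for any other implementation of the interface and Describe never changes; that is the substitution LSP promises.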

The bad version:

// BAD: throws exceptions for control flow
public sealed class NativeTlsProvider : ITlsCertificateProvider
{
    public async Task<Result<TlsCertificateBundle>> GenerateCaAsync(string caName, CancellationToken ct)
    {
        if (caName.Length > 64)
            throw new ArgumentException("CA name too long");  // ← LSP violation
        // ...
    }
}

The base contract says "failure is a Result<T> with errors". The derived class signals failure by throwing. Any consumer that handles failure through Result<T> now also has to wrap calls in try/catch for this one implementation. The substitution is broken. The fix is return Result.Failure<TlsCertificateBundle>("CA name too long").


I — Interface Segregation Principle

"Clients should not be forced to depend on interfaces they do not use."

In HomeLab, ISP is one contributor interface per kind. There is no IUniversalContributor that has methods for every kind of artifact. Each contributor implements exactly the interface for the kind of artifact it produces:

public interface IPackerBundleContributor
{
    void Contribute(PackerBundle bundle);
}

public interface IComposeFileContributor
{
    void Contribute(ComposeFile compose);
}

public interface ITraefikContributor
{
    void Contribute(TraefikDynamicConfig traefik);
}

public interface IMachineTypeContributor
{
    void Contribute(VosMachine machine);
}

A contributor that adds a Docker host overlay implements IPackerBundleContributor and IMachineTypeContributor — two small interfaces — instead of IUniversalContributor with 17 methods, 15 of which are no-ops.
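A minimal sketch of such a two-role contributor, with simplified stand-in artifact models (the provisioner and package names are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the artifact models.
public sealed class PackerBundle { public List<string> Provisioners { get; } = new(); }
public sealed class VosMachine { public List<string> Packages { get; } = new(); }

public interface IPackerBundleContributor { void Contribute(PackerBundle bundle); }
public interface IMachineTypeContributor { void Contribute(VosMachine machine); }

// One class, two small roles, and nothing else: no empty methods anywhere.
public sealed class DockerHostContributor : IPackerBundleContributor, IMachineTypeContributor
{
    public void Contribute(PackerBundle bundle) => bundle.Provisioners.Add("install-docker.sh");
    public void Contribute(VosMachine machine) => machine.Packages.Add("docker");
}

public static class Program
{
    public static void Main()
    {
        var contributor = new DockerHostContributor();
        var bundle = new PackerBundle();
        var machine = new VosMachine();
        // Each pipeline phase resolves only the role it needs.
        ((IPackerBundleContributor)contributor).Contribute(bundle);
        ((IMachineTypeContributor)contributor).Contribute(machine);
        Console.WriteLine($"{bundle.Provisioners[0]} {machine.Packages[0]}");
    }
}
```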

The bad version:

// BAD: one fat interface that everyone implements
public interface IContributor
{
    void ContributeToPacker(PackerBundle bundle);
    void ContributeToCompose(ComposeFile compose);
    void ContributeToTraefik(TraefikDynamicConfig traefik);
    void ContributeToMachine(VosMachine machine);
    void ContributeToDns(DnsConfig dns);
    void ContributeToTls(TlsConfig tls);
    // ...
}

That interface forces every contributor to implement six methods, even if they only need one. Five of those methods are empty. Empty methods are noise; noise hides bugs. ISP says: split the interface into role-shaped pieces, and let each contributor implement only the roles it needs.

The architecture test:

[Fact]
public void contributor_interfaces_must_have_at_most_one_method()
{
    var contributorInterfaces = typeof(IPackerBundleContributor).Assembly
        .GetTypes()
        .Where(t => t.IsInterface && t.Name.EndsWith("Contributor"));

    foreach (var iface in contributorInterfaces)
    {
        iface.GetMethods().Should().HaveCount(1,
            $"{iface.Name} should have exactly one method (ISP)");
    }
}

D — Dependency Inversion Principle

"High-level modules should not depend on low-level modules. Both should depend on abstractions."

In HomeLab, DIP is the composition root. The entire lib has one place where concrete types are bound to interfaces, and that place is generated by the [Injectable] source generator. No class news up its dependencies. No class calls a static factory. No class reaches into a service locator. Every dependency comes through a constructor parameter, typed against an interface.

The line where DIP is enforced is — paradoxically — the absence of new calls in stage classes. There is no new VagrantClient() anywhere in ApplyStage. There is IVagrantClient _vagrant injected through the constructor.
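A minimal sketch of that shape, with a hypothetical FakeVagrantClient standing in for the test double (Result and the client interface are simplified):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public sealed record Result(bool IsSuccess, string Error);

public interface IVagrantClient
{
    Task<Result> UpAsync(string workingDir, CancellationToken ct);
}

// The stage depends on the abstraction only; no `new VagrantClient()` anywhere.
public sealed class ApplyStage
{
    private readonly IVagrantClient _vagrant;
    public ApplyStage(IVagrantClient vagrant) => _vagrant = vagrant;

    public Task<Result> RunAsync(string workingDir, CancellationToken ct) =>
        _vagrant.UpAsync(workingDir, ct);
}

// Substituting a fake is trivial because the dependency is inverted.
public sealed class FakeVagrantClient : IVagrantClient
{
    public string LastDir = "";
    public Task<Result> UpAsync(string workingDir, CancellationToken ct)
    {
        LastDir = workingDir;
        return Task.FromResult(new Result(true, null));
    }
}

public static class Program
{
    public static async Task Main()
    {
        var fake = new FakeVagrantClient();
        var stage = new ApplyStage(fake);  // in HomeLab, the DI container does this
        var result = await stage.RunAsync("/tmp/bundle", CancellationToken.None);
        Console.WriteLine($"{result.IsSuccess} {fake.LastDir}"); // True /tmp/bundle
    }
}
```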

The architecture test:

[Fact]
public void stages_must_not_construct_their_dependencies()
{
    var stages = typeof(HomeLabPipeline).Assembly.GetTypes()
        .Where(t => typeof(IHomeLabStage).IsAssignableFrom(t) && !t.IsInterface);

    foreach (var stage in stages)
    {
        var newCalls = ConstructorScanner.Scan(stage, excluding: new[]
        {
            typeof(HomeLabContext), typeof(GeneratedArtifacts), typeof(AppliedActions),
            typeof(HomeLabPlan), typeof(List<>), typeof(string)
        });
        newCalls.Should().BeEmpty(
            $"{stage.Name} should receive its dependencies via constructor injection, not construct them");
    }
}

new-ing data records (HomeLabContext, etc.) is fine. new-ing services is not. The scanner walks the IL and reports any newobj instruction whose target is not in the allow-list. If you slip and write new VagrantClient() inside a stage, the test fails before the next CI run.

DIP is the most powerful of the five, because it is the one that enables all the others. SRP works only when classes don't entangle themselves through new calls. OCP works only when extension points come through DI. LSP works only when consumers depend on interfaces, not concretes. ISP works only when interfaces are injected as roles.


DRY — Don't Repeat Yourself

"Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."

DRY in HomeLab takes one specific form: IBundleWriter. Every artifact HomeLab generates — Packer HCL, Vagrantfile, docker-compose.yaml, Traefik static config, Traefik dynamic config, gitlab.rb, runner config, certificate files — goes through one writer interface:

public interface IBundleWriter
{
    Task<Result<WrittenFile>> WritePackerAsync(PackerBundle bundle, DirectoryInfo outputDir, CancellationToken ct);
    Task<Result<WrittenFile>> WriteComposeAsync(ComposeFile compose, DirectoryInfo outputDir, CancellationToken ct);
    Task<Result<WrittenFile>> WriteTraefikAsync(TraefikDynamicConfig traefik, DirectoryInfo outputDir, CancellationToken ct);
    Task<Result<WrittenFile>> WriteRubyAsync(GitLabRbConfig gitlabRb, DirectoryInfo outputDir, CancellationToken ct);
    Task<Result<WrittenFile>> WriteCertificateAsync(byte[] pem, DirectoryInfo outputDir, string name, CancellationToken ct);
    Task<Result<WrittenFile>> WriteJsonAsync<T>(T obj, DirectoryInfo outputDir, string name, CancellationToken ct);
    Task<Result<WrittenFile>> WriteYamlAsync<T>(T obj, DirectoryInfo outputDir, string name, CancellationToken ct);
}

Every method in IBundleWriter does the same five things: ensures the output directory exists, computes the destination path, serialises the object via a typed serializer, writes it to disk via IFileSystem, returns a Result<WrittenFile> with metadata. The five steps are written once, in BundleWriter, and reused by every method. The serializer for each kind is itself a typed service (IComposeSerializer, ITraefikSerializer, etc.) — also written once.
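A sketch of the shared core, collapsing the typed serializers and IFileSystem into plain strings and System.IO for brevity; only the "five steps written once" structure is the point:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public sealed record WrittenFile(string Path, long Bytes);
public sealed record Result<T>(bool IsSuccess, T Value, string Error);

// The shared steps live in one private helper; every public
// Write*Async method reduces to choosing a file name and delegating.
public sealed class BundleWriter
{
    public Task<Result<WrittenFile>> WriteComposeAsync(string yaml, DirectoryInfo dir, CancellationToken ct) =>
        WriteCoreAsync("docker-compose.yaml", yaml, dir, ct);

    public Task<Result<WrittenFile>> WriteTraefikAsync(string yaml, DirectoryInfo dir, CancellationToken ct) =>
        WriteCoreAsync("traefik.yaml", yaml, dir, ct);

    private static async Task<Result<WrittenFile>> WriteCoreAsync(
        string fileName, string content, DirectoryInfo dir, CancellationToken ct)
    {
        try
        {
            if (!dir.Exists) dir.Create();                    // 1. ensure the output directory
            var path = Path.Combine(dir.FullName, fileName);  // 2. compute the destination path
            await File.WriteAllTextAsync(path, content, ct);  // 3-4. serialised content hits disk
            return new Result<WrittenFile>(                   // 5. return metadata, not void
                true, new WrittenFile(path, content.Length), null);
        }
        catch (IOException ex)
        {
            return new Result<WrittenFile>(false, null, ex.Message);
        }
    }
}

public static class Program
{
    public static async Task Main()
    {
        var writer = new BundleWriter();
        var dir = new DirectoryInfo(Path.Combine(Path.GetTempPath(), "homelab-dry-demo"));
        var result = await writer.WriteComposeAsync("services: {}\n", dir, CancellationToken.None);
        Console.WriteLine(result.IsSuccess && result.Value.Path.EndsWith("docker-compose.yaml"));
    }
}
```

Adding error handling, events, or atomic writes means editing WriteCoreAsync once, and every artifact kind picks up the change.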

The bad version:

// BAD: each generator writes its own files, with its own logic
public sealed class GenerateComposeStage
{
    public async Task RunAsync(...)
    {
        if (!Directory.Exists(outputDir)) Directory.CreateDirectory(outputDir);
        var path = Path.Combine(outputDir, "docker-compose.yaml");
        var yaml = ComposeSerializer.Serialize(compose);
        await File.WriteAllTextAsync(path, yaml);
        // ...
    }
}

public sealed class GenerateTraefikStage
{
    public async Task RunAsync(...)
    {
        if (!Directory.Exists(outputDir)) Directory.CreateDirectory(outputDir);  // ← copy-paste
        var path = Path.Combine(outputDir, "traefik.yaml");                      // ← copy-paste
        var yaml = TraefikSerializer.Serialize(traefik);                          // ← copy-paste
        await File.WriteAllTextAsync(path, yaml);                                 // ← copy-paste
        // ...
    }
}

Five lines copy-pasted across N generators. Every change to the writing logic — adding error handling, adding events, adding atomicity, adding chmod — has to be made N times. The first time you forget one of them, you have a bug.

DRY says: write it once, in BundleWriter. Inject it everywhere. Every change to the writing logic is one change in one file, applied uniformly to every artifact.

The architecture test:

[Fact]
public void no_stage_or_contributor_may_call_file_directly()
{
    var assemblies = new[] { typeof(HomeLabPipeline).Assembly };
    var classes = assemblies.SelectMany(a => a.GetTypes())
        .Where(t => t.IsClass && !t.IsAbstract)
        .Where(t => typeof(IHomeLabStage).IsAssignableFrom(t)
                 || t.Name.EndsWith("Contributor"));

    foreach (var c in classes)
    {
        var calls = MethodCallScanner.Scan(c, new[] { typeof(File), typeof(StreamWriter) });
        calls.Should().BeEmpty(
            $"{c.Name} must use IBundleWriter, not File / StreamWriter directly (DRY)");
    }
}

What this gives you that bash doesn't

A bash script has no SOLID and no DRY. Every script is its own world. Every script copy-pastes from every other script. There is no "interface" to substitute, no "single responsibility" to enforce, no "open for extension" to invoke. The first time you want to do one thing slightly differently, you fork the script.

SOLID + DRY in HomeLab gives you, for the same surface area:

  • Stages you can replace (OCP via plugins)
  • Stages you can test in isolation (SRP via the pipeline)
  • Implementations you can swap (LSP via Result<T>)
  • Contracts you actually need (ISP via small contributor interfaces)
  • A composition root that DI generates (DIP via [Injectable])
  • One writer for every artifact (DRY via IBundleWriter)
  • Architecture tests that enforce all of the above at unit-test speed

The bargain is slow to learn and quick to compound. Once your team internalises "every stage is one responsibility, every dependency is an interface, every artifact goes through IBundleWriter", you stop arguing about where new code belongs. The architecture tells you. The architecture tests catch you when you slip.

