
Part 30: Topology Composition — Single, Multi, HA

"The same lab, three ways. One config field. Zero forks of the codebase."


Why

DevLab is the same set of services no matter how it is deployed: GitLab, runners, Postgres, MinIO, baget, Traefik, Prometheus, Grafana, Loki, PiHole, the docs site. What changes is how those services are distributed across VMs:

  • Single — one VM running everything in one compose file. Cheap, fast to boot, fine for solo development.
  • Multi — four VMs, each with one logical role: gateway (Traefik + PiHole), platform (GitLab + runners + baget), data (Postgres + MinIO + Meilisearch), obs (Prometheus + Grafana + Loki + Alertmanager). Closer to production. Better isolation. Slower to boot.
  • HA — the GitLab Reference Architecture for HA on Omnibus VMs. Roughly a dozen VMs: 2× Rails behind an HAProxy load balancer, 3× Gitaly Cluster + Praefect, 3× Patroni Postgres, 3× Redis Sentinel, 1× Consul, 1× shared MinIO. No Kubernetes. This is the answer to "can HomeLab support HA without K8s?" — yes.

The thesis of this part is: the choice between the three topologies is a single config field, topology: single | multi | ha. The same Ops.Dsl projections produce all three. The compose contributors are reused. The Vagrantfile is reused. The Traefik config is reused. The lib does not branch on topology — it composes from a topology-aware spec.
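Concretely, the switch lives in the lab's config file. A minimal sketch, assuming a YAML config — only `topology` is the field under discussion here; the other keys and values are illustrative placeholders, not the actual DevLab schema:

```yaml
# devlab.yaml — hypothetical shape; only `topology` is the field discussed above
name: devlab
topology: multi          # single | multi | ha
vos:
  box: my-alpine-base    # illustrative box name
  subnet: 192.168.56
  provider: virtualbox
```

Flipping `multi` to `ha` is the entire migration story at the config level; everything downstream re-projects.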


The shape

public enum Topology { Single, Multi, Ha }

public interface ITopologyResolver
{
    IReadOnlyList<VosMachineConfig> Resolve(Topology topology, HomeLabConfig config);
}

[Injectable(ServiceLifetime.Singleton)]
public sealed class StandardTopologyResolver : ITopologyResolver
{
    public IReadOnlyList<VosMachineConfig> Resolve(Topology topology, HomeLabConfig config)
        => topology switch
        {
            Topology.Single => SingleVm(config).ToList(),
            Topology.Multi  => MultiVm(config).ToList(),
            Topology.Ha     => HaVms(config).ToList(),
            _ => throw new ArgumentOutOfRangeException(nameof(topology), topology, "Unknown topology")
        };

    private IEnumerable<VosMachineConfig> SingleVm(HomeLabConfig hl)
    {
        yield return new VosMachineConfig
        {
            Name = $"{hl.Name}-main",
            Box = hl.Vos.Box,
            Cpus = Math.Max(hl.Vos.Cpus, 4),       // single VM needs more
            Memory = Math.Max(hl.Vos.Memory, 8192),
            Provider = hl.Vos.Provider,
            Networks = new[] { Net($"{hl.Vos.Subnet}.10") },
            SyncedFolders = StandardSyncedFolders(),
            Provisioners = StandardProvisioners()
        };
    }

    private IEnumerable<VosMachineConfig> MultiVm(HomeLabConfig hl)
    {
        yield return new VosMachineConfig { Name = $"{hl.Name}-gateway", Box = hl.Vos.Box, Cpus = 2, Memory = 1024, Networks = new[] { Net($"{hl.Vos.Subnet}.10") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-platform", Box = hl.Vos.Box, Cpus = 4, Memory = 8192, Networks = new[] { Net($"{hl.Vos.Subnet}.11") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-data",     Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.12") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-obs",      Box = hl.Vos.Box, Cpus = 2, Memory = 2048, Networks = new[] { Net($"{hl.Vos.Subnet}.13") }, /* ... */ };
    }

    private IEnumerable<VosMachineConfig> HaVms(HomeLabConfig hl)
    {
        // GitLab Reference Architecture on Omnibus VMs — 13 machines as yielded below, NO Kubernetes
        // see https://docs.gitlab.com/ee/administration/reference_architectures/

        // 1× HAProxy load balancer (front of the rails nodes)
        yield return new VosMachineConfig { Name = $"{hl.Name}-lb",       Box = hl.Vos.Box, Cpus = 2, Memory = 1024, Networks = new[] { Net($"{hl.Vos.Subnet}.10") }, /* ... */ };

        // 2× GitLab Rails nodes
        yield return new VosMachineConfig { Name = $"{hl.Name}-rails-1",  Box = hl.Vos.Box, Cpus = 4, Memory = 8192, Networks = new[] { Net($"{hl.Vos.Subnet}.11") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-rails-2",  Box = hl.Vos.Box, Cpus = 4, Memory = 8192, Networks = new[] { Net($"{hl.Vos.Subnet}.12") }, /* ... */ };

        // 3× Gitaly Cluster + Praefect
        yield return new VosMachineConfig { Name = $"{hl.Name}-gitaly-1", Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.20") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-gitaly-2", Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.21") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-gitaly-3", Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.22") }, /* ... */ };

        // 3× Patroni Postgres
        yield return new VosMachineConfig { Name = $"{hl.Name}-pg-1",     Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.30") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-pg-2",     Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.31") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-pg-3",     Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.32") }, /* ... */ };

        // 3× Redis Sentinel
        yield return new VosMachineConfig { Name = $"{hl.Name}-redis-1",  Box = hl.Vos.Box, Cpus = 1, Memory = 2048, Networks = new[] { Net($"{hl.Vos.Subnet}.40") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-redis-2",  Box = hl.Vos.Box, Cpus = 1, Memory = 2048, Networks = new[] { Net($"{hl.Vos.Subnet}.41") }, /* ... */ };
        yield return new VosMachineConfig { Name = $"{hl.Name}-redis-3",  Box = hl.Vos.Box, Cpus = 1, Memory = 2048, Networks = new[] { Net($"{hl.Vos.Subnet}.42") }, /* ... */ };

        // 1× MinIO (object storage shared by everything; in production this would itself be a cluster)
        yield return new VosMachineConfig { Name = $"{hl.Name}-minio",    Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.50") }, /* ... */ };
    }

    private static VosNetworkConfig Net(string ip) => new() { Type = "private_network", Ip = ip };
    // ...
}

Three methods. Three deterministic projections. The selection is one switch statement. Each method returns a list of VosMachineConfigs — the same type the rest of the lib uses. The downstream code (the compose generator, the Traefik generator, the Vagrantfile writer) does not know or care which topology produced the list.
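To make "does not know or care" concrete, here is a hedged sketch of such a consumer. The writer class and the Ruby it emits are illustrative, not the actual DevLab Vagrantfile writer; only VosMachineConfig and its fields come from the resolver above:

```csharp
using System.Collections.Generic;
using System.Text;

// Illustrative sketch of a downstream consumer: it renders a Vagrantfile
// from the resolved machine list and never inspects which topology
// produced that list — one machine or thirteen, the loop is identical.
public sealed class VagrantfileSketch
{
    public string Write(IReadOnlyList<VosMachineConfig> machines)
    {
        var sb = new StringBuilder();
        sb.AppendLine("Vagrant.configure(\"2\") do |config|");
        foreach (var m in machines)
        {
            sb.AppendLine($"  config.vm.define \"{m.Name}\" do |node|");
            sb.AppendLine($"    node.vm.box = \"{m.Box}\"");
            foreach (var net in m.Networks)
                sb.AppendLine($"    node.vm.network \"{net.Type}\", ip: \"{net.Ip}\"");
            sb.AppendLine("  end");
        }
        sb.AppendLine("end");
        return sb.ToString();
    }
}
```

The topology decision is fully spent by the time this code runs; all that is left is a list to iterate.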


The Mermaid diagrams

Each topology has a generated diagram, used in documentation and emitted by homelab plan --diagram:

[Diagram: single] The single topology is the laptop-friendly projection — one VM, every container, same machine-list contract as the richer topologies.

[Diagram: multi] The multi topology slices DevLab into four role-specific VMs so each concern — ingress, platform, data, observability — can be scaled and rebooted in isolation.

[Diagram: ha] The HA topology is the honest dozen-VM answer to GitLab's own reference architecture — no Kubernetes, every role replicated where it matters.

The diagrams are generated, committed, and rendered by the docs site. They are not aspirational drawings — they are the actual machine list the projector emits.
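For a flavor of the output, here is a hand-written approximation of the single-topology diagram — the real one is emitted by the projector, and the node set shown is abbreviated:

```mermaid
graph TD
  subgraph main["devlab-main (.10)"]
    traefik[Traefik] --> gitlab[GitLab]
    traefik --> grafana[Grafana]
    gitlab --> postgres[(Postgres)]
    gitlab --> minio[(MinIO)]
    prometheus[Prometheus] --> gitlab
  end
```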


The future fourth: K8s

K8s.Dsl does not exist as a C# library yet, but the design accommodates it. When it lands, a fourth case will be added:

public IReadOnlyList<VosMachineConfig> Resolve(Topology topology, HomeLabConfig config)
    => topology switch
    {
        Topology.Single => SingleVm(config).ToList(),
        Topology.Multi  => MultiVm(config).ToList(),
        Topology.Ha     => HaVms(config).ToList(),
        Topology.HaK8s  => HaK8sVms(config).ToList(),   // ← future
        _ => throw new ArgumentOutOfRangeException(nameof(topology), topology, "Unknown topology")
    };

HaK8sVms would provision a small k3s cluster (one or three control planes plus N workers) and let K8s.Dsl emit Kubernetes manifests for the GitLab cloud-native chart. The same Ops.Dsl projections feed it; the same observability stack still applies; the same backup framework still works.
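A hedged sketch of what HaK8sVms could yield — entirely hypothetical, since K8s.Dsl has not landed; the worker count and subnet offsets are assumptions, and the elided fields mirror the style of the methods above:

```csharp
private IEnumerable<VosMachineConfig> HaK8sVms(HomeLabConfig hl)
{
    // Hypothetical: one k3s control plane plus a few workers; K8s.Dsl would
    // then emit manifests for the GitLab cloud-native chart onto this cluster.
    yield return new VosMachineConfig { Name = $"{hl.Name}-k3s-server", Box = hl.Vos.Box, Cpus = 2, Memory = 4096, Networks = new[] { Net($"{hl.Vos.Subnet}.60") }, /* ... */ };

    for (var i = 1; i <= 3; i++)   // worker count would plausibly come from config
        yield return new VosMachineConfig { Name = $"{hl.Name}-k3s-agent-{i}", Box = hl.Vos.Box, Cpus = 4, Memory = 8192, Networks = new[] { Net($"{hl.Vos.Subnet}.{60 + i}") }, /* ... */ };
}
```

It returns the same VosMachineConfig list as the other three projections, which is exactly why the switch can grow a fourth arm without disturbing them.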

The point is: the architecture is open to a fourth topology without breaking the first three.


The wiring

StandardTopologyResolver is [Injectable]. The Plan stage of the pipeline calls it once per pipeline run:

public async Task<Result<HomeLabContext>> RunAsync(HomeLabContext ctx, CancellationToken ct)
{
    // Fail fast on an unknown topology string instead of letting Enum.Parse throw.
    if (!Enum.TryParse<Topology>(ctx.Config!.Topology, ignoreCase: true, out var topology))
        return Result.Failure<HomeLabContext>($"Unknown topology '{ctx.Config.Topology}'");

    var machines = _resolver.Resolve(topology, ctx.Config);

    var ir = _projector.Project(ctx.Config, machines);
    var dag = _dagBuilder.Build(ir, machines);
    if (dag.IsFailure)
        return Result.Failure<HomeLabContext>(dag.Error);   // do not unwrap a failed Result

    return Result.Success(ctx with { Plan = new HomeLabPlan(IR: ir, Machines: machines, Dag: dag.Value) });
}

The compose contributors, the Traefik contributors, the Vagrantfile writer, the box registry, and the observability stack all consume ctx.Plan.Machines without caring how the list was produced.


The test

[Fact]
public void single_topology_returns_one_machine()
{
    var resolver = new StandardTopologyResolver();
    var machines = resolver.Resolve(Topology.Single, StandardConfig());
    machines.Should().ContainSingle();
    machines[0].Name.Should().EndWith("-main");
}

[Fact]
public void multi_topology_returns_four_machines_with_distinct_ips()
{
    var resolver = new StandardTopologyResolver();
    var machines = resolver.Resolve(Topology.Multi, StandardConfig());
    machines.Should().HaveCount(4);
    machines.Select(m => m.Networks[0].Ip).Should().OnlyHaveUniqueItems();
}

[Fact]
public void ha_topology_returns_at_least_nine_machines()
{
    var resolver = new StandardTopologyResolver();
    var machines = resolver.Resolve(Topology.Ha, StandardConfig());
    machines.Should().HaveCountGreaterThan(8);
    machines.Should().Contain(m => m.Name.EndsWith("-lb"));
    machines.Should().Contain(m => m.Name.Contains("rails"));
    machines.Should().Contain(m => m.Name.Contains("gitaly"));
    machines.Should().Contain(m => m.Name.Contains("pg"));
    machines.Should().Contain(m => m.Name.Contains("redis"));
}

[Fact]
public void same_compose_contributor_runs_for_all_topologies()
{
    var contributor = new GitLabComposeContributor(/* ... */);

    var single = new ComposeFile(); contributor.Contribute(single);
    var multi  = new ComposeFile(); contributor.Contribute(multi);
    var ha     = new ComposeFile(); contributor.Contribute(ha);

    // The contributor produces the same `gitlab` service definition regardless of topology
    single.Services["gitlab"].Image.Should().Be(multi.Services["gitlab"].Image);
    multi.Services["gitlab"].Image.Should().Be(ha.Services["gitlab"].Image);
}

The fourth test is the most important: it asserts that the contributors are topology-agnostic. The topology resolver decides where services land; the contributors decide what the services are. Separation of concerns.


What this gives you that bash doesn't

A bash script that supports three topologies is three bash scripts with shared functions sourced from a fourth. Every change to a service has to be made in N places. The scripts drift. The third topology gets abandoned within two quarters.

A typed topology resolver with shared contributors gives you, for the same surface area:

  • One config field to switch
  • Three deterministic projections to a typed VosMachineConfig list
  • Topology-agnostic contributors that produce the same services regardless of topology
  • An open design for a future fourth topology (K8s.Dsl-driven HA on Kubernetes)
  • Mermaid diagrams generated from the projections, used in docs and CLI output
  • Tests that lock the projections and assert the contributor independence

The bargain pays off the first time you switch from single to multi and watch DevLab redistribute the same services across four VMs without you editing a single compose file.


End of Act IV

We have now built every layer of the VM substrate: the Alpine base, the Docker overlay, the Podman overlay, the optional hardening, the Vagrant box registry, and the topology composition picker. With this in hand, homelab vos up boots one or more VMs with the right operating system, the right container engine, the right security posture, and the right place in the network topology — all from one config file.

Act V is the next layer up: the compose files that run inside those VMs, the Traefik config that routes between them, and the TLS + DNS that make the whole thing addressable.

