
Part 39: DevLab GitLab and Runners — Three Flavors, One Truth

"Yes, you can run HA GitLab without Kubernetes. The Reference Architecture has been a thing since GitLab 13."


Why

GitLab is the centerpiece of DevLab. It is also the service with the most variation across topologies:

  • Single-VM: GitLab Omnibus runs in one container on one VM, Postgres and MinIO are sibling containers on the same VM, Traefik fronts everything.
  • Multi-VM: Same GitLab Omnibus, but on a dedicated platform VM, with Postgres and MinIO on a separate data VM. The compose contributor for gitlab is the same; only the network names and the Postgres/MinIO hostnames differ (resolved via PiHole).
  • HA: The full GitLab Reference Architecture, on Omnibus, on bare VMs. No Kubernetes. Two Rails nodes behind HAProxy, three Gitaly nodes with Praefect, three Patroni Postgres nodes, three Redis Sentinel nodes, one Consul, MinIO. Approximately ten VMs.

The thesis of this part is: all three flavors share the same gitlab.rb generator (we wrote it in Part 24), parameterised by the topology. Each flavor has its own set of contributors that compose to a different compose-file layout. The runner registration flow is the same for all three.


Single-VM and Multi-VM: same compose service

Both flavors use the GitLabComposeContributor from Part 31. The difference between them is:

  • Single-VM: TargetVm = "main". Postgres host is postgres (sibling container in the same compose project).
  • Multi-VM: TargetVm = "platform". Postgres host is data.frenchexdev.lab (different VM, resolved via PiHole).

The compose contributor reads the topology from the config to decide which Postgres host to embed in the GitLab env vars:

public void Contribute(ComposeFile compose)
{
    var pgHost = _config.Topology switch
    {
        "single" => "postgres",
        "multi"  => $"data.{_config.Acme.Tld}",
        "ha"     => "patroni-vip.frenchexdev.lab",  // HAProxy in front of Patroni
        _ => throw new InvalidOperationException($"Unknown topology '{_config.Topology}'")
    };

    compose.Services["gitlab"] = new ComposeService
    {
        // ... as before ...
        Environment = new()
        {
            // The bulk of the Omnibus config lives in /etc/gitlab/gitlab.rb,
            // mounted from the host; only the topology-specific db_host is
            // injected here.
            ["GITLAB_OMNIBUS_CONFIG"] = $"gitlab_rails['db_host'] = '{pgHost}'"
        }
    };
}

The gitlab.rb itself is generated by GitLabRbGenerator (from Part 24), which also reads the topology and emits the right gitlab_rails['db_host'] and gitlab_rails['db_database'] accordingly.
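
Shape-wise, the topology branch inside the generator mirrors the contributor's switch. A minimal sketch, assuming only the GitLabRails dictionary that the tests at the end of this part exercise — the GitLabRb type and everything else here is illustrative, not the real Part 24 API:

```csharp
public sealed class GitLabRb
{
    public Dictionary<string, string> GitLabRails { get; } = new();
}

public sealed class GitLabRbGenerator
{
    // Sketch: only the db_host branch is shown; the real generator also
    // emits db_database, roles, and external_url.
    public GitLabRb Generate(DevLabConfig config)
    {
        var rb = new GitLabRb();
        rb.GitLabRails["db_host"] = config.Topology switch
        {
            "single" => "postgres",
            "multi"  => $"data.{config.Acme.Tld}",
            "ha"     => "patroni-vip.frenchexdev.lab",
            _ => throw new InvalidOperationException($"Unknown topology '{config.Topology}'")
        };
        rb.GitLabRails["db_database"] = "gitlabhq_production";
        return rb;
    }
}
```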


HA: a separate set of contributors

The HA topology has its own contributors. Each one targets a specific HA component:

[Injectable(ServiceLifetime.Singleton)]
public sealed class HaproxyComposeContributor : IComposeFileContributor
{
    public string TargetVm => "lb";
    public bool ShouldContribute() => _config.Topology == "ha";

    public void Contribute(ComposeFile compose)
    {
        compose.Services["haproxy"] = new ComposeService
        {
            Image = "haproxy:2.9-alpine",
            Restart = "always",
            Volumes = new() { "./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" },
            Ports = new() { "80:80", "443:443" },
            Networks = new() { "frontend" }
        };
        // The haproxy.cfg is generated by HaproxyConfigGenerator from the rails-1, rails-2 backends
    }
}
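
The comment above points at HaproxyConfigGenerator; its output is ordinary haproxy.cfg syntax. A sketch of what it might emit, assuming it receives the rails hostnames from the topology and does TLS passthrough (the method shape and section names are assumptions; termination could equally happen at HAProxy with a `bind ... ssl crt` line):

```csharp
// Illustrative sketch of HaproxyConfigGenerator's output: a round-robin
// TCP-passthrough backend over the rails nodes.
public static string HaproxyCfg(IReadOnlyList<string> railsHosts)
{
    var sb = new StringBuilder();
    sb.AppendLine("frontend https_in");
    sb.AppendLine("    bind *:443");
    sb.AppendLine("    mode tcp");
    sb.AppendLine("    default_backend gitlab_rails");
    sb.AppendLine("backend gitlab_rails");
    sb.AppendLine("    mode tcp");
    sb.AppendLine("    balance roundrobin");
    for (var i = 0; i < railsHosts.Count; i++)
        sb.AppendLine($"    server rails-{i + 1} {railsHosts[i]}:443 check");
    return sb.ToString();
}
```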

[Injectable(ServiceLifetime.Singleton)]
public sealed class GitLabHaRailsContributor : IComposeFileContributor
{
    public string TargetVm => "rails";   // the contributor is registered for both rails-1 and rails-2 by the partitioner
    public bool ShouldContribute() => _config.Topology == "ha";

    public void Contribute(ComposeFile compose)
    {
        compose.Services["gitlab-rails"] = new ComposeService
        {
            Image = "gitlab/gitlab-ce:16.11.0-ce.0",
            Restart = "always",
            // Rails-only role: this Omnibus instance has roles ['application_role', 'sidekiq_role']
            // and the gitlab.rb is generated to disable everything else (no postgres, no gitaly, no redis)
            Volumes = new() { "./gitlab-rails/config:/etc/gitlab" },
            Networks = new() { "platform" }
        };
    }
}

[Injectable(ServiceLifetime.Singleton)]
public sealed class GitalyClusterContributor : IComposeFileContributor
{
    public string TargetVm => "gitaly";   // gitaly-1, gitaly-2, gitaly-3
    public bool ShouldContribute() => _config.Topology == "ha";

    public void Contribute(ComposeFile compose)
    {
        compose.Services["gitaly"] = new ComposeService
        {
            Image = "gitlab/gitlab-ce:16.11.0-ce.0",
            // gitlab.rb sets gitaly['enable'] = true and disables every other role
            Volumes = new() { "./gitaly/config:/etc/gitlab", "gitaly_data:/var/opt/gitlab" },
            Networks = new() { "data-net" }
        };
        // TryAdd, not the indexer: a plain dictionary indexer read throws when the key is absent
        compose.Volumes.TryAdd("gitaly_data", new ComposeVolume { Driver = "local" });
    }
}

[Injectable(ServiceLifetime.Singleton)]
public sealed class PraefectContributor : IComposeFileContributor
{
    public string TargetVm => "gitaly";   // praefect runs alongside gitaly
    public bool ShouldContribute() => _config.Topology == "ha";

    public void Contribute(ComposeFile compose)
    {
        compose.Services["praefect"] = new ComposeService
        {
            Image = "gitlab/gitlab-ce:16.11.0-ce.0",
            // gitlab.rb: praefect['enable'] = true, virtual_storage with the three gitaly nodes
            Volumes = new() { "./praefect/config:/etc/gitlab" },
            Networks = new() { "data-net" }
        };
    }
}

[Injectable(ServiceLifetime.Singleton)]
public sealed class PatroniContributor : IComposeFileContributor
{
    public string TargetVm => "pg";   // pg-1, pg-2, pg-3
    public bool ShouldContribute() => _config.Topology == "ha";

    public void Contribute(ComposeFile compose)
    {
        compose.Services["patroni"] = new ComposeService
        {
            Image = "ghcr.io/zalando/patroni:3.3.0",
            Volumes = new()
            {
                "patroni_data:/var/lib/postgresql",
                "./patroni/patroni.yml:/etc/patroni/patroni.yml:ro"
            },
            Networks = new() { "data-net" },
            Environment = new()
            {
                ["PATRONI_NAME"] = "{{ .Vm.Name }}",
                ["PATRONI_SCOPE"] = "devlab-cluster",
                ["PATRONI_RESTAPI_LISTEN"] = "0.0.0.0:8008"
            }
        };
        compose.Volumes.TryAdd("patroni_data", new ComposeVolume { Driver = "local" });
    }
}
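
The patroni.yml mounted above can be generated the same way. A sketch, assuming Consul as the DCS (the lab has one Consul node) — the key names are Patroni's documented ones, the generator shape is illustrative:

```csharp
// Illustrative generator for the mounted patroni.yml. Key names follow
// standard Patroni configuration; scope and the REST API port match the
// compose environment above.
public static string PatroniYaml(string nodeName, string consulHost) => $"""
    scope: devlab-cluster
    name: {nodeName}
    restapi:
      listen: 0.0.0.0:8008
      connect_address: {nodeName}:8008
    consul:
      host: {consulHost}:8500
    postgresql:
      listen: 0.0.0.0:5432
      connect_address: {nodeName}:5432
      data_dir: /var/lib/postgresql/data
    """;
```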

[Injectable(ServiceLifetime.Singleton)]
public sealed class RedisSentinelContributor : IComposeFileContributor
{
    public string TargetVm => "redis";    // redis-1, redis-2, redis-3
    public bool ShouldContribute() => _config.Topology == "ha";

    public void Contribute(ComposeFile compose)
    {
        compose.Services["redis"] = new ComposeService
        {
            Image = "redis:7-alpine",
            Volumes = new() { "./redis/redis.conf:/etc/redis/redis.conf:ro" },
            Networks = new() { "data-net" }
        };
        compose.Services["sentinel"] = new ComposeService
        {
            Image = "redis:7-alpine",
            Volumes = new() { "./redis/sentinel.conf:/etc/redis/sentinel.conf:ro" },
            Networks = new() { "data-net" },
            Command = "redis-sentinel /etc/redis/sentinel.conf"
        };
    }
}
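
Both mounted config files can come from small generators as well. A sketch for the sentinel side — the directives are standard Redis Sentinel configuration, the method is illustrative:

```csharp
// Illustrative generator for the mounted sentinel.conf. A quorum of 2 means
// two of the three sentinels must agree before a failover starts.
public static string SentinelConf(string masterHost, int quorum = 2) =>
    string.Join('\n', new[]
    {
        "port 26379",
        $"sentinel monitor gitlab-redis {masterHost} 6379 {quorum}",
        "sentinel down-after-milliseconds gitlab-redis 5000",
        "sentinel failover-timeout gitlab-redis 60000"
    });
```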

Six new contributors, all gated by ShouldContribute() => _config.Topology == "ha". The single-VM and multi-VM contributors are gated the other way (ShouldContribute() => _config.Topology != "ha"). The two sets of contributors are mutually exclusive.

The key insight: the gitlab.rb generator is parameterised by topology and produces different configs for different roles. In single-VM mode, one gitlab.rb enables every Omnibus role. In HA mode, each role-specific Omnibus container gets a gitlab.rb that enables only the roles that VM is responsible for. The generator handles the difference; the compose contributors just mount the right gitlab.rb from the right path.
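
A sketch of the HA branch of that generator: one gitlab.rb fragment per VM kind, enabling only what that VM runs. The settings follow the comments in the contributors above (and are real Omnibus keys); the method shape is an assumption:

```csharp
// Illustrative: role-specific Omnibus config per HA VM kind. The pg and
// redis VMs run stock Patroni and Redis images, so no Omnibus config at all.
public string GenerateHaRb(string vmKind) => vmKind switch
{
    "rails"  => "roles ['application_role', 'sidekiq_role']",
    "gitaly" => "gitaly['enable'] = true\n" +
                "postgresql['enable'] = false\n" +
                "redis['enable'] = false\n" +
                "puma['enable'] = false\n" +
                "sidekiq['enable'] = false",
    _ => throw new InvalidOperationException($"No Omnibus config for VM kind '{vmKind}'")
};
```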


Runner registration: same flow for all three

The runner registration flow is identical regardless of topology. The runner needs:

  1. The CI server URL (always https://gitlab.frenchexdev.lab, regardless of where GitLab actually runs)
  2. The registration token (created via the GitLab API)
  3. The trusted CA cert (from the host's data/certs/ca.crt)

The request handler wires those three together:
[Injectable(ServiceLifetime.Singleton)]
public sealed class GitLabRunnerRegisterRequestHandler : IRequestHandler<GitLabRunnerRegisterRequest, Result<GitLabRunnerRegisterResponse>>
{
    private readonly IGitLabApi _api;
    private readonly ISecretStore _secrets;
    private readonly IDockerClient _docker;
    private readonly IHomeLabEventBus _events;

    public async Task<Result<GitLabRunnerRegisterResponse>> HandleAsync(GitLabRunnerRegisterRequest req, CancellationToken ct)
    {
        // 1. Read the admin PAT from the secret store
        var pat = await _secrets.ReadAsync("GITLAB_ROOT_PAT", ct);
        if (pat.IsFailure) return pat.Map<GitLabRunnerRegisterResponse>();

        // 2. Create the runner and obtain its authentication token via the runner creation API (GitLab 16+)
        var token = await _api.CreateRunnerAsync(
            new CreateRunnerRequest
            {
                Description = req.RunnerName,
                RunUntagged = true,
                Locked = false,
                AccessLevel = "ref_protected"
            },
            pat.Value, ct);
        if (token.IsFailure) return token.Map<GitLabRunnerRegisterResponse>();

        // 3. Update the runner config inside the platform VM
        var registerCmd = new[]
        {
            "register",
            "--non-interactive",
            "--url", $"https://gitlab.{_config.Acme.Tld}",
            "--token", token.Value.Token,
            "--executor", "docker",
            "--docker-image", "alpine:latest",
            "--description", req.RunnerName,
            "--tls-ca-file", "/etc/gitlab-runner/certs/ca.crt"
        };

        var execResult = await _docker.ExecAsync(
            container: "gitlab-runner",
            command: new[] { "gitlab-runner" }.Concat(registerCmd).ToArray(),
            ct: ct);
        if (execResult.IsFailure) return execResult.Map<GitLabRunnerRegisterResponse>();

        await _events.PublishAsync(new GitLabRunnerRegistered(req.RunnerName, "docker", DateTimeOffset.UtcNow), ct);
        return Result.Success(new GitLabRunnerRegisterResponse(req.RunnerName, token.Value.Token));
    }
}

The handler is topology-agnostic. In single-VM, the docker client targets the only VM. In multi-VM, it targets the platform VM. In HA, it targets the rails VM (where the runner registration is processed). The CI server URL is always https://gitlab.frenchexdev.lab because that hostname always resolves to the right place via PiHole.
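
That "targets the right VM" step can be pictured as one more switch. The VM names follow the contributors' TargetVm values; the helper itself is illustrative, not the real IDockerClient API:

```csharp
// Illustrative: where the docker client points the runner-registration exec,
// per topology. In HA the registration goes through one of the rails nodes.
private static string RunnerTargetVm(string topology) => topology switch
{
    "single" => "main",
    "multi"  => "platform",
    "ha"     => "rails-1",
    _ => throw new InvalidOperationException($"Unknown topology '{topology}'")
};
```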


Why no shared runners

GitLab.com offers shared runners. We do not use them. Reasons:

  1. Network: shared runners run on GitLab.com's infrastructure. They cannot reach https://baget.frenchexdev.lab (private), cannot mount the host's docker socket, cannot pull boxes from https://registry.frenchexdev.lab. The dogfood loops would all break.
  2. Privacy: a homelab project may contain secrets, customer data, or work-in-progress code. Sending it to a shared runner is unnecessarily risky.
  3. Cost: shared runner minutes are free up to a quota. The quota is small enough that a moderately active project hits it quickly.
  4. Reproducibility: a runner running on infrastructure that HomeLab provisioned is reproducible. A runner on shared infrastructure depends on whatever GitLab.com decides to run that day.

The runner is deliberately part of DevLab. The dogfood loop requires it.


The test

[Fact]
public void single_vm_topology_uses_local_postgres_host()
{
    var config = StandardConfig() with { Topology = "single" };
    var rb = new GitLabRbGenerator().Generate(config);
    rb.GitLabRails["db_host"].Should().Be("postgres");
}

[Fact]
public void multi_vm_topology_uses_data_vm_postgres_host()
{
    var config = StandardConfig() with { Topology = "multi", Acme = new() { Tld = "lab" } };
    var rb = new GitLabRbGenerator().Generate(config);
    rb.GitLabRails["db_host"].Should().Be("data.lab");
}

[Fact]
public void ha_topology_uses_patroni_vip_postgres_host()
{
    var config = StandardConfig() with { Topology = "ha" };
    var rb = new GitLabRbGenerator().Generate(config);
    rb.GitLabRails["db_host"].Should().Contain("patroni");
}

[Fact]
public void ha_topology_runs_six_extra_contributors()
{
    var config = StandardConfig() with { Topology = "ha" };
    var contributors = AllComposeContributors().Where(c => c.ShouldContribute(config)).ToList();

    contributors.Should().Contain(c => c is HaproxyComposeContributor);
    contributors.Should().Contain(c => c is GitLabHaRailsContributor);
    contributors.Should().Contain(c => c is GitalyClusterContributor);
    contributors.Should().Contain(c => c is PraefectContributor);
    contributors.Should().Contain(c => c is PatroniContributor);
    contributors.Should().Contain(c => c is RedisSentinelContributor);
}

[Fact]
public async Task runner_registration_works_against_any_topology()
{
    foreach (var topology in new[] { "single", "multi", "ha" })
    {
        using var lab = await TestLab.NewAsync(name: $"runner-{topology}", topology: topology);
        var result = await lab.Cli("gitlab", "runner", "register", "--name", "test-runner");
        result.ExitCode.Should().Be(0, $"runner registration failed in {topology} topology");
    }
}

What this gives you that bash doesn't

A bash script that brings up GitLab in three different topologies is really three different scripts, and each one drifts. The HA script in particular is the usual exhibit in any "we tried HA on bare VMs once" repo: a thousand lines of fragile shell that nobody has touched in a year.

A typed three-flavor GitLab story gives you, for the same surface area:

  • Three flavors built from a shared gitlab.rb generator and a topology-aware compose contributor set
  • HA without Kubernetes via the GitLab Reference Architecture on Omnibus
  • Same runner registration flow for all three
  • ShouldContribute() gating so HA contributors only run in HA mode
  • Tests that lock topology-specific generator output

The bargain pays back the first time someone says "can we test HA before we ship it" and you spin up an ha topology in parallel with the single developer instance and validate the upgrade path end to end.

