
Part 30: MinIO Operator

"One operator. Many tenants. Each tenant is a self-contained MinIO with its own credentials, its own buckets, its own ingress."


Why

MinIO has two deployment modes: standalone (one server, one disk, no HA) and distributed (multiple servers, multiple disks, erasure coding, HA). On Kubernetes, the MinIO Operator wraps the distributed mode in a Tenant CRD that handles the StatefulSet, the Service, the certificates, the bucket creation, the ingress, and the metrics.

The thesis: K8s.Dsl ships MinIoOperatorHelmReleaseContributor (the operator) and lets each workload contributor declare its own Tenant CRD instance. Each tenant is isolated from others by namespace, by Service, by credentials. GitLab gets one tenant; the application registry gets another; the backup system gets a third.


The operator

[Injectable(ServiceLifetime.Singleton)]
public sealed class MinIoOperatorHelmReleaseContributor : IHelmReleaseContributor
{
    private readonly AcmeConfig _config;   // injected deployment config (type name illustrative)

    public MinIoOperatorHelmReleaseContributor(AcmeConfig config) => _config = config;

    public string TargetCluster => "*";

    public void Contribute(KubernetesBundle bundle)
    {
        bundle.HelmReleases.Add(new HelmReleaseSpec
        {
            Name = "minio-operator",
            Namespace = "minio-operator",
            Chart = "minio-operator/operator",
            Version = "5.0.16",
            RepoUrl = "https://operator.min.io/",
            CreateNamespace = true,
            Wait = true,
            Values = new()
            {
                ["operator"] = new Dictionary<string, object?>
                {
                    ["replicaCount"] = _config.K8s?.Topology == "k8s-ha" ? 2 : 1,
                    ["resources"] = new Dictionary<string, object?>
                    {
                        ["requests"] = new Dictionary<string, object?> { ["cpu"] = "100m", ["memory"] = "200Mi" },
                        ["limits"] = new Dictionary<string, object?> { ["cpu"] = "500m", ["memory"] = "500Mi" }
                    }
                }
            }
        });
    }
}

A Tenant for GitLab

[Injectable(ServiceLifetime.Singleton)]
public sealed class GitLabMinIoTenantContributor : IK8sManifestContributor
{
    private readonly AcmeConfig _config;   // injected deployment config (type name illustrative)

    public GitLabMinIoTenantContributor(AcmeConfig config) => _config = config;

    public string TargetCluster => "*";

    public void Contribute(KubernetesBundle bundle)
    {
        bundle.Namespaces.TryAdd("gitlab-data", new NamespaceManifest { Name = "gitlab-data" });

        bundle.CrdInstances.Add(new RawManifest
        {
            ApiVersion = "minio.min.io/v2",
            Kind = "Tenant",
            Metadata = new() { Name = "gitlab-minio", Namespace = "gitlab-data" },
            Spec = new Dictionary<string, object?>
            {
                ["image"] = "quay.io/minio/minio:RELEASE.2025-01-15T00-00-00Z",

                ["pools"] = new[]
                {
                    new Dictionary<string, object?>
                    {
                        ["servers"] = _config.K8s?.Topology == "k8s-ha" ? 4 : 1,
                        ["volumesPerServer"] = _config.K8s?.Topology == "k8s-ha" ? 4 : 1,
                        ["volumeClaimTemplate"] = new Dictionary<string, object?>
                        {
                            ["metadata"] = new Dictionary<string, object?> { ["name"] = "data" },
                            ["spec"] = new Dictionary<string, object?>
                            {
                                ["accessModes"] = new[] { "ReadWriteOnce" },
                                ["resources"] = new Dictionary<string, object?>
                                {
                                    ["requests"] = new Dictionary<string, object?> { ["storage"] = "10Gi" }
                                },
                                ["storageClassName"] = "longhorn"
                            }
                        }
                    }
                },

                ["mountPath"] = "/data",
                ["requestAutoCert"] = true,    // operator generates per-tenant TLS automatically

                ["users"] = new[]
                {
                    new Dictionary<string, object?> { ["name"] = "gitlab-minio-credentials" }
                },

                ["buckets"] = new[]
                {
                    new Dictionary<string, object?> { ["name"] = "gitlab-artifacts" },
                    new Dictionary<string, object?> { ["name"] = "gitlab-lfs" },
                    new Dictionary<string, object?> { ["name"] = "gitlab-uploads" },
                    new Dictionary<string, object?> { ["name"] = "gitlab-packages" },
                    new Dictionary<string, object?> { ["name"] = "gitlab-registry" },
                    new Dictionary<string, object?> { ["name"] = "gitlab-postgres-backups" }
                },

                ["env"] = new[]
                {
                    new Dictionary<string, object?>
                    {
                        ["name"] = "MINIO_PROMETHEUS_AUTH_TYPE",
                        ["value"] = "public"
                    }
                }
            }
        });
    }
}

The tenant declares:

  • One pool of MinIO servers (4 servers × 4 volumes for HA, 1 server × 1 volume for non-HA)
  • Longhorn-backed PVCs of 10 Gi each
  • Auto-generated TLS via the operator's CA
  • A user secret (gitlab-minio-credentials) created by the secrets bridge from Part 10
  • Six buckets that the tenant pre-creates on first start
  • Public Prometheus metrics for the kube-prometheus-stack to scrape
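
The pool sizing above is easy to reason about with a small helper. A sketch, not code from the series: the topology switch mirrors the contributor, and the capacity math assumes MinIO's default standard parity of EC:4 for the 16-drive HA pool (an assumption to verify against your MinIO version; a single-drive pool has no erasure coding at all).

```csharp
using System;

public static class TenantSizing
{
    // Mirrors the contributor's topology switch: 4 servers x 4 volumes for HA,
    // 1 x 1 otherwise.
    public static (int Servers, int VolumesPerServer) PoolFor(string? topology) =>
        topology == "k8s-ha" ? (4, 4) : (1, 1);

    // Usable capacity in Gi for one erasure set: `drives` drives of `driveGi`
    // each, minus `parity` parity shards.
    public static int UsableGi(int drives, int parity, int driveGi) =>
        (drives - parity) * driveGi;
}
```

For the HA tenant that means 16 × 10 Gi raw, but with EC:4 only `UsableGi(16, 4, 10)` = 120 Gi of usable object storage. Worth knowing before you promise GitLab a registry quota.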

The buckets are created by the operator via an initContainer on the first server pod. They are idempotent — re-creating the tenant does not re-create existing buckets.


Tenant isolation

Multiple tenants in the same cluster do not see each other:

  • They are in different namespaces (gitlab-data, acme-app-data, velero-backups, etc.)
  • Each one has its own Service (the operator names the headless service <tenant-name>-hl.<namespace>.svc.cluster.local)
  • Each one has its own credentials (separate Secret per tenant)
  • Each one has its own TLS cert
  • NetworkPolicies in each namespace prevent cross-tenant traffic by default (the deny-all from Part 16)
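
Because the service naming is a fixed convention, consumers can derive a tenant's endpoint instead of hard-coding it. A small sketch (the helper name is mine; the pattern and default S3 port 9000 follow the convention above):

```csharp
public static class MinIoEndpoints
{
    // Builds the in-cluster endpoint for a tenant's headless service.
    // Pattern: <tenant>-hl.<namespace>.svc.cluster.local:9000
    public static string For(string tenant, string ns, bool tls = true) =>
        $"{(tls ? "https" : "http")}://{tenant}-hl.{ns}.svc.cluster.local:9000";
}
```

`MinIoEndpoints.For("gitlab-minio", "gitlab-data")` yields the same endpoint GitLab's contributor uses below.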

GitLab's compose contributor wires it up: the endpoint is https://gitlab-minio-hl.gitlab-data.svc.cluster.local:9000, and the credentials come from the gitlab-minio-credentials secret. No other namespace can reach this MinIO unless we add an explicit allow-rule.
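
Such an allow-rule can be expressed in the same nested-dictionary style as the Tenant manifest. A hypothetical sketch, not code from the series: the GitLab namespace name and the tenant pod label are assumptions to check against your cluster (the operator labels tenant pods with v1.min.io/tenant in current versions).

```csharp
using System.Collections.Generic;

public static class GitLabMinIoNetworkPolicy
{
    // Builds a NetworkPolicy (as nested dictionaries, matching the DSL's
    // RawManifest style) that lets pods in the gitlab namespace reach the
    // tenant's S3 port in gitlab-data. Label values here are illustrative.
    public static Dictionary<string, object?> Build() => new()
    {
        ["apiVersion"] = "networking.k8s.io/v1",
        ["kind"] = "NetworkPolicy",
        ["metadata"] = new Dictionary<string, object?>
        {
            ["name"] = "allow-gitlab-to-minio",
            ["namespace"] = "gitlab-data"
        },
        ["spec"] = new Dictionary<string, object?>
        {
            // Select the tenant's pods; v1.min.io/tenant is the label the
            // operator applies (assumption: verify for your operator version).
            ["podSelector"] = new Dictionary<string, object?>
            {
                ["matchLabels"] = new Dictionary<string, object?>
                {
                    ["v1.min.io/tenant"] = "gitlab-minio"
                }
            },
            ["ingress"] = new[]
            {
                new Dictionary<string, object?>
                {
                    ["from"] = new[]
                    {
                        new Dictionary<string, object?>
                        {
                            // kubernetes.io/metadata.name is the well-known
                            // auto-applied namespace label.
                            ["namespaceSelector"] = new Dictionary<string, object?>
                            {
                                ["matchLabels"] = new Dictionary<string, object?>
                                {
                                    ["kubernetes.io/metadata.name"] = "gitlab"
                                }
                            }
                        }
                    },
                    ["ports"] = new[]
                    {
                        new Dictionary<string, object?> { ["port"] = 9000, ["protocol"] = "TCP" }
                    }
                }
            }
        }
    };
}
```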


Multiple tenants per cluster

A typical Acme cluster has at least three MinIO tenants:

  Tenant           Namespace        Purpose
  gitlab-minio     gitlab-data      GitLab artifact / LFS / registry / backup buckets
  acme-app-minio   acme-prod        The application's user-uploaded files
  velero-minio     velero-system    Velero cluster backups

Each one is a separate Tenant CRD. The operator handles all three independently. There is no shared MinIO that everything writes to — separation is structural.
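
Since all three tenants share the same shape, a small shared builder keeps each contributor thin. A sketch with hypothetical names, not code from the series: each contributor would supply only a name, namespace, bucket list, and the HA flag, and splice the result into its RawManifest spec.

```csharp
using System.Collections.Generic;
using System.Linq;

// Describes one tenant; everything else is convention.
public sealed record TenantRequest(string Name, string Namespace, string[] Buckets, bool Ha);

public static class TenantSpecs
{
    // Builds the common part of a Tenant spec (pools, TLS, buckets).
    // Volume claim templates, users, and env vars would be layered on top.
    public static Dictionary<string, object?> Build(TenantRequest r) => new()
    {
        ["requestAutoCert"] = true,
        ["mountPath"] = "/data",
        ["pools"] = new[]
        {
            new Dictionary<string, object?>
            {
                ["servers"] = r.Ha ? 4 : 1,
                ["volumesPerServer"] = r.Ha ? 4 : 1
            }
        },
        ["buckets"] = r.Buckets
            .Select(b => (object)new Dictionary<string, object?> { ["name"] = b })
            .ToArray()
    };
}
```

The design choice: the convention lives in one place, and a fourth tenant is one `TenantRequest` away instead of another hundred lines of dictionaries.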


What this gives you that running MinIO as a StatefulSet doesn't

A hand-rolled MinIO StatefulSet works. It does not handle: tenant isolation, automatic TLS, bucket creation, distributed mode setup, operator-managed upgrades, automated user creation, lifecycle policies via CRD.

The MinIO operator gives you, for the same surface area:

  • Tenant CRD that bundles every concern
  • Auto-TLS via the operator's CA
  • Auto-bucket creation via init containers
  • Distributed mode for HA tenants
  • Multi-tenant isolation via namespaces and Services
  • Standard upgrades via helm upgrade minio-operator

The operator pays for itself the first time you stand up three MinIO instances on the same cluster without writing a single bash script.

