Part 17: CSI — local-path vs Longhorn vs OpenEBS

"Pick the simplest CSI that supports the access modes your workloads need. local-path is the simplest. Longhorn is the next step."


Why

Persistent volumes are the second most common reason a workload behaves differently in dev than in production. Real CSI drivers replicate, snapshot, expand, and survive node loss. The toy hostPath provisioner does none of those things. If your production cluster uses a serious CSI (Longhorn, Rook/Ceph, OpenEBS, the cloud provider's), your dev cluster should use the same kind of CSI — or at least one that supports the same access modes — or your PersistentVolumeClaim test is a lie.

The thesis: K8s.Dsl ships three CSI contributors. local-path for solo k8s-single. Longhorn for k8s-multi and k8s-ha (the realistic default). OpenEBS as a third option for users who want it. The user picks via k8s.csi. The default StorageClass is set automatically so downstream Helm charts (CloudNativePG, MinIO, GitLab) get the right one without per-chart configuration.
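
What selection looks like from the user's side — a sketch, assuming a YAML configuration surface; the `k8s.csi`, `k8s.topology`, and `k8s.openebs_engine` keys are the ones this article names, the file shape around them is illustrative:

```yaml
# Illustrative config fragment; key names come from the DSL,
# the surrounding file shape is assumed.
k8s:
  topology: k8s-multi        # k8s-single | k8s-multi | k8s-ha
  csi: longhorn              # local-path | longhorn | openebs (default follows topology)
  # openebs_engine: mayastor # only consulted when csi is openebs
```

Omitting `k8s.csi` entirely falls back to the topology default: local-path for k8s-single, Longhorn for k8s-multi and k8s-ha.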


The shape

[Injectable(ServiceLifetime.Singleton)]
public sealed class LocalPathCsiContributor : IK8sManifestContributor
{
    public string TargetCluster => "*";
    public bool ShouldContribute() =>
        (_config.K8s?.Csi ?? DefaultCsiForTopology()) == "local-path";

    private string DefaultCsiForTopology()
    {
        return _config.K8s?.Topology switch
        {
            "k8s-single" => "local-path",
            "k8s-multi"  => "longhorn",
            "k8s-ha"     => "longhorn",
            _ => "local-path"
        };
    }

    public void Contribute(KubernetesBundle bundle)
    {
        if (!ShouldContribute()) return;

        // local-path-provisioner is a small DaemonSet from rancher
        var manifest = EmbeddedResources.LoadLocalPathProvisionerManifest();
        bundle.CrdInstances.AddRange(KubernetesYamlDeserializer.SplitDocuments(manifest));

        // Mark local-path as the default StorageClass
        bundle.CrdInstances.Add(new RawManifest
        {
            ApiVersion = "storage.k8s.io/v1",
            Kind = "StorageClass",
            Metadata = new()
            {
                Name = "local-path",
                Annotations = new() { ["storageclass.kubernetes.io/is-default-class"] = "true" }
            },
            Spec = new Dictionary<string, object?>
            {
                ["provisioner"] = "rancher.io/local-path",
                ["volumeBindingMode"] = "WaitForFirstConsumer",
                ["reclaimPolicy"] = "Delete"
            }
        });
    }
}
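
For reference, the StorageClass the contributor adds serializes to the manifest below — assuming the RawManifest serializer flattens the Spec dictionary to top-level fields, which it must, since StorageClass has no spec block:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer   # delay binding until a pod is scheduled
reclaimPolicy: Delete
```

WaitForFirstConsumer matters for local volumes: binding is delayed until a pod is scheduled, so the volume is provisioned on the node where the pod actually runs rather than on an arbitrary one.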

[Injectable(ServiceLifetime.Singleton)]
public sealed class LonghornHelmReleaseContributor : IHelmReleaseContributor
{
    public string TargetCluster => "*";
    public bool ShouldContribute() =>
        (_config.K8s?.Csi ?? DefaultCsiForTopology()) == "longhorn";

    // Same topology-based fallback as LocalPathCsiContributor uses.
    private string DefaultCsiForTopology() =>
        _config.K8s?.Topology is "k8s-multi" or "k8s-ha" ? "longhorn" : "local-path";

    public void Contribute(KubernetesBundle bundle)
    {
        if (!ShouldContribute()) return;

        bundle.HelmReleases.Add(new HelmReleaseSpec
        {
            Name = "longhorn",
            Namespace = "longhorn-system",
            Chart = "longhorn/longhorn",
            Version = "1.7.2",
            RepoUrl = "https://charts.longhorn.io",
            CreateNamespace = true,
            Wait = true,
            Timeout = TimeSpan.FromMinutes(15),
            Values = new()
            {
                ["persistence"] = new Dictionary<string, object?>
                {
                    ["defaultClass"] = true,
                    ["defaultClassReplicaCount"] = _config.K8s?.Topology == "k8s-single" ? 1 : 3,
                    ["reclaimPolicy"] = "Delete"
                },
                ["defaultSettings"] = new Dictionary<string, object?>
                {
                    ["defaultDataPath"] = "/var/lib/longhorn",
                    ["defaultReplicaCount"] = _config.K8s?.Topology == "k8s-single" ? 1 : 3,
                    ["createDefaultDiskLabeledNodes"] = false,
                    ["replicaSoftAntiAffinity"] = false   // hard anti-affinity: replicas never share a node (a no-op with 1 replica)
                },
                ["ingress"] = new Dictionary<string, object?>
                {
                    ["enabled"] = true,
                    ["host"] = $"longhorn.{_config.Acme.Tld}",
                    ["tls"] = true,
                    ["tlsSecret"] = "longhorn-tls"
                }
            }
        });
    }
}

The Longhorn contributor sets the default StorageClass to itself with the right replica count for the topology (1 for single-node, 3 for multi/HA). Downstream charts (CloudNativePG, MinIO) that allocate PVCs without specifying a storageClassName get Longhorn-backed volumes automatically.
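
Concretely, a downstream chart's claim with no storageClassName — this is a plain Kubernetes manifest, not K8s.Dsl output, and the name is illustrative — picks up whichever class is marked default:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata              # illustrative name
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  # storageClassName omitted → the default StorageClass (Longhorn here) is used
```
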


Why Longhorn over Rook/Ceph

Rook/Ceph is the industrial-strength option. It is also overkill for a dev cluster: Rook installs ~20 pods, needs at least three nodes with raw block devices, and the operational overhead is non-trivial. Longhorn is the right size for dev: ~5 pods total, runs on regular filesystems, and covers the RWO/RWX access modes the workloads we care about actually use. For users whose production runs Ceph, a dev cluster on Longhorn is a slight downgrade in fidelity but a large upgrade in cost of ownership.

A future plugin could ship RookCephCsiContributor for users who want Rook in dev. K8s.Dsl core ships local-path and Longhorn.


OpenEBS — the third option

OpenEBS supports multiple "engines" (Mayastor, cStor, Jiva, LocalPV). The Mayastor engine is competitive with Longhorn for performance and uses NVMe-over-Fabrics under the hood. The LocalPV engine is essentially local-path with PVC-aware lifecycle management. K8s.Dsl ships an OpenEbsHelmReleaseContributor that defaults to LocalPV; users who want Mayastor switch via k8s.openebs_engine.
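
Switching engines is a one-key change — a sketch, assuming the same YAML config surface as `k8s.csi`; `k8s.openebs_engine` is named in the text, the file shape is illustrative:

```yaml
# Illustrative: select OpenEBS with the Mayastor engine.
k8s:
  csi: openebs
  openebs_engine: mayastor   # LocalPV is the engine when this key is omitted
```
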

[Injectable(ServiceLifetime.Singleton)]
public sealed class OpenEbsHelmReleaseContributor : IHelmReleaseContributor
{
    public string TargetCluster => "*";
    public bool ShouldContribute() => _config.K8s?.Csi == "openebs";

    public void Contribute(KubernetesBundle bundle)
    {
        if (!ShouldContribute()) return;

        bundle.HelmReleases.Add(new HelmReleaseSpec
        {
            Name = "openebs",
            Namespace = "openebs",
            Chart = "openebs/openebs",
            Version = "4.1.2",
            RepoUrl = "https://openebs.github.io/charts",
            CreateNamespace = true,
            Values = new()
            {
                ["engines"] = new Dictionary<string, object?>
                {
                    ["local"] = new Dictionary<string, object?>
                    {
                        ["lvm"] = new Dictionary<string, object?> { ["enabled"] = false },
                        ["zfs"] = new Dictionary<string, object?> { ["enabled"] = false }
                    },
                    ["replicated"] = new Dictionary<string, object?>
                    {
                        ["mayastor"] = new Dictionary<string, object?>
                        {
                            ["enabled"] = _config.K8s?.OpenEbsEngine == "mayastor"
                        }
                    }
                }
            }
        });
    }
}

The default StorageClass collision problem

Multiple CSI drivers can each declare themselves the default StorageClass. Kubernetes tolerates more than one default (recent versions hand new PVCs to the most recently created default class; older versions rejected the claim outright), so the result depends on install order rather than intent. The architecture test catches this:

[Fact]
public void only_one_storage_class_is_marked_default_in_the_final_bundle()
{
    var bundle = new KubernetesBundle();
    foreach (var c in EnabledCsiContributors()) c.Contribute(bundle);

    var defaults = bundle.CrdInstances
        .Where(m => m.Kind == "StorageClass")
        .Where(m => (m.Metadata?.Annotations?.GetValueOrDefault("storageclass.kubernetes.io/is-default-class") ?? "false") == "true");

    defaults.Should().ContainSingle("only one default StorageClass should exist after applying all enabled CSI contributors");
}

The test runs at unit-test speed. It exercises the contributor enable-flag logic and ensures that exactly one CSI's default-class flag is set.
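
The failure mode the test forbids, spelled out — two classes both carrying the default annotation (a hand-written fragment, not generated output):

```yaml
# Two StorageClasses both claiming default — which one a PVC gets
# depends on creation order, not intent.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
```
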


What this gives you that hostPath doesn't

A toy hostPath provisioner gives you a directory on the node. It does not survive node deletion, does not support snapshots, does not support resize, and does not support RWX. Your production CloudNativePG cluster runs on real volumes with snapshots; a dev CloudNativePG on hostPath is different software for the purposes of testing.

A typed CSI contributor set gives you, for the same surface area:

  • A real CSI driver (Longhorn) that supports the same access modes as production
  • Topology-aware defaults (1 replica for single-node, 3 for multi-node)
  • Automatic default StorageClass so downstream charts do not need per-chart configuration
  • Architecture test that prevents two CSIs from claiming the default
  • Plugin extensibility for Rook/Ceph, Portworx, or any other future driver

The bargain pays back the first time you take a Longhorn snapshot in dev to test your backup workflow and the snapshot actually exists on disk.
