rancher VS longhorn

Compare rancher vs longhorn and see what their differences are.

                 rancher              longhorn
Mentions         89                   77
Stars            22,559               5,612
Growth           0.6%                 2.2%
Activity         9.9                  9.4
Latest commit    7 days ago           5 days ago
Language         Go                   Shell
License          Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

rancher

Posts with mentions or reviews of rancher. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-25.
  • OpenTF Announces Fork of Terraform
    28 projects | news.ycombinator.com | 25 Aug 2023
    Did something happen to the Apache 2 rancher? https://github.com/rancher/rancher/blob/v2.7.5/LICENSE RKE2 is similarly Apache 2: https://github.com/rancher/rke2/blob/v1.26.7%2Brke2r1/LICENS...
  • Kubernetes / Rancher 2, mongo-replicaset with Local Storage Volume deployment
    1 project | /r/codehunter | 14 Jun 2023
    I follow the 4 ABCD steps below, but the first pod deployment never ends. What's wrong with it? Logs and result screens are at the end. Detailed configuration can be found here.
  • Trouble with RKE2 HA Setup: Part 2
    2 projects | /r/rancher | 8 May 2023
  • Critical vulnerability (CVE-2023-22651) in Rancher 2.7.2 - Update to 2.7.3
    1 project | /r/rancher | 24 Apr 2023
    CVE-2023-22651 is rated 9.9/10: https://github.com/rancher/rancher/security/advisories/GHSA-6m9f-pj6w-w87g
  • What's your take if DevOps colleague always got new initiative / idea?
    1 project | /r/devops | 17 Apr 2023
    Depends. When I came into my last company I immediately noticed the lack of reproducible environments. Brought this up a few times and was met with some resistance because "we didn't have the capacity"... Until prod went down and it took us 23 hours to bring it back up due to spaghetti terraform.
  • Questions about Rancher Launched/imported AKS
    1 project | /r/rancher | 14 Apr 2023
    For the latest releases of rancher: https://github.com/rancher/rancher/releases

    Q: When is Rancher 2.7.1 going to be released? The Rancher support matrix for 2.7.1 shows k8s v1.24.6 as the highest supported version and Azure will drop AKS v1.24 in a few months... Should this be a concern for us? What could happen if we create our cluster with Rancher for an unsupported K8s version, 1.25 for example?
    A: Rancher 2.7.2 just got released, including support for 1.25. I have, however, tested running unsupported versions before; unless there are major deprecations in the Kubernetes API it is fine in my experience.

    Q: If we move to AKS imported clusters, in case we add node pools and upgrade the cluster, will those changes be reflected in the Rancher platform?
    A: Yep!

    Q: If we face some issues by running an unsupported K8s version on Rancher-launched K8s clusters, is it possible to remove it from Rancher, do the stuff we need, and then import it into the platform?
    A: Yes, however be careful and do testing before doing it in prod. Off the top of my head: remove the cluster from Rancher (if imported); if Rancher created it, you might want to revoke Rancher's SA key for the cluster first (so it can't remove it). Delete the cattle-system namespace, and any other cattle-* namespaces you don't want to keep. And do your thing (a rough cleanup sketch appears after this list of posts).

    Q: It looks like AKS is faster than Rancher regarding supported Kubernetes versions... We would like to know if Rancher will always be on track with AKS regarding the removal of K8s version support and new versions.
    A: In my experience, yes. (Been using Rancher on all three clouds for about 4 years now.)

    Q: What exactly are the big differences between imported AKS and Rancher-launched AKS? What should we look at, and what issues can we face when using one or the other?
    A: The main difference is that Rancher will not be able to upgrade the cluster for you. You will have to do that yourself.
  • rancher2_bootstrap.admin resource fail after Kubernetes v1.23.15
    1 project | /r/rancher | 29 Mar 2023
    variable "rancher" { type = object({ namespace = string version = string branch = string chart_set = list(object({ name = string value = string })) }) default = { namespace = "cattle-system" # There is a bug with destroying the cloud credentials in version 2.6.9 until 2.7.1 and will be fixed in next release 2.7.2. # See https://github.com/rancher/rancher/issues/39300 version = "2.7.0" branch = "stable" chart_set = [ { name = "replicas" value = 3 }, { name = "ingress.ingressClassName" value = "nginx-external" }, { name = "ingress.tls.source" value = "rancher" }, # There is a bug with the uninstallation of Rancher due to missing priorityClassName of rancher-webhook # The priorityClassName need to be set # See https://github.com/rancher/rancher/issues/40935 { name = "priorityClassName" value = "system-node-critical" } ] } description = "Rancher Helm chart properties." }
  • Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow
    1 project | /r/Futurology | 22 Mar 2023
    When I searched DuckDuckGo instead, the 12th link actually had the real answer. It's in this issue on Rancher's GitHub. Turns out the Rancher admin needs to be in all of the Keycloak groups they want to have show up in the auto-populated picklist in Rancher. Being a Keycloak admin and even creating the groups isn't good enough. Frustratingly, the "caveat" note the Rancher guy is pointing to, which says this, is only present in the guide to setting up Keycloak for SAML, but apparently it is also true for OIDC.
  • How to enable TLS 1.3 protocol
    1 project | /r/networking | 14 Mar 2023
    Explicitly set TLS 1.3 in Rancher, though it could be a bug in Rancher: https://github.com/rancher/rancher/issues/35654
  • Rancher deployment, hanging on login and setup pages
    1 project | /r/rancher | 23 Feb 2023
    Thanks. Yeah looks like this might work: https://github.com/rancher/rancher/releases/tag/v2.7.2-rc3
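
The cattle-* namespace cleanup described in the imported-AKS thread above can be scripted. The sketch below is a loose illustration rather than an official Rancher procedure: it assumes kubectl is pointed at the downstream cluster and that the cluster has already been removed from (or was never registered with) the Rancher management server.

# Confirm kubectl is talking to the downstream cluster, not the Rancher management cluster.
kubectl config current-context

# See which Rancher-managed namespaces are left behind.
kubectl get namespaces -o name | grep '^namespace/cattle-'

# Delete cattle-system and any other cattle-* namespaces you don't want to keep.
kubectl get namespaces -o name \
  | grep '^namespace/cattle-' \
  | xargs kubectl delete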

longhorn

Posts with mentions or reviews of longhorn. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-15.
  • Diskomator – NVMe-TCP at your fingertips
    3 projects | news.ycombinator.com | 15 Nov 2023
    I'm looking forward to Longhorn[1] taking advantage of this technology.

    [1]: https://github.com/longhorn/longhorn

  • K3s – Lightweight Kubernetes
    17 projects | news.ycombinator.com | 11 Oct 2023
    I've been using a 3-node k3s cluster (NUC-style boxes, actually Ryzen devices) on SuSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions on which parts of k8s to trim down and which networking / LB / ingress to use.

    The option to use SQLite in place of etcd makes it super interesting for even lighter-weight, single-node homelab container setups.

    I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster.

    If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.

    I'd love to compare it against Talos https://www.talos.dev/ but their lack of support for a persistent storage partition (only separate storage device) really hurts most small home / office usage I'd want to try.

  • Difference between snapshot-cleanup and snapshot-delete in Longhorn recurring job?
    1 project | /r/rancher | 26 Sep 2023
    Hi, I was wondering the same. I found more information in this document: https://github.com/longhorn/longhorn/blob/v1.5.x/enhancements/20230103-recurring-snapshot-cleanup.md (a minimal RecurringJob sketch appears after this list of posts).
  • The Next Gen Database Servers Powering Let's Encrypt(2021)
    5 projects | news.ycombinator.com | 17 Sep 2023
    Like most people on r/homelab, I started out with Plex. Rough timeline/services below:

    0. Got a Synology DS413 with 4x WD Red 3TB drives. Use Playstation Media Server to stream videos from it. Eventually find some Busybox stuff to add various functionality to the NAS, but it had a habit of undoing them periodically, which was frustrating. I also experienced my first and (knock on wood) only drive failure during this time, which concluded without fanfare once the faulty drive was replaced, and the array repaired itself.

    1. While teaching myself Python as an Electrical Distribution Engineer at a utility, I befriended the IT head, who gave me an ancient (I think Nehalem? Quad-core Xeon) Dell T310. Promptly got more drives, totaling 7, and tried various OS / NAS platforms. I had OpenMediaVault for a while, but got tired of the UI fighting me when I knew how to do things in shell, so I switched to Debian (which it's based on anyway). Moved to MergerFS [0] + SnapRAID [1] for storage management, and Plex for media. I was also tinkering with various Linux stuff on it constantly.

    1.1 Got tired of my tinkering breaking things and requiring troubleshooting/fixing (in retrospect, this provided excellent learning), so I installed Proxmox, reinstalled Debian, and made a golden image with everything set up as desired so I could easily revert.

    1.2 A friend told me about Docker. I promptly moved Plex over to it, and probably around this time also got the *Arr Stack [2] going.

    2. Got a Supermicro X9DRi-LN4F+ in a 2U chassis w/ 12x 3.5" bays. Got faster/bigger CPUs (E5-2680v2), more RAM, more drives, etc. Shifted container management to Docker Compose. Modded the BIOS to allow it to boot from a NVMe drive on a PCIe adapter.

    2.1 Shifted to ZFS on Debian. Other than DKMS occasionally losing its mind during kernel upgrades, this worked well.

    2.2 Forked [3] some [4] Packer/Ansible projects to suit my needs, made a VM for everything. NAS, Dev, Webserver, Docker host, etc. Other than outgrowing (IMO) MergerFS/SnapRAID, honestly at this point I could have easily stopped, and could to this day revert back to this setup. It was dead reliable and worked extremely well. IIRC I was also playing with Terraform at this time.

    2.3 Successfully broke into tech (Associate SRE) as a mid-career shift, due largely (according to the hiring manager) to what I had done with my homelab. Hooray for hobbies paying off.

    3. Got a single Dell R620. I think the idea was to install either pfSense or VyOS on it, but that never came to fruition. Networking was from a Unifi USG (their tiny router + firewall + switch) and 8-port switch, with some AC Pro APs.

    4. Got two more R620s. Kubernetes all the things. Each one runs Proxmox in a 3-node cluster with two VMs - a control plane, and worker.

    4.0.1 Perhaps worth noting here that I thoroughly tested my migration plan via spinning up some VMs in, IIRC, Digital Ocean that mimicked my home setup. I successfully ran it twice, which was good enough for me.

    4.1 Played with Ceph via Rook, but a. disliked (and still to this day) running storage for everything out of K8s b. kept getting clock skew between nodes. Someone on Reddit mentioned it was my low-power C-state settings, but since that was saving me something like ~50 watts/node, I didn't want to deal with the higher power/heat. I landed on Longhorn [5] for cluster storage (i.e. anything that wasn't being handled by the ZFS pool), which was fine, but slow. SATA SSDs (used Intel enterprise drives with PLP, if you're wondering) over GbE aren't super fast, but they should be able to exceed 30 MBps.

    4.1.1 Again, worth noting that I spent literally a week poring over every bit of Ceph documentation I could find, from the Red Hat stuff to random Wikis and blog posts. It's not something you just jump into, IMO, and most of the horror stories I read boiled down to "you didn't follow the recommended practices."

    5. Got a newer Supermicro, an X11SSH-F, thinking that it would save power consumption over the older dual-socket I had for the NAS. It turned out to not make a big difference. For some reason I don't recall, I had a second X9DRi-LN4F+ mobo, so I sold the other one with the faster CPUs, bought some cheaper CPUs for the other one, and bought more drives for it. It's now a backup target that boots up daily to ingest ZFS snapshots. I have 100% on-site backups for everything. Important things (i.e. anything that I can't get from a torrent) are also off-site.

    6. Got some Samsung PM863 NVMe SSDs mounted on PCIe adapters for the Dells, and set up Ceph, but this time handled by Proxmox. It's dead easy, and for whatever reason isn't troubled by the same clock skew issues as I had previously. Still in the process of shifting cluster storage from Longhorn, but I have been successfully using Ceph block storage as fast (1 GbE, anyway - a 10G switch is on the horizon) storage for databases.

    So specifically, you asked what I do with the hardware. What I do, as far as my family is concerned, is block ads and serve media. On a more useful level, I try things out related to my job, most recently database-related (I moved from SRE to DBRE a year ago). I have MySQL and Postgres running, and am constantly playing with them. Can you actually do a live buffer pool resize in MySQL? (yes) Is XFS actually faster than ext4 for large DROP TABLE operations? (yes, but not by much) Is it faster to shut down a MySQL server and roll back to a previous ZFS snapshot than to roll back a big transaction? (often yes, although obviously a full shutdown has its own problems) Does Postgres suffer from the same write performance issue as MySQL with random PKs like UUIDv4, despite not clustering by default? (yes, but not to the same extent - still enough to matter, and you should use UUIDv7 if you absolutely need them)

    I legitimately love this stuff. I could quite easily make do without a fancy enclosed rack and multiple servers, but I like them, so I have them. The fact that it tends to help my professional growth out at the same time is a bonus.

    [0]: https://github.com/trapexit/mergerfs

    [1]: https://www.snapraid.it

    [2]: https://wiki.servarr.com

    [3]: https://github.com/stephanGarland/packer-proxmox-templates

    [4]: https://github.com/stephanGarland/ansible-initial-server

    [5]: https://longhorn.io

  • Ask HN: Any of you run Kubernetes clusters in-house?
    1 project | news.ycombinator.com | 2 Sep 2023
    Been running k3s for personal projects etc. for some time now on a cluster of Raspberry Pis. Why Pis? They were cheap at the time and I wanted to play with ARM. I don't think I would suggest them right now; NUCs will be much better value for money.

    Some notes:

    Using helm and helmfile https://github.com/helmfile/helmfile for deployments. Seems to work pretty nicely and is pretty flexible.

    As I'm using a consumer internet provider, ingress is done through Cloudflare tunnels https://github.com/cloudflare/cloudflare-ingress-controller in order to not have to deal with IP changes and not have to expose ports.

    Persistent volumes were my main issue when previously attempting this, and what changed everything for me was Longhorn. https://longhorn.io Make sure to back up your volumes.

    Really hyped for https://docs.computeblade.com/ xD

  • Container redundancy with multiple Unraid servers?
    1 project | /r/unRAID | 6 Jun 2023
    But if you really want high availability, then roll a Kubernetes cluster and run clustered storage such as longhorn.io or Rook/Ceph.
  • I created UltimateHomeServer - A K3s based all-in-one home server solution
    8 projects | /r/selfhosted | 28 May 2023
  • What alternatives are there to Longhorn?
    3 projects | /r/kubernetes | 15 May 2023
    I was mainly referring to this one https://github.com/longhorn/longhorn/discussions/5931 but yeah I peeked into that one too. I'm not at my computer at the moment, how do I provide a support bundle?
  • How do I clean up a Longhorn volume? Trimming the volume doesn't work, "cannot find a valid mountpoint for volume"
    1 project | /r/kubernetes | 26 Apr 2023
    If it's RWX, Longhorn 1.5.0 will support that: https://github.com/longhorn/longhorn/issues/5143 (an example RWX PersistentVolumeClaim appears after this list of posts).
  • Setting Up Kubernetes Cluster with K3S
    3 projects | dev.to | 18 Apr 2023
    You have now deployed an enterprise-grade Kubernetes cluster with k3s, and you can start putting workloads on it. Some components to take note of: for ingress, you already have Traefik installed; Longhorn will handle storage; and containerd is the container runtime engine. A condensed command sketch of this setup follows below.
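
As a condensed version of the setup described in the last post, a single-node install might look like the sketch below. The commands follow the standard k3s installer and the Longhorn Helm chart, but treat repository URLs, chart names, and paths as assumptions to verify against the k3s and Longhorn docs rather than as a definitive recipe.

# Install k3s (single node; Traefik ships as the default ingress controller).
curl -sfL https://get.k3s.io | sh -

# Point kubectl and helm at the new cluster.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Install Longhorn for persistent storage via its Helm chart.
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace

# Longhorn registers a "longhorn" StorageClass that PersistentVolumeClaims can use.
kubectl get storageclass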
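
For the snapshot-cleanup vs. snapshot-delete question above, the behaviour is chosen per recurring job via its task field. The manifest below is a minimal sketch written from memory of the Longhorn 1.5 RecurringJob CRD; the job name, schedule, and group are made up, and the field names should be checked against the enhancement doc linked in that post.

kubectl apply -f - <<'EOF'
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-snapshot-cleanup   # hypothetical job name
  namespace: longhorn-system
spec:
  task: snapshot-cleanup   # removes system-generated snapshots; snapshot-delete instead prunes snapshots beyond "retain"
  cron: "0 3 * * *"        # every night at 03:00
  retain: 1
  concurrency: 2
  groups:
    - default              # applies to volumes in the default group
EOF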
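
And for the RWX question above: with Longhorn installed, a ReadWriteMany claim is an ordinary PVC that uses the Longhorn StorageClass, which Longhorn then serves through a share-manager (NFS) pod. A minimal sketch, with an illustrative name and size:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data              # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany              # RWX: served by Longhorn's share-manager
  storageClassName: longhorn     # the default StorageClass created by the Longhorn install
  resources:
    requests:
      storage: 10Gi
EOF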

What are some alternatives?

When comparing rancher and longhorn you can also consider the following projects:

podman - Podman: A tool for managing OCI containers and pods.

rook - Storage Orchestration for Kubernetes

lens - Lens - The way the world runs Kubernetes

nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.

microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.

zfs-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that is integrated with a backend ZFS data storage stack.

kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️

postgres-operator - Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.

cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle

harvester - Open source hyperconverged infrastructure (HCI) software

kubespray - Deploy a Production Ready Kubernetes Cluster

nfs-ganesha-server-and-external-provisioner - NFS Ganesha Server and Volume Provisioner.