rook VS longhorn

Compare rook vs longhorn and see what their differences are.

longhorn

Cloud-Native distributed storage built on and for Kubernetes (by longhorn)
                rook                longhorn
Mentions        51                  77
Stars           11,832              5,487
Growth          1.1%                3.5%
Activity        9.9                 9.4
Last commit     6 days ago          3 days ago
Language        Go                  Shell
License         Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

rook

Posts with mentions or reviews of rook. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-19.
  • Ceph: A Journey to 1 TiB/s
    2 projects | news.ycombinator.com | 19 Jan 2024
    I have some experience with Ceph, both for work, and with homelab-y stuff.

    First, bear in mind that Ceph is a distributed storage system - so the idea is that you will have multiple nodes.

    For learning, you can definitely virtualise it all on a single box - but you'll have a better time with discrete physical machines.

    Also, Ceph does prefer physical access to disks (similar to ZFS).

    And you do need decent network connectivity - I think that's the main thing people have in mind when they think of high hardware requirements for Ceph. Ideally 10GbE at a minimum - more if you want higher performance - since there can be a lot of network traffic, particularly with things like backfill. (25Gbps if you can find that gear cheap for homelab - 50Gbps is a technological dead-end. 100Gbps works well.)

    But honestly, for a homelab, a cheap mini PC or NUC with 10Gbe will work fine, and you should get acceptable performance, and it'll be good for learning.

    You can install Ceph directly on bare metal, or if you want to go the homelab k8s route, you can use Rook (https://rook.io/) - see the sketch after this post.

    Hope this helps, and good luck! Let me know if you have any other questions.
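
    For orientation, here is a minimal sketch of what the Rook route looks like, modelled on the upstream quickstart. The Ceph image tag and device selection are illustrative assumptions, and the Rook operator (crds.yaml, common.yaml, operator.yaml from the rook repo's deploy/examples) must be applied first.

        # Minimal CephCluster sketch, assuming a 3-node homelab with spare raw disks.
        apiVersion: ceph.rook.io/v1
        kind: CephCluster
        metadata:
          name: rook-ceph
          namespace: rook-ceph
        spec:
          cephVersion:
            image: quay.io/ceph/ceph:v18   # illustrative release tag
          dataDirHostPath: /var/lib/rook
          mon:
            count: 3                        # one monitor per node
          storage:
            useAllNodes: true
            useAllDevices: true             # Ceph prefers raw, unformatted devices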

  • Running stateful workloads on Kubernetes with Rook Ceph
    4 projects | dev.to | 26 Dec 2023
    Another option is to leverage a Kubernetes-native distributed storage solution such as Rook Ceph as the storage backend for stateful components running on Kubernetes. This has the benefit of simplifying application configuration while addressing business requirements for data backup and recovery, such as the ability to take volume snapshots at a regular interval and perform application-level data recovery in case of a disaster.
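
    As an illustration of the snapshot capability mentioned above, a CSI VolumeSnapshot of a Rook-Ceph-backed PVC looks like the sketch below. The class and claim names are hypothetical, and taking snapshots at a regular interval still needs a scheduler (e.g. a CronJob or a dedicated snapshot controller) on top.

        # VolumeSnapshot sketch - names are hypothetical placeholders.
        apiVersion: snapshot.storage.k8s.io/v1
        kind: VolumeSnapshot
        metadata:
          name: db-data-snap
          namespace: default
        spec:
          volumeSnapshotClassName: csi-rbdplugin-snapclass  # assumed class name from the Rook examples
          source:
            persistentVolumeClaimName: db-data              # hypothetical PVC of a stateful component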
  • Want advice on planned evolution: k3os/Longhorn --> Talos/Ceph, plus Consul and Vault
    6 projects | /r/homelab | 15 Apr 2023
    I've briefly run Ceph in external mode; you can actually use a Rook deployment to manage it (sort of). Here is the documentation for doing that. For me it didn't pass my testing phase, because I need better networking equipment before I can try that properly.
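
    For the curious, external mode boils down to a CephCluster resource that consumes an existing Ceph cluster instead of creating one. A minimal sketch, assuming the connection secrets have already been imported per the Rook docs:

        # External-mode CephCluster sketch (connection secrets are created separately).
        apiVersion: ceph.rook.io/v1
        kind: CephCluster
        metadata:
          name: rook-ceph-external
          namespace: rook-ceph-external
        spec:
          external:
            enable: true        # point Rook at an existing Ceph cluster
          crashCollector:
            disable: true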
  • ATARI is still alive: Atari Partition of Fear
    2 projects | dev.to | 28 Mar 2023
    This article explains the data corruption issue that happened in Rook in 2021. The root cause lies in an unexpected place and can also occur in any Ceph environment. It's interesting that Rook only started to encounter this problem recently even though it has existed for a long time - that's due to a series of coincidences. I wrote this article because the word "Atari" was used in a non-historical context in 2021.
  • How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
    18 projects | dev.to | 3 Feb 2023
    Rook (this is a nice article for Rook NFS)
  • Running on-premise k8s with a small team: possible or potential nightmare?
    5 projects | /r/kubernetes | 4 Jan 2023
    Storage: favor any distributed storage you already know for Persistent Volumes: Ceph (maybe via rook.io), Longhorn if you go the Rancher route, etc.
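
    Whichever backend you pick, workloads consume it the same way - through a StorageClass-backed PersistentVolumeClaim. A sketch, using the default class names from the respective project examples (the claim name is hypothetical):

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: app-data                     # hypothetical claim
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: rook-ceph-block  # or "longhorn"
          resources:
            requests:
              storage: 10Gi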
  • My completely automated Homelab featuring Kubernetes
    10 projects | /r/homelab | 3 Jan 2023
    I've dealt with a lot of issues that are very close to just unplugging a node. Unfortunately, on node loss, my stateful workloads using rook-ceph block storage won't migrate to another node automatically, due to an issue with rook. Stateless apps (ingress nginx, etc.) not using rook-ceph block storage fail over to another node just fine. I've kind of accepted this for now, and I know Longhorn has a feature that makes this work, but I find rook-ceph to be more stable for my workloads.
  • [HELP] PXE Boot without data loss
    3 projects | /r/linuxadmin | 4 Dec 2022
    Third, it sounds like you're building a cluster. For this you'll either want a central file server or, better, to set up a distributed storage system - for example, a Ceph cluster managed by Rook. This way you can fully wipe a single node and the system will be able to recover/replicate the data.
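
    The recover/replicate behaviour comes from pool-level replication. A sketch of a Rook block pool that keeps three copies spread across hosts (the pool name is illustrative):

        apiVersion: ceph.rook.io/v1
        kind: CephBlockPool
        metadata:
          name: replicapool     # illustrative name
          namespace: rook-ceph
        spec:
          failureDomain: host   # place replicas on different hosts, so one node can be wiped safely
          replicated:
            size: 3             # three copies of every object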
  • SaaS Deployment Options
    4 projects | news.ycombinator.com | 12 Nov 2022
  • For those managing k8s clusters, are you using Rook + Ceph?
    2 projects | /r/devops | 1 Sep 2022
    I just helped write a quick summary of why you can trust your persistent workloads to Ceph, managed by Rook, and it occurred to me that... I'm probably wrong.

longhorn

Posts with mentions or reviews of longhorn. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-15.
  • Diskomator – NVMe-TCP at your fingertips
    3 projects | news.ycombinator.com | 15 Nov 2023
    I'm looking forward to Longhorn[1] taking advantage of this technology.

    [1]: https://github.com/longhorn/longhorn

  • K3s – Lightweight Kubernetes
    17 projects | news.ycombinator.com | 11 Oct 2023
    I've been using a 3-NUC (actually Ryzen devices) k3s cluster on SUSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions on which parts of k8s to trim down and which networking / LB / ingress components to use.

    The option to use SQLite in place of etcd makes it super interesting for even lighter-weight single-node homelab container setups.

    I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster (see the storage-class sketch after this post).

    If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.

    I'd love to compare it against Talos https://www.talos.dev/ but their lack of support for a persistent storage partition (only separate storage device) really hurts most small home / office usage I'd want to try.
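
    A sketch of the kind of Longhorn storage class referred to above, following the parameters shown in the Longhorn docs; the replica count is an assumption sized for a 3-node cluster:

        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
          name: longhorn
        provisioner: driver.longhorn.io
        allowVolumeExpansion: true
        parameters:
          numberOfReplicas: "3"      # one replica per node on a 3-node cluster
          staleReplicaTimeout: "30"  # minutes before a failed replica is discarded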

  • The Next Gen Database Servers Powering Let's Encrypt(2021)
    5 projects | news.ycombinator.com | 17 Sep 2023
    Like for most people on r/homelab, it started out with Plex. Rough timeline/services below:

    0. Got a Synology DS413 with 4x WD Red 3TB drives. Used Playstation Media Server to stream videos from it. Eventually found some Busybox stuff to add various functionality to the NAS, but it had a habit of undoing my changes periodically, which was frustrating. I also experienced my first and (knock on wood) only drive failure during this time, which concluded without fanfare once the faulty drive was replaced, and the array repaired itself.

    1. While teaching myself Python as an Electrical Distribution Engineer at a utility, I befriended the IT head, who gave me an ancient (I think Nehalem? Quad-core Xeon) Dell T310. Promptly got more drives, totaling 7, and tried various OS / NAS platforms. I had OpenMediaVault for a while, but got tired of the UI fighting me when I knew how to do things in the shell, so I switched to Debian (which it's based on anyway). Moved to MergerFS [0] + SnapRAID [1] for storage management, and Plex for media. I was also constantly tinkering with various Linux stuff on it.

    1.1 Got tired of my tinkering breaking things and requiring troubleshooting/fixing (in retrospect, this provided excellent learning), so I installed Proxmox, reinstalled Debian, and made a golden image with everything set up as desired so I could easily revert.

    1.2 A friend told me about Docker. I promptly moved Plex over to it, and probably around this time also got the *Arr Stack [2] going.

    2. Got a Supermicro X9DRi-LN4F+ in a 2U chassis w/ 12x 3.5" bays. Got faster/bigger CPUs (E5-2680v2), more RAM, more drives, etc. Shifted container management to Docker Compose. Modded the BIOS to allow it to boot from a NVMe drive on a PCIe adapter.

    2.1 Shifted to ZFS on Debian. Other than DKMS occasionally losing its mind during kernel upgrades, this worked well.

    2.2 Forked [3] some [4] Packer/Ansible projects to suit my needs, made a VM for everything. NAS, Dev, Webserver, Docker host, etc. Other than outgrowing (IMO) MergerFS/SnapRAID, honestly at this point I could have easily stopped, and could to this day revert back to this setup. It was dead reliable and worked extremely well. IIRC I was also playing with Terraform at this time.

    2.3 Successfully broke into tech (Associate SRE) as a mid-career shift, due largely (according to the hiring manager) to what I had done with my homelab. Hooray for hobbies paying off.

    3. Got a single Dell R620. I think the idea was to install either pfSense or VyOS on it, but that never came to fruition. Networking was from a Unifi USG (their tiny router + firewall + switch) and 8-port switch, with some AC Pro APs.

    4. Got two more R620s. Kubernetes all the things. Each one runs Proxmox in a 3-node cluster with two VMs - a control plane, and worker.

    4.0.1 Perhaps worth noting here that I thoroughly tested my migration plan via spinning up some VMs in, IIRC, Digital Ocean that mimicked my home setup. I successfully ran it twice, which was good enough for me.

    4.1 Played with Ceph via Rook, but a. disliked (and still do to this day) running storage for everything out of K8s, and b. kept getting clock skew between nodes. Someone on Reddit mentioned it was my low-power C-state settings, but since that was saving me something like ~50 watts/node, I didn't want to deal with the higher power/heat. I landed on Longhorn [5] for cluster storage (i.e. anything that wasn't being handled by the ZFS pool), which was fine, but slow. SATA SSDs (used Intel enterprise drives with PLP, if you're wondering) over GbE aren't super fast, but they should be able to exceed 30 MBps.

    4.1.1 Again, worth noting that I spent literally a week poring over every bit of Ceph documentation I could find, from the Red Hat stuff to random Wikis and blog posts. It's not something you just jump into, IMO, and most of the horror stories I read boiled down to "you didn't follow the recommended practices."

    5. Got a newer Supermicro, an X11SSH-F, thinking that it would save power consumption over the older dual-socket I had for the NAS. It turned out to not make a big difference. For some reason I don't recall, I had a second X9DRi-LN4F+ mobo, so I sold the other one with the faster CPUs, bought some cheaper CPUs for the other one, and bought more drives for it. It's now a backup target that boots up daily to ingest ZFS snapshots. I have 100% on-site backups for everything. Important things (i.e. anything that I can't get from a torrent) are also off-site.

    6. Got some Samsung PM863 NVMe SSDs mounted on PCIe adapters for the Dells, and set up Ceph, but this time handled by Proxmox. It's dead easy, and for whatever reason isn't troubled by the same clock skew issues I had previously. Still in the process of shifting cluster storage over from Longhorn, but I have been successfully using Ceph block storage as fast (1 GbE, anyway - a 10G switch is on the horizon) storage for databases.

    So specifically, you asked what I do with the hardware. What I do, as far as my family is concerned, is block ads and serve media. On a more useful level, I try things out related to my job, most recently database-related (I moved from SRE to DBRE a year ago). I have MySQL and Postgres running, and am constantly playing with them. Can you actually do a live buffer pool resize in MySQL? (yes) Is XFS actually faster than ext4 for large DROP TABLE operations? (yes, but not by much) Is it faster to shut down a MySQL server and roll back to a previous ZFS snapshot than to rollback a big transaction? (often yes, although obviously a full shutdown has its own problems) Does Postgres suffer from the same write performance issue as MySQL with random PKs like UUIDv4, despite not clustering by default? (yes, but not to the same extent - still enough to matter, and you should use UUIDv7 if you absolutely need them)

    I legitimately love this stuff. I could quite easily make do without a fancy enclosed rack and multiple servers, but I like them, so I have them. The fact that it tends to help my professional growth out at the same time is a bonus.

    [0]: https://github.com/trapexit/mergerfs

    [1]: https://www.snapraid.it

    [2]: https://wiki.servarr.com

    [3]: https://github.com/stephanGarland/packer-proxmox-templates

    [4]: https://github.com/stephanGarland/ansible-initial-server

    [5]: https://longhorn.io

  • I created UltimateHomeServer - A K3s based all-in-one home server solution
    8 projects | /r/selfhosted | 28 May 2023
  • What alternatives are there to Longhorn?
    3 projects | /r/kubernetes | 15 May 2023
    I was mainly referring to this one https://github.com/longhorn/longhorn/discussions/5931 but yeah I peeked into that one too. I'm not at my computer at the moment, how do I provide a support bundle?
    3 projects | /r/kubernetes | 15 May 2023
    What backup store were you using, S3 or NFS? Have you tried reporting your issues to https://github.com/longhorn/longhorn? The Longhorn maintainers/contributors will definitely help with any issues reported by the community.
  • Setting Up Kubernetes Cluster with K3S
    3 projects | dev.to | 18 Apr 2023
    You have now deployed an enterprise-grade Kubernetes cluster with k3s and can start putting workloads on it. Some components to take note of: for ingress you already have Traefik installed, Longhorn will handle storage, and containerd serves as the container runtime.
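
    To exercise the storage piece, one can claim a volume from Longhorn and mount it into a pod. A minimal sketch, assuming the default "longhorn" StorageClass registered by the install (resource names are hypothetical):

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: demo-data              # hypothetical claim
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: longhorn
          resources:
            requests:
              storage: 1Gi
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: demo
        spec:
          containers:
            - name: app
              image: nginx:alpine
              volumeMounts:
                - name: data
                  mountPath: /data     # the Longhorn-backed volume appears here
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: demo-data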
  • Help me What to Choose?
    5 projects | /r/kubernetes | 16 Mar 2023
    I tried Longhorn but it runs terribly on mechanical drives; not sure if you plan on using SSDs though.
  • single node k8s on nuc - homelab/prod - storage question
    2 projects | /r/homelab | 11 Mar 2023
    You could also look into Longhorn, which is replicated storage.
  • Low power ceph setup?
    2 projects | /r/homelab | 20 Feb 2023
    I am running a homelab Kubernetes cluster on a few RPi clones, Libre Computer Renegade. I discovered Rook and Ceph were way too heavyweight for the cluster, and ended up using Longhorn instead. It works great!

What are some alternatives?

When comparing rook and longhorn you can also consider the following projects:

nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.

zfs-localpv - CSI Driver for dynamic provisioning of Persistent Local Volumes for Kubernetes using ZFS.

ceph-csi - CSI driver for Ceph

postgres-operator - Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.

harvester - Open source hyperconverged infrastructure (HCI) software

velero - Backup and migrate Kubernetes applications and their persistent volumes

nfs-ganesha-server-and-external-provisioner - NFS Ganesha Server and Volume Provisioner.

k3sup - bootstrap K3s over SSH in < 60s 🚀

k3s - Lightweight Kubernetes

kube-plex - Scalable Plex Media Server on Kubernetes -- dispatch transcode jobs as pods on your cluster!

loki - Like Prometheus, but for logs.

postgres-operator - Postgres operator creates and manages PostgreSQL clusters running in Kubernetes