seaweedfs VS GlusterFS

Compare seaweedfs vs GlusterFS and see what their differences are.

seaweedfs

SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. (by seaweedfs)

GlusterFS

Web Content for gluster.org -- Deprecated as of September 2017 (by gluster)

               seaweedfs             GlusterFS
Mentions       33                    0
Stars          20,796                12
Growth         7.9%                  -
Activity       9.9                   0.0
Last commit    6 days ago            about 5 years ago
Language       Go                    -
License        Apache License 2.0    -

The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

seaweedfs

Posts with mentions or reviews of seaweedfs. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-08.
  • Billion File Filesystem
    2 projects | news.ycombinator.com | 8 Feb 2024
    If you want/need to take out the metadata, there are some nice solutions for that: https://github.com/seaweedfs/seaweedfs
  • SeaweedFS fast distributed storage system for blobs, objects, files and datalake
    7 projects | news.ycombinator.com | 2 Feb 2024
    First, the feature set you have built is very impressive.

    I think SeaweedFS would really benefit from more documentation on what exactly it does.

    People who want to deploy production systems need that, and it would also help potential contributors.

    Some examples:

    * It says "optimised for small files", but it is not super clear from the whitepaper and other documentation what that means. It mostly talks about how small the per-file overhead is, but that's not enough. For example, on Ceph I can also store 500M files without problem, but then later discover that some operations that happen only infrequently, such as recovery or scrubs, are O(files) and thus have O(files) many seeks, which can mean 2 months of seeks for a recovery of 500M files to finish (a rough back-of-the-envelope check of that figure is sketched after this list). ("Recovery" here means when a replica fails and the data is copied to another replica.)

    * More on small files: Assuming small files are packed somehow to solve the seek problem, what happens if I delete some files in the middle of the pack? Do I get fragmentation (space wasted by holes)? If yes, is there a defragmentation routine?

    * One page https://github.com/seaweedfs/seaweedfs/wiki/Replication#writ... says "volumes are append only", which suggests that there will be fragmentation. But here I need to piece together info from different unrelated pages in order to answer a core question about how SeaweedFS works.

    * https://github.com/seaweedfs/seaweedfs/wiki/FAQ#why-files-ar... suggests that "vacuum" is the defragmentation process. It says it triggers automatically when deleted-space overhead reaches 30%. But what performance implications does a vacuum have, and can it take long and block some data access? This would be the immediate next question any operator would have. (A sketch of triggering a vacuum by hand follows after this list.)

    * Scrubs and integrity: It is common for redundant-storage systems (md-RAID, ZFS, Ceph) to detect and recover from bitrot via checksums and cross-replica comparisons. This requires automatic regular inspections of the stored data ("scrubs"). For SeaweedFS, I can find no docs about it, only some Github issues (https://github.com/seaweedfs/seaweedfs/issues?q=scrub) that suggest that there is some script that runs every 17 minutes. But looking at that script, I can't find which command is doing the "repair" action. Note that just having checksums is not enough for preventing bitrot: It helps detect it, but does not guarantee that the target number of replicas is brought back up (as it may take years until you read some data again). For that, regular scrubs are needed.

    * Filers: For a production store of a highly-available POSIX FUSE mount I need to choose a suitable Filer backend. There's a useful page about these on https://github.com/seaweedfs/seaweedfs/wiki/Filer-Stores. But there are many of them, and the information is limited to ~8 words per backend. To know how a backend will perform, I need to know both the backend itself and how SeaweedFS will use it. I will also be subject to the operational workflows of that backend, e.g. running and upgrading a large HA Postgres is unfortunately not easy. As another example, Postgres itself also does not scale beyond a single machine, unless one uses something like Citus, and I have no info on whether SeaweedFS will work with that.

    * The word "Upgrades" seems generally un-mentioned in Wiki and README. How are forward and backward compatibility handled? Can I just switch SeaweedFS versions forward and backward and expect everything will automatically work? For Ceph there are usually detailed instructions on how one should upgrade a large cluster and its clients.
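
    A rough back-of-the-envelope check of the "2 months of seeks" figure from the small-files point above, assuming one random seek per file at about 10 ms on a spinning disk (both numbers are illustrative assumptions, not measurements from the thread):

        package main

        import (
            "fmt"
            "time"
        )

        func main() {
            const files = 500_000_000                 // 500M small files to recover
            const seekPerFile = 10 * time.Millisecond // assumed average seek on a spinning disk

            total := time.Duration(files) * seekPerFile
            days := total.Hours() / 24
            fmt.Printf("%.0f days (~%.1f months) of pure seek time\n", days, days/30)
            // With these assumptions: 5e8 * 10 ms = 5e6 s ≈ 58 days, i.e. roughly two months.
        }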

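    On the vacuum point, a minimal sketch of triggering compaction by hand through the master's HTTP API rather than waiting for the automatic 30% threshold; the /vol/vacuum path and garbageThreshold parameter are taken from the wiki, so treat them as assumptions and verify against the SeaweedFS version you run:

        package main

        import (
            "fmt"
            "io"
            "net/http"
        )

        func main() {
            // Ask the master (default port 9333) to vacuum volumes whose deleted-space
            // ratio exceeds the given threshold. Adjust host/port to your deployment.
            resp, err := http.Get("http://localhost:9333/vol/vacuum?garbageThreshold=0.3")
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()

            body, _ := io.ReadAll(resp.Body)
            fmt.Println(resp.Status, string(body))
        }
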
    In general the way this should be approached is: Pretend to know nothing about SeaweedFS, and imagine what a user who wants to use it in production wants to know, and what their follow-up questions would be.

    Some parts of that are partially answered in the presentations, but it is difficult to piece together how the software currently works from presentations of different ages (maybe they are already outdated?), and the presentations are also quite light on information (usually only 1 slide per topic). I think the GitHub wiki is a good way to do it, but it, too, is too light on information and I'm not sure it has everything that's in the presentations.

    I understand the README already says "more tools and documentation", I just want to highlight how important the "what does it do and how does it behave" part of documentation is for software like this.

    7 projects | news.ycombinator.com | 2 Feb 2024
    This is an old project; I had a quick look and saw that I submitted a pull request back in 2015:

    https://github.com/seaweedfs/seaweedfs/pull/187

  • Show HN: OpenSign – The open source alternative to DocuSign
    7 projects | news.ycombinator.com | 28 Oct 2023
    > Theoretically they could swap with minio but last time we used it, it was not a drop-in replacement yet.

    Depends on whether AGPL v3 works for you or not (or whether you decide to pay them), I guess: https://min.io/pricing

    I've actually been looking for more open alternatives, but haven't found much.

    Zenko CloudServer seemed somewhat promising, but doesn't appear to be maintained very actively: https://github.com/scality/cloudserver/issues/4986 (their Docker images on Docker Hub, which the homepage links to, were last updated 10 months ago; the blog doesn't seem to have been active since 2019 and the forums don't have much going on, despite some activity still happening on GitHub)

    There was also Garage, but that one is also AGPL v3: https://garagehq.deuxfleurs.fr/

    The closest I got was discovering that SeaweedFS has an S3 compatible mode: https://github.com/seaweedfs/seaweedfs
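
    As a rough illustration of that S3-compatible mode, a stock S3 client can simply be pointed at a SeaweedFS S3 gateway. A minimal sketch with the AWS SDK for Go; the localhost:8333 endpoint and the placeholder credentials are assumptions for a default single-node setup, not values from this thread:

        package main

        import (
            "fmt"
            "log"

            "github.com/aws/aws-sdk-go/aws"
            "github.com/aws/aws-sdk-go/aws/credentials"
            "github.com/aws/aws-sdk-go/aws/session"
            "github.com/aws/aws-sdk-go/service/s3"
        )

        func main() {
            // Point a regular S3 client at the local SeaweedFS S3 gateway.
            sess, err := session.NewSession(&aws.Config{
                Region:           aws.String("us-east-1"),
                Endpoint:         aws.String("http://localhost:8333"),
                S3ForcePathStyle: aws.Bool(true),
                Credentials:      credentials.NewStaticCredentials("any", "any", ""),
            })
            if err != nil {
                log.Fatal(err)
            }

            svc := s3.New(sess)
            out, err := svc.ListBuckets(&s3.ListBucketsInput{})
            if err != nil {
                log.Fatal(err)
            }
            for _, b := range out.Buckets {
                fmt.Println(*b.Name)
            }
        }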

  • The Tailscale Universal Docker Mod
    22 projects | news.ycombinator.com | 8 Oct 2023
  • Google Cloud Storage FUSE
    17 projects | news.ycombinator.com | 2 May 2023
  • First Homelab as a 19yr old Software Developer
    2 projects | /r/homelab | 15 Apr 2023
    SeaweedFS S3 Gateway for Joplin notes
  • My Experience Self Hosting
    3 projects | /r/Supabase | 6 Apr 2023
    Supabase-Storage uses an S3 compatible API and is ultimately just middleware for it. So, the redundancy would be at the storage backend systems. It seems like the majority of S3-compatible self-hosted systems are built for redundancy/high availability. With only a brief read of the docs, and in no particular order: https://garagehq.deuxfleurs.fr/documentation/quick-start/ https://github.com/seaweedfs/seaweedfs CEPH can do it, but at that point you could probably just use the basic local filesystem storage container Supabase provides, and put your VMs on CEPH

GlusterFS

Posts with mentions or reviews of GlusterFS. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning GlusterFS yet.
Tracking mentions began in Dec 2020.

What are some alternatives?

When comparing seaweedfs and GlusterFS you can also consider the following projects:

minio - The Object Store for AI Data Infrastructure

Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]

MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)

Ceph - Ceph is a distributed object, block, and file storage platform

Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]

XtreemFS - Distributed Fault-Tolerant File System

Apache Hadoop - Apache Hadoop

garage - (Mirror) S3-compatible object store for small self-hosted geo-distributed deployments. Main repo: https://git.deuxfleurs.fr/Deuxfleurs/garage

cubefs - cloud-native file store

OpenAFS - Fork of OpenAFS from git.openafs.org for visualization

SheepDog - Distributed Storage System for QEMU