OpenAFS vs Seaweed File System
(The tracked Seaweed File System repository, github.com/chrislusf/seaweedfs, is marked DISCONTINUED; development moved to github.com/seaweedfs/seaweedfs.)
| | OpenAFS | Seaweed File System |
|---|---|---|
| Mentions | 4 | 49 |
| Stars | 75 | 14,960 |
| Growth | - | - |
| Activity | 8.1 | 9.9 |
| Latest commit | 5 days ago | over 1 year ago |
| Language | C | Go |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenAFS
-
Outrun: Execute local command using processing power of another Linux machine
https://www.openafs.org/
But I never did get around to playing much with either.
Maybe it's time for someone to build another system on top of FoundationDB?
- Classic dilemma: function pointers array or giant switch?
Seaweed File System
- An open-source distributed object storage service
-
Moving to github.com/seaweedfs/seaweedfs
FYI: Planning to move from github.com/chrislusf/seaweedfs to github.com/seaweedfs/seaweedfs in the coming days. It may cause some problems for package references, builds, documentation, and links. Sorry for the change!
-
S3 Isn't Getting Cheaper
Besides the storage cost itself, S3 API access charges can be high for frequently accessed data, and latency is unpredictable.
You can use the SeaweedFS Remote Object Store Gateway to cache S3 (or any S3 API compatible vendor) on local servers, access the data at local network speed, and asynchronously sync writes back to S3.
https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remot...
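As a rough sketch of that setup (command names are from the linked wiki; the remote name, bucket, credentials, and region here are placeholders):

```shell
# inside `weed shell`: register the remote S3 endpoint (placeholder credentials)
remote.configure -name=cloud1 -type=s3 \
  -s3.access_key=AKIA... -s3.secret_key=... -s3.region=us-east-1

# mount a remote bucket into the local filer namespace
remote.mount -dir=/buckets/mybucket -remote=cloud1/mybucket

# from a regular shell: keep local changes syncing back to S3 asynchronously
weed filer.remote.sync -dir=/buckets/mybucket
```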
SeaweedFS: https://github.com/chrislusf/seaweedfs
-
Question: does anyone know Storage Provider with S3 as persistence layer?
I don't know if it fits all of your requirements, but you can take a look at SeaweedFS, which is pretty good.
-
Introducing Garage, our self-hosted distributed object storage solution
SeaweedFS deserves a mention here for comparison as well.
-
Garage, our self-hosted distributed object storage solution
If you're still talking about SeaweedFS, the answer seems to be simply that it's not a "raft-based object store" as the parent described. That 'proxy' node you mention is itself a volume server, and it replicates its whole volume to another server. On replication failure, the data becomes read-only [1]. Raft is not used for writes.
-
Updated MinIO NVMe Benchmarks: 2.6 Tbps on GET and 1.6 on PUT
For computers, batched I/O operations are much faster than random I/O and can easily saturate the network.
This benchmark uses a large batch size (64 MB), so there is nothing new here; most common file systems can easily do the same.
The genuinely difficult task is reading and writing lots of small files, known as the LOSF problem. I work on SeaweedFS, https://github.com/chrislusf/seaweedfs , which is designed to handle LOSF, and of course it has no problem with large files either.
This is a fair complaint. :)
For the filer metadata store, you should just pick the one you are most familiar with.
There is a wiki page for production setup. https://github.com/chrislusf/seaweedfs/wiki/Production-Setup
What are some alternatives?
minio - The Object Store for AI Data Infrastructure
Ceph - Ceph is a distributed object, block, and file storage platform
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
Apache Hadoop - Apache Hadoop
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Docker Notary - Notary is a project that allows anyone to have trust over arbitrary collections of data
autotier - A passthrough FUSE filesystem that intelligently moves files between storage tiers based on frequency of use, file age, and tier fullness.
Alluxio (formerly Tachyon) - Alluxio, data orchestration for analytics and machine learning in the cloud