LeoFS vs Ceph
| | LeoFS | Ceph |
|---|---|---|
| Mentions | 2 | 34 |
| Stars | 1,536 | 13,233 |
| Growth | 0.0% | 0.9% |
| Activity | 0.0 | 10.0 |
| Latest commit | almost 4 years ago | 7 days ago |
| Language | Erlang | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LeoFS
- Leofs – S3 / NFS object store
- Ask HN: How would you store 10PB of data for your startup today?
I think if I _had_ to decide (I'm not the best informed person on the matter) I'd lean towards leofs[1].
I only read about it, but never used it.
It advertises itself as exabyte scalable and provides s3 and nfs access.
[1] https://leo-project.net/leofs/
Ceph
- First time user struggles
```shell
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod a+x cephadm
./cephadm bootstrap --mon-ip 192.168.1.41
```
- How to retrieve bluestore performance data
- Problem with building/starting downloaded projects
- 4+1 Node Ceph Stretch Cluster - Question about HDDs with 2x replication for media
replicated_rule is what came out of the box; stretch_rule comes from ceph.io or that link above, or some combination. dc_mirror_rule is intended for 2x replication pools where I don't really care about the data.

```
# ... rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```
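The dc_mirror_rule itself is not shown in the quote. A plausible sketch of such a 2x replication rule, assuming the CRUSH map defines a `datacenter` bucket type above `host`, would be:

```
# Hypothetical dc_mirror_rule: place 2 replicas, one host per datacenter.
# Assumes a CRUSH hierarchy with a "datacenter" bucket type above "host".
rule dc_mirror_rule {
    id 1
    type replicated
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 1 type host
    step emit
}
```

With `size = 2` on the pool, this picks two datacenters and one host in each, so losing a whole datacenter leaves one copy intact.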
- ATARI is still alive: Atari Partition of Fear
Ceph: An open source distributed storage system
- The Coroutines Conundrum: Why Writing Unit Tests for ASIO and P2300 Proposals is a Pain, and How We Can Fix It
- I'm looking for the latest how-to for ceph command line completion setup for bash/zsh: `ceph...`, `radosgw-admin...`, other useful ones, etc.
EDIT: Right after I posted that I realized those files must be maintained somewhere. So ignore me suggesting a hard option below; just follow this link: https://github.com/ceph/ceph/tree/main/src/bash_completion
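For reference, enabling those completions typically amounts to dropping the files from that directory into bash-completion's search path. A minimal sketch, assuming the usual `/etc/bash_completion.d/` location and that the file names in the repo match the command names (check your distro's bash-completion layout):

```shell
# Sketch: install Ceph's shipped bash completions (paths are assumptions).
for f in ceph rados radosgw-admin rbd; do
  curl -sL -o /etc/bash_completion.d/$f \
    https://raw.githubusercontent.com/ceph/ceph/main/src/bash_completion/$f
done
# New shells pick these up via bash-completion; for the current shell:
source /etc/bash_completion.d/ceph
```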
- Proxmox cluster traffic over wifi, ceph over wired?
Software defined storage via fucking wifi???
- How many HDDs is too many for a pool of mirrors? When is RAID Z2 a better option?
Have you considered using the ceph file system?
- NAS on a cluster
Can OpenMediaVault run on multiple machines but present each machine's storage space as a single drive? I know that ceph.io can do this but I'm struggling with ceph.
What are some alternatives?
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Apache Hadoop - Apache Hadoop
Tahoe-LAFS - The Tahoe-LAFS decentralized secure filesystem.
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.