lizardfs
Ceph
| | lizardfs | Ceph |
|---|---|---|
| Mentions | 4 | 34 |
| Stars | 944 | 13,197 |
| Growth | 0.2% | 1.4% |
| Activity | 3.3 | 10.0 |
| Last commit | 7 months ago | about 2 hours ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lizardfs
- Distributed Network File System
- cloud storage "merged" on multiple VPSes
Have a look at https://github.com/lizardfs/lizardfs; perhaps it is what you want.
- ZFS fans, rejoice—RAIDz expansion will be a thing very soon
Had to add a second sata card and upgrade to a 1600 watt power supply because spinning up 17 drives was too much for my poor 900 watt...
That's a lotta eggs to put in one basket. I started using LizardFS for mass storage and it basically allows me to grow/shrink easily. https://lizardfs.com/
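As a hedged illustration of the grow/shrink use case above: LizardFS controls replication per file or directory with "goals", so adding chunkservers and raising a goal is usually all that is needed. A minimal sketch, assuming a LizardFS mount at /mnt/lizardfs (the mount point, directory, and goal value are assumptions, not from the post):
# set a replication goal of 3 copies on a directory, recursively (path is hypothetical)
lizardfs setgoal -r 3 /mnt/lizardfs/media
# inspect how chunks of a given file are currently replicated
lizardfs fileinfo /mnt/lizardfs/media/somefile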
Ceph
- First time user struggles
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod a+x cephadm
./cephadm bootstrap --mon-ip 192.168.1.41
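After a bootstrap like the one quoted above, a quick sanity check usually looks something like the following sketch (not from the post, and it assumes the bootstrap completed and the node is reachable):
# open a containerized shell with the ceph CLI and check cluster health
./cephadm shell -- ceph -s
# list the hosts the orchestrator currently knows about
./cephadm shell -- ceph orch host ls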
- How to retrieve bluestore performance data
- Problem with building/starting downloaded projects
- 4+1 Node Ceph Stretch Cluster - Question about HDDs with 2x replication for media
replicated_rule is what came out of the box; stretch_rule comes from ceph.io or that link above or some combination; dc_mirror_rule is intended for 2x replication pools where I don't really care about the data.
# ... rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
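For comparison, a rule like the dc_mirror_rule described above can usually be created from the CLI instead of editing the CRUSH map by hand. A minimal sketch, assuming the CRUSH map already contains buckets of type datacenter; the pool name and PG count are assumptions:
# replicated rule that places one copy per datacenter under the default root
ceph osd crush rule create-replicated dc_mirror_rule default datacenter
# 2x replicated pool for media data that uses the rule
ceph osd pool create media 128 128 replicated dc_mirror_rule
ceph osd pool set media size 2
ceph osd pool set media min_size 1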
- ATARI is still alive: Atari Partition of Fear
Ceph: an open source distributed storage system
- The Coroutines Conundrum: Why Writing Unit Tests for ASIO and P2300 Proposals is a Pain, and How We Can Fix It
- I'm looking for the latest howto for ceph command line completion setup for bash/zsh: `ceph...`, `radosgw-admin...`, other useful ones, etc.
EDIT: Right after I posted that I realized those files must be maintained somewhere. So ignore me suggesting a harder option below; just follow this link: https://github.com/ceph/ceph/tree/main/src/bash_completion
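For the record, a minimal sketch of wiring those completion files up on a typical Linux box. The install path and the individual file names are assumptions based on the repository directory linked above:
# fetch the completion scripts and drop them where bash-completion looks for them
curl -sL -o /etc/bash_completion.d/ceph https://raw.githubusercontent.com/ceph/ceph/main/src/bash_completion/ceph
curl -sL -o /etc/bash_completion.d/radosgw-admin https://raw.githubusercontent.com/ceph/ceph/main/src/bash_completion/radosgw-admin
# load them in the current shell; new shells pick them up via bash-completion
source /etc/bash_completion.d/ceph
source /etc/bash_completion.d/radosgw-admin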
- Proxmox cluster traffic over wifi, ceph over wired?
Software defined storage via fucking wifi???
- How many HDDs is too many for a pool of mirrors? When is RAID Z2 a better option?
Have you considered using the ceph file system?
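If that suggestion is followed, mounting CephFS on a client is roughly the sketch below. The monitor address, mount point, and secret file path are assumptions, and an MDS plus an authorized client key have to exist first:
# kernel-client mount of CephFS (addresses and paths are hypothetical)
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.41:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# or via FUSE if the kernel client is not available
sudo ceph-fuse /mnt/cephfs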
- NAS on a cluster
Can OpenMediaVault run on multiple machines but present each machine's storage space as a single drive? I know that ceph.io can do this, but I'm struggling with ceph.
What are some alternatives?
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
GlusterFS - Gluster Filesystem : Build your distributed storage in minutes
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Apache Hadoop - Apache Hadoop
rozofs - Scale-out storage using erasure coding
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
LeoFS - The LeoFS Storage System