| | blog | Ceph |
|---|---|---|
| Mentions | 1 | 34 |
| Stars | 0 | 13,259 |
| Growth | - | 1.1% |
| Activity | 10.0 | 10.0 |
| Latest commit | over 3 years ago | 5 days ago |
| Language | - | C++ |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
-
First-time user struggles
```shell
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod a+x cephadm
./cephadm bootstrap --mon-ip 192.168.1.41
```
- How to retrieve bluestore performance data
- Problem with building/starting downloaded projects
-
4+1 Node Ceph Stretch Cluster - Question about HDD's with 2x replication for media
replicated_rule is what came out of the box; stretch_rule comes from ceph.io or that link above, or some combination. dc_mirror_rule is intended for 2x replication pools where I don't really care about the data.

```
# ... rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```
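For the dc_mirror_rule mentioned above, a minimal sketch of what such a rule could look like, assuming the CRUSH map defines two buckets of type `datacenter` under `default` (the rule id and bucket names here are assumptions, not taken from the poster's actual map):

```
rule dc_mirror_rule {
    id 2
    type replicated
    step take default
    # pick one bucket per datacenter, then one host inside each
    step choose firstn 0 type datacenter
    step chooseleaf firstn 1 type host
    step emit
}
```

With a pool set to `size 2`, this places exactly one replica in each datacenter, which matches the "2x replication, data I don't care much about" intent.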
-
ATARI is still alive: Atari Partition of Fear
Ceph: An open source distributed storage system
- The Coroutines Conundrum: Why Writing Unit Tests for ASIO and P2300 Proposals is a Pain, and How We Can Fix It
-
I'm looking for latest howto for ceph command line completion setup for bash/zsh: `ceph...`, `radosgw-admin...`, other useful ones, etc.
EDIT: Right after I posted that I realized those files must be maintained somewhere. So ignore me suggesting a harder option below; just follow this link: https://github.com/ceph/ceph/tree/main/src/bash_completion
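Given the repo linked above, a minimal setup sketch for bash (the raw URL and install path are assumptions; adjust for your distro, and zsh users can load the same script via `bashcompinit`):

```shell
# Directory that bash-completion scans automatically in new shells
mkdir -p ~/.local/share/bash-completion/completions

# Fetch the maintained completion script for the ceph CLI
# (raw-URL form of the src/bash_completion tree linked above)
curl -sL -o ~/.local/share/bash-completion/completions/ceph \
  https://raw.githubusercontent.com/ceph/ceph/main/src/bash_completion/ceph

# Load it for the current session
source ~/.local/share/bash-completion/completions/ceph
```

The same directory in the Ceph repo also carries completion scripts for radosgw-admin, rados, and rbd, which can be installed the same way.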
-
Proxmox cluster traffic over wifi, ceph over wired?
Software-defined storage via fucking wifi?!
-
How many HDDs is too many for a pool of mirrors? When is RAID Z2 a better option?
Have you considered using the ceph file system?
-
NAS on a cluster
Can OpenMediaVault run on multiple machines but present each machine's storage space as a single drive? I know that ceph.io can do this but I'm struggling with ceph.
What are some alternatives?
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Apache Hadoop - Apache Hadoop
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
LeoFS - The LeoFS Storage System
OpenAFS - Fork of OpenAFS from git.openafs.org for visualization
XtreemFS - Distributed Fault-Tolerant File System
SheepDog - Distributed Storage System for QEMU
rozofs - Scale-out storage using erasure coding
Tahoe-LAFS - The Tahoe-LAFS decentralized secure filesystem.