Ceph vs OpenAFS
| | Ceph | OpenAFS |
|---|---|---|
| Mentions | 34 | 4 |
| Stars | 13,197 | 76 |
| Growth | 1.6% | - |
| Activity | 10.0 | 8.2 |
| Last commit | 7 days ago | 7 days ago |
| Language | C++ | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Ceph
-
First-time user struggles
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod a+x cephadm
./cephadm bootstrap --mon-ip 192.168.1.41
- How to retrieve bluestore performance data
- Problem with building/starting downloaded projects
-
4+1 Node Ceph Stretch Cluster - Question about HDD's with 2x replication for media
replicated_rule is what came out of the box; stretch_rule comes from ceph.io or that link above, or some combination. dc_mirror_rule is intended for 2x replication pools where I don't really care about the data.

# ... rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
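For comparison, a minimal sketch of what the dc_mirror_rule mentioned above might look like for a 2x replication pool spread across two sites. This assumes the CRUSH map defines buckets of type `datacenter`; the rule id and bucket names are illustrative, not taken from the thread.

```
# Hypothetical dc_mirror_rule: one replica per datacenter, one host each.
# Assumes "datacenter" buckets exist under the default root.
rule dc_mirror_rule {
    id 2
    type replicated
    step take default
    step choose firstn 0 type datacenter   # pick as many DCs as the pool size (2)
    step chooseleaf firstn 1 type host     # one OSD host within each DC
    step emit
}
```

With pool size 2, `firstn 0` expands to two datacenters, so each replica lands in a different site.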
-
ATARI is still alive: Atari Partition of Fear
Ceph: An open source distributed storage system
- The Coroutines Conundrum: Why Writing Unit Tests for ASIO and P2300 Proposals is a Pain, and How We Can Fix It
-
I'm looking for the latest howto for ceph command-line completion setup for bash/zsh: `ceph...`, `radosgw-admin...`, other useful ones, etc.
EDIT: Right after I posted that I realized those files must be maintained somewhere. So ignore me suggesting a hard option below; just follow this link: https://github.com/ceph/ceph/tree/main/src/bash_completion
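A minimal sketch of wiring those completion files into an interactive bash session, assuming a local clone of the ceph repo; the `~/ceph` checkout path is an assumption, and the four tool names are the completion scripts found in that directory of the repo.

```shell
# Sketch: source Ceph CLI completion scripts from a local checkout.
# CEPH_SRC is an assumed path -- point it at your own clone of
# https://github.com/ceph/ceph.
CEPH_SRC=~/ceph/src/bash_completion
for f in ceph rados radosgw-admin rbd; do
    # Skip quietly if the checkout (or a given script) is missing.
    if [ -r "$CEPH_SRC/$f" ]; then
        . "$CEPH_SRC/$f"
    fi
done
```

Dropping these lines into `~/.bashrc` keeps completion in sync with whatever branch of the repo you have checked out.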
-
Proxmox cluster traffic over wifi, ceph over wired?
Software-defined storage via fucking wifi???
-
How many HDDs is too many for a pool of mirrors? When is RAID Z2 a better option?
Have you considered using the ceph file system?
-
NAS on a cluster
Can OpenMediaVault run on multiple machines but present each machine's storage space as a single drive? I know that ceph.io can do this but I'm struggling with ceph.
OpenAFS
-
Me at an ASCII party
At least you have job security as long as that’s used. A buddy was an OpenAFS expert and supported it for IBM until the USPS stopped using it.
- OpenAFS – An Open Source Distributed Filesystem
-
Outrun: Execute local command using processing power of another Linux machine
https://www.openafs.org/
But I never did get around to playing much with either.
Maybe it's time for someone to build another system on top of foundationdb?
- Classic dilemma: function pointers array or giant switch?
What are some alternatives?
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
GlusterFS - Gluster Filesystem : Build your distributed storage in minutes
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Apache Hadoop - Apache Hadoop
XtreemFS - Distributed Fault-Tolerant File System
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
SheepDog - Distributed Storage System for QEMU
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
rozofs - Scale-out storage using erasure coding