rook
Ceph
|  | rook | Ceph |
|---|---|---|
| Mentions | 51 | 34 |
| Stars | 11,905 | 13,233 |
| Stars growth (monthly) | 1.2% | 1.9% |
| Activity | 9.9 | 10.0 |
| Latest commit | 6 days ago | about 9 hours ago |
| Language | Go | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rook
-
Ceph: A Journey to 1 TiB/s
I have some experience with Ceph, both for work, and with homelab-y stuff.
First, bear in mind that Ceph is a distributed storage system - so the idea is that you will have multiple nodes.
For learning, you can definitely virtualise it all on a single box - but you'll have a better time with discrete physical machines.
Also, Ceph prefers direct access to raw disks (similar to ZFS).
And you do need decent network connectivity - I think that's the main thing people have in mind when they think of Ceph's high hardware requirements. Ideally 10GbE at a minimum - more if you want higher performance - since there can be a lot of network traffic, particularly with things like backfill. (25GbE if you can find that gear cheap for the homelab - 50GbE is a technological dead-end, while 100GbE works well.)
But honestly, for a homelab, a cheap mini PC or NUC with 10GbE will work fine; you should get acceptable performance, and it'll be good for learning.
You can install Ceph directly on bare metal, or if you want to go the homelab k8s route, you can use Rook (https://rook.io/).
Hope this helps, and good luck! Let me know if you have any other questions.
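For reference, a minimal sketch of the Rook route mentioned above, following the upstream quick-start examples (the release branch and file names may have moved since, so treat this as illustrative):

    # clone the Rook example manifests (pick the current release branch)
    git clone --single-branch --branch release-1.12 https://github.com/rook/rook.git
    cd rook/deploy/examples
    # install the CRDs and the operator, then declare a Ceph cluster
    kubectl create -f crds.yaml -f common.yaml -f operator.yaml
    kubectl create -f cluster.yaml
    # watch the operator bring up mons, mgr and OSDs
    kubectl -n rook-ceph get pods -w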
-
Running stateful workloads on Kubernetes with Rook Ceph
Another option is to leverage a Kubernetes-native distributed storage solution such as Rook Ceph as the storage backend for stateful components running on Kubernetes. This has the benefit of simplifying application configuration while addressing business requirements for data backup and recovery, such as the ability to take volume snapshots at regular intervals and perform application-level data recovery in case of a disaster.
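As a concrete building block for that snapshot workflow, here is a minimal sketch; the PVC name data-pvc and the snapshot class csi-rbdplugin-snapclass are illustrative assumptions, not fixed Rook names:

    # request a point-in-time snapshot of a Ceph-backed PVC (names are hypothetical)
    kubectl apply -f - <<EOF
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: data-pvc-snap
      namespace: default
    spec:
      volumeSnapshotClassName: csi-rbdplugin-snapclass
      source:
        persistentVolumeClaimName: data-pvc
    EOF

Taking one of these on a schedule (e.g. from a CronJob) and restoring new PVCs from the snapshots covers the backup/recovery requirement described above.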
-
People who run Nextcloud in Docker: Where do you store your data/files? In a Docker volume, or on a remote server/NAS?
This is beyond your question but might help someone else: I switched from docker-compose to Kubernetes for my home lab a while ago. The storage solution I've settled on is Rook. It was a bit of up-front work learning how to get it up and running, but now that it's done my storage is automatically managed by Ceph. I can swap out drives and Ceph basically takes care of everything itself.
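The "Ceph takes care of it" behaviour mostly comes down to how the CephCluster resource describes its storage. A rough sketch, with the mon count, image tag and device policy as assumptions based on the upstream example manifests:

    # let Rook consume every empty disk on every node (values are illustrative)
    kubectl apply -f - <<EOF
    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: quay.io/ceph/ceph:v17   # pin to the Ceph release you actually run
      dataDirHostPath: /var/lib/rook
      mon:
        count: 3
      storage:
        useAllNodes: true
        useAllDevices: true   # swapped-in empty drives get picked up as new OSDs
    EOF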
-
Rook/Ceph with VM nodes on research cluster?
The stumbling point I'm at is that I want to use rook.io (Ceph) as my storage solution for the cluster. The Ceph prerequisites are one of the following:
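(The quote cuts off here. From my reading of the Rook docs, the storage prerequisite is at least one of: raw devices, raw partitions, LVM logical volumes, or block-mode PVs - i.e. storage with no filesystem on it. A quick way to check which devices qualify:)

    # devices/partitions with an empty FSTYPE column are candidates for Ceph OSDs
    lsblk -f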
-
Asking for recommendation on remote Kubernetes storage for a small cluster and databases
Have you looked at Rook?
-
Want advice on planned evolution: k3os/Longhorn --> Talos/Ceph, plus Consul and Vault
I've briefly run Ceph in external mode; you can actually use a Rook deployment to manage it (sort of). Here is the documentation for doing that. For me it didn't pass my testing phase, because I need better networking equipment before I can try that.
-
ATARI is still alive: Atari Partition of Fear
This article explains a data corruption issue that happened in Rook in 2021. The root cause lies in an unexpected place and can also occur in any Ceph environment. Interestingly, Rook only started to encounter this problem recently even though it has existed for a long time; that's down to a series of coincidences. I wrote this article because the word "Atari" was used in a non-historical context in 2021.
-
How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
Rook (this is a nice article on Rook NFS)
-
Running on-premise k8s with a small team: possible or potential nightmare?
Storage: favor whichever distributed storage you already know for Persistent Volumes: Ceph, maybe via rook.io; Longhorn if you go the Rancher route; etc.
-
My completely automated Homelab featuring Kubernetes
I've dealt with a lot of issues that are very close to just unplugging a node. Unfortunately, on node loss, my stateful workloads using rook-ceph block storage won't migrate to another node automatically, due to an issue with Rook. Stateless apps (ingress-nginx, etc.) not using rook-ceph block storage fail over to another node just fine. I've kind of accepted this for now; I know Longhorn has a feature that makes this work, but I find rook-ceph more stable for my workloads.
Ceph
-
First-time user struggles
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod a+x cephadm
./cephadm bootstrap --mon-ip 192.168.1.41
- How to retrieve bluestore performance data
- Problem with building/starting downloaded projects
-
4+1 Node Ceph Stretch Cluster - Question about HDD's with 2x replication for media
replicated_rule is what came out of the box; stretch_rule comes from ceph.io or that link above, or some combination. dc_mirror_rule is intended for 2x replication pools where I don't really care about the data.

    # ... rules
    rule replicated_rule {
        id 0
        type replicated
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }
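For comparison, a sketch of what a dc_mirror_rule like the one described might look like - one replica in each datacenter, placed on one host within it; the rule id and the datacenter bucket type are assumptions about this particular CRUSH map:

    rule dc_mirror_rule {
        id 2
        type replicated
        step take default
        # pick every datacenter bucket (for a size-2 pool, both of them)
        step choose firstn 0 type datacenter
        # then one host (and its OSD) inside each datacenter
        step chooseleaf firstn 1 type host
        step emit
    }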
-
ATARI is still alive: Atari Partition of Fear
Ceph: an open source distributed storage system
- The Coroutines Conundrum: Why Writing Unit Tests for ASIO and P2300 Proposals is a Pain, and How We Can Fix It
-
I'm looking for latest howto for ceph command line completion setup for bash/zsh: `ceph...`, `radosgw-admin...`, other useful ones, etc.
EDIT: Right after I posted that I realized those files must be maintained somewhere. So ignore my suggestion of the hard option below; just follow this link: https://github.com/ceph/ceph/tree/main/src/bash_completion
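Assuming the files still live at that path, installing one of them is roughly:

    # fetch the ceph CLI completion file; the same directory also holds
    # completions for rados, rbd and radosgw-admin
    sudo curl -sL -o /etc/bash_completion.d/ceph \
      https://raw.githubusercontent.com/ceph/ceph/main/src/bash_completion/ceph
    source /etc/bash_completion.d/ceph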
-
Proxmox cluster traffic over wifi, ceph over wired?
Software-defined storage via fucking wifi???
-
How many HDDs is too many for a pool of mirrors? When is RAID Z2 a better option?
Have you considered using the ceph file system?
-
NAS on a cluster
Can OpenMediaVault run on multiple machines but present each machine's storage space as a single drive? I know that ceph.io can do this but I'm struggling with ceph.
What are some alternatives?
longhorn - Cloud-Native distributed storage built on and for Kubernetes
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
ceph-csi - CSI driver for Ceph
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
velero - Backup and migrate Kubernetes applications and their persistent volumes
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Nginx Proxy Manager - Docker container for managing Nginx proxy hosts with a simple, powerful interface
Apache Hadoop - Apache Hadoop
hub-feedback - Feedback and bug reports for the Docker Hub
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
democratic-csi - csi storage for container orchestration systems
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.