Ceph
rook
| | Ceph | rook |
| --- | --- | --- |
| Mentions | 34 | 51 |
| Stars | 13,088 | 11,832 |
| Growth | 1.7% | 1.1% |
| Activity | 10.0 | 9.9 |
| Last commit | 7 days ago | 6 days ago |
| Language | C++ | Go |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Ceph
- Problem with building/starting downloaded projects
-
ATARI is still alive: Atari Partition of Fear
Ceph: An open source distributed storage system
-
Proxmox cluster traffic over wifi, ceph over wired?
Software-defined storage over fucking wifi??
-
How many HDDs is too many for a pool of mirrors? When is RAID Z2 a better option?
Have you considered using the ceph file system?
-
NAS on a cluster
Can OpenMediaVault run on multiple machines but present each machine's storage space as a single drive? I know that ceph.io can do this but I'm struggling with ceph.
-
[Docker Swarm] How to use Let's Encrypt TLS/SSL with multiple reverse proxies?
```yaml
- name: Fetch installation script
  ansible.builtin.uri:
    url: https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    dest: /home/ansible
  register: installation_script
```
-
Can someone tell me if this is possible or even a good idea?
My friends and I all use Linux and often use each other's laptops or desktops when we forget ours. I had the idea to install Linux on a USB drive so I could have all my stuff and use my system on any machine. That was cool, but then I ran into the same problem of forgetting it sometimes. So my new idea (which I have no idea how I'd achieve) is to use something like MooseFS or Ceph or some other distributed filesystem for our home partition. Then we could just log in and have all our files and customizations there almost seamlessly. I don't know how, or whether, it would work, but it seems like it could. What do you think?
-
Storage provider benchmarks round 2 part 2: Installing Rook and OpenEBS Mayastor
This post has notes on installing Ceph and OpenEBS's Mayastor. I made a bunch of mistakes (and filed some bugs) installing Rook; most of the errors were self-inflicted (a missing CRD here, a missing RBAC rule there), but one is legitimate: if you are planning on using Rook with a partition, use Ceph 15.2.6 for now, since the ceph-volume batch command is currently broken (or at least "differently enhanced") for partitions.
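Pinning the Ceph release that Rook deploys comes down to one field in the CephCluster spec. A minimal sketch, assuming Rook's usual rook-ceph namespace and the ceph/ceph image name; the registry, tag format, and device name are illustrative, not taken from the post:

```yaml
# Hypothetical CephCluster excerpt: pin the Ceph image so Rook deploys a
# release whose ceph-volume batch handles partitions as expected.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.6        # version called out in the post; registry/tag are assumptions
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: true
    useAllDevices: false
    devices:
      - name: "sdb2"                # example partition on each node; adjust to your hosts
```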
rook
-
Ceph: A Journey to 1 TiB/s
I have some experience with Ceph, both for work, and with homelab-y stuff.
First, bear in mind that Ceph is a distributed storage system - so the idea is that you will have multiple nodes.
For learning, you can definitely virtualise it all on a single box - but you'll have a better time with discrete physical machines.
Also, Ceph prefers direct access to physical disks (similar to ZFS).
And you do need decent network connectivity - I think that's the main thing people have in mind when they think of Ceph's high hardware requirements. Ideally 10GbE at minimum, and more if you want higher performance; there can be a lot of network traffic, particularly with things like backfill. (25GbE if you can find that gear cheap for a homelab - 50GbE is a technological dead end, while 100GbE works well.)
But honestly, for a homelab, a cheap mini PC or NUC with 10GbE will work fine; you should get acceptable performance, and it'll be good for learning.
You can install Ceph directly on bare metal, or if you want to go the homelab k8s route, you can use Rook (https://rook.io/) - a minimal example of that route is sketched just below.
Hope this helps, and good luck! Let me know if you have any other questions.
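To make the Rook route above concrete, here is a minimal sketch of the cluster definition you would apply after installing the Rook operator (for example via its Helm chart). The image tag and mon count are assumptions for a small three-node homelab, not prescriptions:

```yaml
# Hypothetical small-homelab CephCluster; Rook's operator turns this into
# mon/mgr/OSD pods on the Kubernetes nodes.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.2   # assumed current release tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                           # one monitor per node on a 3-node lab
    allowMultiplePerNode: false
  storage:
    useAllNodes: true
    useAllDevices: true                # Rook claims every empty, unformatted disk it finds
```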
-
Running stateful workloads on Kubernetes with Rook Ceph
Another option is to leverage a Kubernetes-native distributed storage solution such as Rook Ceph as the storage backend for stateful components running on Kubernetes. This has the benefit of simplifying application configuration while addressing business requirements for data backup and recovery, such as the ability to take volume snapshots at regular intervals and perform application-level data recovery in case of a disaster.
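As one concrete instance of the snapshot capability mentioned above, a CSI VolumeSnapshot can be taken against a Rook Ceph RBD volume on whatever schedule your tooling drives. A sketch, assuming the csi-rbdplugin-snapclass VolumeSnapshotClass from Rook's examples; the PVC and namespace names are illustrative:

```yaml
# Hypothetical point-in-time snapshot of a PVC backed by Rook Ceph block storage.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-20240101
  namespace: databases
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass   # name follows Rook's sample manifests
  source:
    persistentVolumeClaimName: postgres-data          # illustrative PVC name
```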
-
Want advice on planned evolution: k3os/Longhorn --> Talos/Ceph, plus Consul and Vault
I've briefly run Ceph in external mode; you can actually use a Rook deployment to manage it (sort of). Here is the documentation for doing that. For me it didn't pass my testing phase, because I need better networking equipment before I can try that.
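For context, Rook's external mode is expressed as a CephCluster resource that attaches to an existing Ceph cluster rather than provisioning one; the connection credentials are imported separately with the script from Rook's external-cluster documentation. A minimal sketch, with names following Rook's examples:

```yaml
# Hypothetical external-mode CephCluster: Rook manages the consumers (CSI, etc.)
# while the Ceph daemons themselves run outside Kubernetes.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph-external
spec:
  external:
    enable: true          # do not create mons/OSDs; attach to an existing cluster
  crashCollector:
    disable: true
  dataDirHostPath: /var/lib/rook
```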
-
ATARI is still alive: Atari Partition of Fear
This article explains the data corruption issue that happened in Rook in 2021. The root cause lies in an unexpected place and can also occur in any Ceph environment. It's interesting that Rook only started to encounter this problem recently, even though it has existed for a long time; that's due to a series of coincidences. I wrote this article because the word "Atari" was used in a non-historical context in 2021.
-
How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
Rook (this is a nice article for Rook NFS)
-
Running on-premise k8s with a small team: possible or potential nightmare?
Storage: favor whatever distributed storage you already know for Persistent Volumes: Ceph, maybe via rook.io; Longhorn if you go the Rancher route; etc.
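If you take the Ceph-via-Rook option, workloads consume it through an ordinary PersistentVolumeClaim against the StorageClass that Rook's example manifests create. A sketch, assuming the rook-ceph-block class name; the claim name and size are illustrative:

```yaml
# Hypothetical PVC provisioned by the Rook Ceph RBD CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block   # created by Rook's example StorageClass manifest
  resources:
    requests:
      storage: 10Gi
```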
-
My completely automated Homelab featuring Kubernetes
I've dealt with a lot of issues that are very close to just unplugging a node. Unfortunately, on node loss, my stateful workloads using rook-ceph block storage won't migrate over to another node automatically, due to an issue with Rook. Stateless apps (ingress-nginx, etc.) not using rook-ceph block fail over to another node just fine. I've kind of accepted this for now; I know Longhorn has a feature that makes this work, but I find rook-ceph to be more stable for my workloads.
-
[HELP] PXE Boot without data loss
Third, it sounds like you're building a cluster. For this you'll either want a central file server or, better, set up a distributed storage system - for example, a Ceph cluster managed by Rook. That way you can fully wipe a single node and the system will be able to recover/replicate the data.
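The wipe-and-recover property comes from replication: with a replicated pool, every object lives on several hosts, so a rebuilt node simply backfills from the surviving copies. A sketch of such a pool under Rook, assuming the usual rook-ceph namespace; the pool name is illustrative:

```yaml
# Hypothetical CephBlockPool keeping three copies of each object on distinct hosts.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # spread replicas across hosts, not just OSDs
  replicated:
    size: 3             # survives loss (or a full wipe) of any single host
```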
- SaaS Deployment Options
-
For those managing k8s clusters, are you using Rook + Ceph?
I just helped write a quick summary of why you can trust your persistent workloads to Ceph, managed by Rook, and it occurred to me that... I'm probably wrong.
What are some alternatives?
longhorn - Cloud-Native distributed storage built on and for Kubernetes
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Apache Hadoop - Apache Hadoop
ceph-csi - CSI driver for Ceph
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
LeoFS - The LeoFS Storage System
OpenAFS - Fork of OpenAFS from git.openafs.org for visualization
XtreemFS - Distributed Fault-Tolerant File System
SheepDog - Distributed Storage System for QEMU