Top 21 Ceph Open-Source Projects
- Bareos: a cross-network open-source backup solution (licensed under AGPLv3) which preserves, archives, and recovers data from all major operating systems.
- kURL: a production-grade, air-gapped Kubernetes installer combining upstream k8s with overlays and popular components.
- benji: Benji Backup, a block-based deduplicating backup software for Ceph RBD images, iSCSI targets, image files, and block devices.
- ceph-balancer: an alternative Ceph placement optimizer, aiming for maximum storage capacity through equal OSD utilization.
- cluster: Lab Cluster, a Kubernetes (k3s) cluster managed by GitOps (Flux), built on Proxmox using Terraform and Ansible. (by dfroberg)
I have some experience with Ceph, both for work and with homelab-y stuff.
First, bear in mind that Ceph is a distributed storage system - so the idea is that you will have multiple nodes.
For learning, you can definitely virtualise it all on a single box - but you'll have a better time with discrete physical machines.
Also, Ceph prefers direct access to raw disks (similar to ZFS).
And you do need decent network connectivity - I think that's the main thing people have in mind when they think of Ceph's high hardware requirements. Ideally 10GbE at a minimum, and more if you want higher performance - there can be a lot of network traffic, particularly with things like backfill. (25GbE if you can find that gear cheap for a homelab; 50GbE is a technological dead end; 100GbE works well.)
But honestly, for a homelab, a cheap mini PC or NUC with 10GbE will work fine, you should get acceptable performance, and it'll be good for learning.
You can install Ceph directly on bare metal, or, if you want to go the homelab Kubernetes route, you can use Rook (https://rook.io/).
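If you go the bare-metal route, the official Python bindings (the python3-rados package) are a handy way to sanity-check that a client machine can actually reach the cluster. A minimal sketch, assuming a readable /etc/ceph/ceph.conf and a client keyring are present on the machine (paths and keyring are whatever your deployment uses):

```python
# Minimal reachability check against a Ceph cluster, assuming the
# python3-rados bindings and a keyring referenced by ceph.conf.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

print("fsid:", cluster.get_fsid())

stats = cluster.get_cluster_stats()  # dict with kb, kb_used, kb_avail, num_objects
print("used / total KiB:", stats["kb_used"], "/", stats["kb"])

for pool in cluster.list_pools():
    print("pool:", pool)

cluster.shutdown()
```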
Hope this helps, and good luck! Let me know if you have any other questions.
Project mention: Issue with starting OSDs - every host has same cluster_addr and public_addr | /r/ceph | 2023-08-05
I'm having some struggles with my Ceph Octopus cluster, which I just converted from a ceph-ansible deployment to cephadm. I used the adopt playbook here (https://github.com/ceph/ceph-ansible/blob/main/infrastructure-playbooks/cephadm-adopt.yml) and it reported all successful. The Ceph health is all OK. However, when I try to restart an OSD with 'ceph orch daemon restart ', the OSD does not come up with the below error:
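One way to see whether the adopted cluster really ended up with the same address pinned for every daemon is to dump the mon config database and group the public_addr/cluster_addr entries by value. A rough diagnostic sketch (not a fix), assuming the ceph CLI is available and that the JSON field names match what your release emits; note the addresses could also live in per-host ceph.conf files rather than the config db:

```python
# Rough diagnostic: flag public_addr/cluster_addr values shared by multiple
# daemons in the mon config DB. The field names ("section", "name", "value")
# are an assumption about the JSON output; adjust if your version differs.
import json
import subprocess
from collections import defaultdict

raw = subprocess.check_output(["ceph", "config", "dump", "--format", "json"])
by_value = defaultdict(list)

for entry in json.loads(raw):
    if entry.get("name") in ("public_addr", "cluster_addr"):
        by_value[(entry["name"], entry["value"])].append(entry.get("section", "?"))

for (option, value), owners in sorted(by_value.items()):
    if len(owners) > 1:
        print(f"{option}={value} is set for multiple daemons: {', '.join(owners)}")
```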
So, apparently Ceph doesn't provide this feature anymore, and while this works fine for RBD, I'm left a bit confused. Following these examples https://github.com/ceph/ceph-csi/tree/devel/examples/cephfs I thought it would be enough, but I'm not sure anymore.
Wow, thanks! Yes, we created and maintain the kurl.sh project that OP mentioned (disclaimer: I work there). Our customers (HashiCorp, BigID, SmartBear, etc.) basically get all the tooling to do the commercial things the OP mentioned (combining it with Helm or KOTS (our installer), Troubleshoot.sh for disconnected troubleshooting, etc.).
Overbuilt and OTT? Sure... but this works fantastically for my use case. I have current backups of everything except my media library, because of its size: my VMs are all backed up to my Synology nightly using Backy2, my application data gets dumped to that same Synology NAS nightly as well, and all of that also gets synced to Glacier deep storage once a week using Duplicity. I'm going to be adding a new ZFS array later in the year to replace my Synology, and hopefully I'll build it out with enough storage to take my media library as well.
I sincerely recommend checking out MicroCeph; it is designed specifically for smaller edge clusters and homelabs.
Project mention: Is there a nixos solution for hyperconverged infrastructure? | /r/NixOS | 2023-05-31
Skyflake, which lets you configure a Nomad cluster of NixOS micro VMs running on NixOS hosts: https://github.com/astro/skyflake
We extract all the radosgw metrics with https://github.com/blemmenes/radosgw_usage_exporter to generate alerts from Alertmanager.
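For a quick check that the exporter is actually serving data before wiring it into Prometheus and Alertmanager, you can scrape its metrics endpoint by hand. A small sketch; the hostname, port, and the radosgw_ metric prefix are assumptions about a typical deployment, so adjust them to match yours:

```python
# Fetch the exporter's Prometheus-format output and print the radosgw series.
# The URL below is a placeholder; point it at wherever the exporter listens.
import urllib.request

EXPORTER_URL = "http://radosgw-exporter.internal:9242/metrics"  # hypothetical host/port

with urllib.request.urlopen(EXPORTER_URL, timeout=5) as resp:
    body = resp.read().decode("utf-8")

for line in body.splitlines():
    if line.startswith("radosgw_"):  # skips HELP/TYPE comments and unrelated series
        print(line)
```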
Ceph related posts
- Ceph: A Journey to 1 TiB/s
- Bare-Metal Kubernetes, Part I: Talos on Hetzner
- Issue with starting OSDs - every host has same cluster_addr and public_addr
- People who run Nextcloud in Docker: Where do you store your data/files? In a Docker volume, or on a remote server/NAS?
- Are small ceph clusters viable?
- kubernetes snapshot of cephfs pvc
- Genius or Stupid? Looking for feedback on Ceph/Rook low power nodes for homelab
Index
What are some of the best open-source Ceph projects? This list will help you:
# | Project | Stars
---|---|---
1 | rook | 11,890 |
2 | ceph-ansible | 1,633 |
3 | ceph-csi | 1,147 |
4 | Bareos | 931 |
5 | Rome | 812 |
6 | kURL | 718 |
7 | cn | 231 |
8 | backy2 | 189 |
9 | microceph | 171 |
10 | skyflake | 146 |
11 | benji | 136 |
12 | ceph-balancer | 92 |
13 | cephadm-ansible | 86 |
14 | ceph-nvmeof | 69 |
15 | kronform | 58 |
16 | cluster | 57 |
17 | Ceph-Pi | 51 |
18 | radosgw_usage_exporter | 46 |
19 | cloud-native-platform | 28 |
20 | ceph_proxmox_scripts | 14 |
21 | OpenMetal OpenStack Documentation | 2 |