ceph-balancer
An alternative Ceph placement optimizer, aiming for maximum storage capacity through equal OSD utilization. (by TheJJ)
ceph-ansible
Ansible playbooks to deploy Ceph, the distributed filesystem. (by ceph)
| | ceph-balancer | ceph-ansible |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 94 | 1,638 |
| Growth (stars, month over month) | - | 0.5% |
| Activity | 7.8 | 8.8 |
| Last commit | about 1 month ago | 6 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ceph-balancer
Posts with mentions or reviews of ceph-balancer. We have used some of these posts to build our list of alternatives and similar projects.
- Unbalanced OSD
  After turning the balancer on, setting it to upmap, and maybe increasing PGs, if you still need more balance you can always give TheJJ's balancer a try (see the command sketch after this list): https://github.com/TheJJ/ceph-balancer
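For context, the built-in balancer steps mentioned above are plain ceph CLI calls, and TheJJ's balancer is a standalone script. A minimal sketch, assuming the script name from the repo's README and using a hypothetical pool name and PG count:

    # enable the built-in balancer and switch it to upmap mode
    ceph balancer on
    ceph balancer mode upmap
    ceph balancer status

    # optionally raise a pool's PG count first (pool name and count
    # here are placeholders)
    ceph osd pool set mypool pg_num 256

    # TheJJ's balancer prints upmap commands instead of applying them;
    # review the output, then run it (script name as in the repo)
    ./placementoptimizer.py -v balance | tee /tmp/balance-upmaps
    bash /tmp/balance-upmaps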
ceph-ansible
Posts with mentions or reviews of ceph-ansible. We have used some of these posts to build our list of alternatives and similar projects.
- Issue with starting OSDs - every host has same cluster_addr and public_addr
  I'm having some struggles with my Ceph Octopus cluster, which I just converted from a ceph-ansible to a cephadm deployment. I used the adopt playbook here (https://github.com/ceph/ceph-ansible/blob/main/infrastructure-playbooks/cephadm-adopt.yml) and it reported all successful; Ceph health is all OK. However, when I try to restart an OSD with 'ceph orch daemon restart ', the OSD does not come up, with the error below. (A sketch of the adopt-playbook invocation follows after this list.)
- OSDs won't come back up after host reboot (Pacific)
- Ceph-ansible: oops, I lost all my group_vars...
- How to use a partition on an SSD as a WAL/DB device?
  Are there any workarounds? From my searches online, all I've found are workarounds for older releases (Octopus) or different installation methods (Ansible), such as this (see the ceph-volume sketch after this list): https://github.com/ceph/ceph-ansible/issues/4790
- ceph-ansible with multipath devices
  Yeah, that's what I did. It might be a ceph-volume bug (not the first I've encountered...). I pretty much got this in the logs: https://github.com/ceph/ceph-ansible/issues/4735
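As a reference for the cephadm conversion in the first post above, the adopt playbook is invoked like any other ceph-ansible infrastructure playbook. A rough sketch, assuming a checkout of ceph/ceph-ansible and an inventory file named hosts:

    # run the adoption playbook against the existing inventory
    ansible-playbook -i hosts infrastructure-playbooks/cephadm-adopt.yml

    # afterwards the daemons are managed by cephadm, e.g.:
    ceph orch ps
    ceph orch daemon restart osd.0    # the daemon id is a placeholder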
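On the WAL/DB question, the usual route outside of ceph-ansible is ceph-volume directly. A minimal sketch with hypothetical device paths; placing block.db on the SSD partition also covers the WAL unless you split it onto a separate device:

    # create a bluestore OSD with data on a spinning disk and the
    # DB (and implicitly the WAL) on an SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1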
What are some alternatives?
When comparing ceph-balancer and ceph-ansible, you can also consider the following projects:
cephadm-ansible - ansible playbooks to be used with cephadm
community.vmware - Ansible Collection for VMware
radosgw_usage_exporter - Prometheus exporter for scraping Ceph RADOSGW usage data.
Ansible - Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy and maintain. Automate everything from code deployment to network configuration to cloud management, in a language that approaches plain English, using SSH, with no agents to install on remote systems. https://docs.ansible.com.
ansible-junos-stdlib - Junos modules for Ansible