velero
rook
| | velero | rook |
|---|---|---|
| Mentions | 42 | 51 |
| Stars | 8,132 | 11,832 |
| Growth | 2.0% | 1.1% |
| Activity | 9.6 | 9.9 |
| Last Commit | 7 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
velero
- What is the proper, kubernetes native way of working with multiple clusters for DR, HA?
OpenShift, last I looked, used Velero under the covers for this functionality, and it works fine in standard Kubernetes. Most if not all of what OpenShift does is open source.
- Is there a way to clone an existing Azure Kubernetes Cluster?
Velero
- Tool for dumping manifests from your Kubernetes clusters
While not discounting OP or the work in this repo (seems like a fun k8s/go project), folks might check out Velero for this purpose if they're looking to rely on this kind of export in prod: https://github.com/vmware-tanzu/velero
- Kubernetes postgres backups
For Kubernetes-land, https://velero.io/ is awesome - but I haven't used it for online-database backups yet. If you're exploring, I'd check out Velero - if you just need something to work reliably, I'd check out Percona.
- (Longhorn/K3s) Failed cluster, made new cluster, are PVs salvageable?
You can also leverage https://velero.io/ to back up both cluster state and PVC state to S3
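As a rough illustration of the comment above, here is a minimal Go sketch that requests such a backup by creating Velero's `Backup` custom resource with the Kubernetes dynamic client - roughly what `velero backup create` does. The backup name, the `apps` namespace, and the existence of an S3-backed `BackupStorageLocation` named `default` are all assumptions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a working ~/.kube/config and Velero installed in the "velero" namespace.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Velero's Backup custom resource.
	backupGVR := schema.GroupVersionResource{
		Group: "velero.io", Version: "v1", Resource: "backups",
	}

	backup := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "velero.io/v1",
		"kind":       "Backup",
		"metadata": map[string]interface{}{
			"name":      "apps-nightly", // hypothetical backup name
			"namespace": "velero",
		},
		"spec": map[string]interface{}{
			"includedNamespaces": []interface{}{"apps"}, // hypothetical namespace to protect
			"snapshotVolumes":    true,                  // capture PVC state as well as objects
			"storageLocation":    "default",             // assumed S3-backed BackupStorageLocation
		},
	}}

	if _, err := client.Resource(backupGVR).Namespace("velero").
		Create(context.TODO(), backup, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("requested backup of cluster objects and volume state to S3")
}
```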
- How to backup / snapshot and restore full EKS cluster(s)?
I use this https://velero.io and it works great.
- Multi cluster vs namespaces
- BorgBackup, Deduplicating archiver with compression and encryption
I'm using Velero to do this in my toy Kubernetes clusters. It uses Restic under the hood and can store things in S3. By default it will take a filesystem-level copy of whatever is on a PV. It looks like it supports hooks, e.g. to run pg_backup like you mentioned, but I haven't used them.
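To make the restic and hook bits above concrete, here is a hedged Go sketch that annotates a pod the way Velero expects: `backup.velero.io/backup-volumes` opts a volume into the filesystem-level (restic) copy, and the `pre.hook.backup.velero.io/*` annotations run a command inside the pod before its volumes are backed up. The `db` namespace, `postgres-0` pod, `data` volume, and the pg_dump command are made up for illustration:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a working ~/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Strategic merge patch adding the annotations Velero reads at backup time:
	// opt the "data" volume into the restic copy, and dump the database first.
	patch := []byte(`{
	  "metadata": {
	    "annotations": {
	      "backup.velero.io/backup-volumes": "data",
	      "pre.hook.backup.velero.io/container": "postgres",
	      "pre.hook.backup.velero.io/command": "[\"/bin/sh\", \"-c\", \"pg_dump mydb > /var/lib/postgresql/data/dump.sql\"]"
	    }
	  }
	}`)

	if _, err := client.CoreV1().Pods("db").Patch(context.TODO(), "postgres-0",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod annotated for restic volume backup with a pre-backup hook")
}
```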
- convert storageclasses of existing PVs
Check out https://velero.io
- automated volume snapshots in gke
rook
- Ceph: A Journey to 1 TiB/s
I have some experience with Ceph, both for work, and with homelab-y stuff.
First, bear in mind that Ceph is a distributed storage system - so the idea is that you will have multiple nodes.
For learning, you can definitely virtualise it all on a single box - but you'll have a better time with discrete physical machines.
Also, Ceph does prefer physical access to disks (similar to ZFS).
And you do need decent network connectivity - I think that's the main thing people have in mind when they think of high hardware requirements for Ceph. Ideally 10GbE at a minimum - more if you want higher performance - since there can be a lot of network traffic, particularly with things like backfill. (25Gbps if you can find that gear cheap for a homelab; 50Gbps is a technological dead end; 100Gbps works well.)
But honestly, for a homelab, a cheap mini PC or NUC with 10GbE will work fine; you should get acceptable performance, and it'll be good for learning.
You can install Ceph directly on bare-metal, or if you want to do the homelab k8s route, you can use Rook (https://rook.io/).
Hope this helps, and good luck! Let me know if you have any other questions.
- Running stateful workloads on Kubernetes with Rook Ceph
Another option is to leverage a Kubernetes-native distributed storage solution such as Rook Ceph as the storage backend for stateful components running on Kubernetes. This has the benefit of simplifying application configuration while addressing business requirements for data backup and recovery, such as the ability to take volume snapshots at regular intervals and perform application-level data recovery in case of a disaster.
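For a feel of the volume-snapshot part, here is a hedged Go sketch that creates a one-off CSI `VolumeSnapshot` of a PVC backed by Rook Ceph; run from a CronJob or any other scheduler, it gives the "regular interval" behaviour described above. The `apps` namespace, `app-data` PVC, and `csi-rbdplugin-snapclass` snapshot class are assumptions, while `snapshot.storage.k8s.io/v1` is the standard CSI snapshot API that Rook's Ceph CSI driver implements:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a working ~/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The standard CSI snapshot API.
	snapGVR := schema.GroupVersionResource{
		Group: "snapshot.storage.k8s.io", Version: "v1", Resource: "volumesnapshots",
	}

	// One point-in-time snapshot of the PVC, named with a timestamp so repeated
	// runs don't collide.
	name := fmt.Sprintf("app-data-%d", time.Now().Unix())
	snap := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "snapshot.storage.k8s.io/v1",
		"kind":       "VolumeSnapshot",
		"metadata":   map[string]interface{}{"name": name, "namespace": "apps"},
		"spec": map[string]interface{}{
			"volumeSnapshotClassName": "csi-rbdplugin-snapclass", // hypothetical snapshot class
			"source": map[string]interface{}{
				"persistentVolumeClaimName": "app-data", // hypothetical PVC
			},
		},
	}}

	if _, err := client.Resource(snapGVR).Namespace("apps").
		Create(context.TODO(), snap, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created snapshot", name)
}
```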
- Want advice on planned evolution: k3os/Longhorn --> Talos/Ceph, plus Consul and Vault
I've briefly run Ceph in external mode; you can actually use a Rook deployment to manage it (sort of). Here is the documentation for doing that. For me it didn't pass my testing phase, because I need better networking equipment before I can try that.
- ATARI is still alive: Atari Partition of Fear
This article explains the data corruption issue that happened in Rook in 2021. The root cause lies in an unexpected place and can also occur in any Ceph environment. It's interesting that Rook only started to encounter this problem recently even though it has existed for a long time - that's due to a series of coincidences. I wrote this article because the word "Atari" was used in a non-historical context in 2021.
- How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
Rook (this is a nice article for Rook NFS)
- Running on-premise k8s with a small team: possible or potential nightmare?
Storage: favor whatever distributed storage you already know for Persistent Volumes to start with: Ceph, maybe via rook.io; Longhorn if you go the Rancher route; etc.
- My completely automated Homelab featuring Kubernetes
I've dealt with a lot of issues that are very close to just unplugging a node. Unfortunately, on node loss, my stateful workloads using rook-ceph block storage won't migrate over to another node automatically due to an issue with Rook. Stateless apps (ingress-nginx, etc.) not using rook-ceph block storage fail over to another node just fine. I've kind of accepted this for now; I know Longhorn has a feature that makes this work, but I find rook-ceph to be more stable for my workloads.
- [HELP] PXE Boot without data loss
Third, it sounds like you're building a cluster. For this you'll either want a central file server or, better, a distributed storage system - for example a Ceph cluster managed by Rook. This way you can fully wipe a single node and the system will be able to recover/replicate the data.
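As a sketch of the replication side of that suggestion, a Rook `CephBlockPool` is where the "survive losing a node" property is typically configured: with a `host` failure domain and `replicated.size: 3`, every block lives on three different nodes, so one node can be wiped and the data is rebuilt from the remaining replicas. The pool name and the `rook-ceph` namespace follow Rook's common defaults but are assumptions here, not taken from the comment above:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a working ~/.kube/config and a Rook operator running in "rook-ceph".
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Rook's CephBlockPool custom resource.
	poolGVR := schema.GroupVersionResource{
		Group: "ceph.rook.io", Version: "v1", Resource: "cephblockpools",
	}

	pool := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "ceph.rook.io/v1",
		"kind":       "CephBlockPool",
		"metadata": map[string]interface{}{
			"name":      "replicapool", // hypothetical pool name
			"namespace": "rook-ceph",
		},
		"spec": map[string]interface{}{
			"failureDomain": "host", // place replicas on different nodes
			"replicated": map[string]interface{}{
				"size": int64(3), // three copies of every block
			},
		},
	}}

	if _, err := client.Resource(poolGVR).Namespace("rook-ceph").
		Create(context.TODO(), pool, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created a 3-way replicated Ceph block pool")
}
```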
- SaaS Deployment Options
- For those managing k8s clusters, are you using Rook + Ceph?
I just helped write a quick summary of why you can trust your persistent workloads to Ceph managed by Rook, and it occurred to me that... I'm probably wrong.
What are some alternatives?
longhorn - Cloud-Native distributed storage built on and for Kubernetes
ceph-csi - CSI driver for Ceph
k8s-object-dumper - Kubernetes object dumper for use as a pre backup command in K8up.
prometheus - The Prometheus monitoring system and time series database.
Ceph - Ceph is a distributed object, block, and file storage platform
istio - Connect, secure, control, and observe services.
Scaleway-cli - Command Line Interface for Scaleway
Nginx Proxy Manager - Docker container for managing Nginx proxy hosts with a simple, powerful interface
democratic-csi - csi storage for container orchestration systems
hub-feedback - Feedback and bug reports for the Docker Hub
vsphere-csi-driver - vSphere storage Container Storage Interface (CSI) plugin
piraeus-operator - The Piraeus Operator manages LINSTOR clusters in Kubernetes.