operator vs longhorn

| | operator | longhorn |
|---|---|---|
| Mentions | 5 | 82 |
| Stars | 1,272 | 6,520 |
| Growth | 1.4% | 2.7% |
| Activity | 8.9 | 9.5 |
| Latest commit | 6 days ago | 5 days ago |
| Language | Go | Shell |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
operator
-
My recently deployed media apps in ArgoCD, migrating from Terraform.
minio has a k8s operator as well, which I use at work: https://github.com/minio/operator
- Using the MinIO operator in an AKS cluster. MinIO tenants stuck in "Waiting for pod get ready" even though the pods are ready. What to do? The solutions in the GitHub issues aren't helping :(
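As a hedged starting point for a stuck tenant like this, the usual first step is to see what the operator itself reports; the resource, namespace and deployment names below (my-tenant, minio-tenant, minio-operator) are placeholders, not values from the post, and depend on how the operator was installed.

```bash
# Inspect the Tenant custom resource, its pods, and the operator's logs;
# all names here are placeholders.
kubectl get tenants.minio.min.io -A
kubectl -n minio-tenant describe tenant my-tenant        # check Status and Events
kubectl -n minio-tenant get pods -o wide
kubectl -n minio-operator logs deploy/minio-operator     # the operator's view of readiness
```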
-
Need help adding TLS certificates to a tenant in a k3s cluster
So far I have seen some MinIO documentation (1, 2) about how to add the certificates, but I haven't been able to set it up correctly :-(
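A rough sketch of the usual approach, assuming the Tenant CRD exposes an externalCertSecret field and using placeholder names (minio-tenant, my-tenant, my-tenant-tls); the exact field layout should be checked against the operator version in use.

```bash
# Create a TLS secret from an existing certificate/key pair (file names are
# placeholders), then reference it from the Tenant so MinIO serves it.
kubectl -n minio-tenant create secret tls my-tenant-tls \
  --cert=public.crt --key=private.key

# Assumes the Tenant spec has externalCertSecret; verify against your CRD version.
kubectl -n minio-tenant patch tenant my-tenant --type=merge \
  -p '{"spec":{"externalCertSecret":[{"name":"my-tenant-tls","type":"kubernetes.io/tls"}]}}'
```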
-
I'm slightly struggling to understand MinIO concepts
Yesterday, I tried to install MinIO, as I'm looking for a distributed object storage solution for Spark jobs on Kubernetes. I read the docs, went to the GitHub repo and followed the steps to install it using Helm. Here's what I did:
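The excerpt cuts off here. For reference only (this is not necessarily what the poster did), a typical Helm-based install of the operator looks roughly like the sketch below; the repo URL and chart names follow the operator's published charts and should be verified against its README.

```bash
# Operator first, then a tenant as a separate release; check chart names and
# versions against the operator's documentation before relying on them.
helm repo add minio-operator https://operator.min.io
helm repo update
helm install operator minio-operator/operator -n minio-operator --create-namespace
helm install my-tenant minio-operator/tenant -n minio-tenant --create-namespace
```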
-
Monitor Minio with Prometheus on Kubernetes
Thank you! I assume you refer to this operator here, am I right? I will have a look at this, and also at the annotations possibility. Another question: is it deliberate that no Helm install is available?
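The "annotations possibility" usually means the conventional prometheus.io annotations that many scrape configs honour; a hedged sketch, assuming a Service named minio in namespace minio and the usual MinIO metrics path and port.

```bash
# These annotations are a convention picked up by common kubernetes_sd scrape
# configs, not something Prometheus enforces on its own; names, path and port
# are assumptions to adjust for your deployment.
kubectl -n minio annotate service minio \
  prometheus.io/scrape="true" \
  prometheus.io/path="/minio/v2/metrics/cluster" \
  prometheus.io/port="9000"
# MinIO normally expects a bearer token for metrics; setting
# MINIO_PROMETHEUS_AUTH_TYPE=public on the MinIO pods is the simple (less
# secure) way to allow unauthenticated scraping.
```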
longhorn
-
Kubernetes homelab - Learning by doing, Part 4: Storage
Distributed storage systems enable us to store data that can be made available clusterwide. Excellent! But dynamically apportioning storage across a multi-node cluster is a very complex job. So this is another area where Kubernetes typically outsources the job to plugins (e.g. Cloud providers like Azure or AWS, or systems like Rook or Longhorn).
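Concretely, that outsourcing shows up in the cluster as CSI drivers and the StorageClasses they expose; a small sketch (the longhorn class name and provisioner are the commonly documented defaults and should be verified).

```bash
# Workloads only pick a StorageClass by name; the plugin behind it does the
# actual provisioning across the nodes.
kubectl get csidrivers
kubectl get storageclass
# After installing Longhorn you would typically expect something like:
#   longhorn   driver.longhorn.io   Delete   Immediate   true
```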
-
Setting Up The Home Lab: Setting up Kubernetes Using Ansible
Since I want to play with Kubernetes anyway, I'll set up a k8s cluster. It will have 2 master and 4 worker nodes. Each VM will have 4 cores, 8 GB of RAM, a 32 GB root virtual disk, and a 250 GB data virtual disk for Longhorn volumes. I'll create an ansible user via cloud-init and allow access via SSH.
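A hypothetical Ansible inventory matching that 2-master / 4-worker layout; the hostnames, the ansible user and the data-disk path are illustrative only.

```bash
# Illustrative inventory only; adjust hostnames, user and disk device to taste.
cat > inventory.ini <<'EOF'
[masters]
k8s-master-1
k8s-master-2

[workers]
k8s-worker-[1:4]

[k8s:children]
masters
workers

[k8s:vars]
ansible_user=ansible
longhorn_data_disk=/dev/sdb   # the 250 GB data disk reserved for Longhorn volumes
EOF
```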
-
My First Kubernetes: k3s 'cluster' on 3 Orange Pi Zero 3's
That's a sweet setup.
Have you come across Longhorn[0]?
I wanted to have a look at that for storage when I was using Pis as it theoretically should be lighter-weight than Ceph, who knows. Didn't get around to it though.
[0] https://longhorn.io/
-
Clusters Are Cattle Until You Deploy Ingress
Dan: Argo CD is the first tool I install. For AWS, I will add Karpenter to manage costs. I will also use Longhorn for on-prem storage solutions, though I'd need ingress. Depending on the situation, I will install Argo CD first and then one of those other two.
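In practice, "Argo CD first, then Longhorn" can look like an Argo CD Application that pulls the Longhorn Helm chart; a sketch, assuming the charts.longhorn.io repo and an illustrative chart version.

```bash
# Sketch only: repo URL, chart version and namespaces should be checked against
# the Longhorn docs for the release you actually want.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: longhorn
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.longhorn.io
    chart: longhorn
    targetRevision: 1.6.0          # illustrative version
  destination:
    server: https://kubernetes.default.svc
    namespace: longhorn-system
  syncPolicy:
    automated: {}
    syncOptions: ["CreateNamespace=true"]
EOF
```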
-
Why Kubernetes Was a Mistake for My SaaS Business (🤯)
I overcame this issue with Longhorn, which is native distributed block storage for Kubernetes and supports ReadWriteMany (RWX) out of the box, not only ReadWriteOnce (RWO).
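A minimal sketch of what that looks like from the workload side, assuming the default longhorn StorageClass; Longhorn serves RWX volumes through its share-manager, so that feature needs to be available in the installed version.

```bash
# A ReadWriteMany claim backed by Longhorn; multiple pods can then mount it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
EOF
```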
-
Diskomator – NVMe-TCP at your fingertips
I'm looking forward to Longhorn[1] taking advantage of this technology.
[1]: https://github.com/longhorn/longhorn
-
K3s – Lightweight Kubernetes
I've been running a 3-node k3s cluster (NUC-class machines, actually Ryzen devices) on SUSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions about which parts of k8s to trim down and which networking / LB / ingress components to use.
The option to use sqlite in place of etcd for a single-node setup makes it super interesting for even lighter-weight homelab container environments.
I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster.
If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.
I'd love to compare it against Talos https://www.talos.dev/ but their lack of support for a persistent storage partition (only separate storage device) really hurts most small home / office usage I'd want to try.
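For reference, the lightweight path described here is roughly a single-server k3s install (which defaults to the sqlite-backed datastore) plus Longhorn from its Helm chart; versions and prerequisites such as open-iscsi should be checked against the Longhorn docs.

```bash
# Single-server k3s uses the embedded sqlite-backed datastore by default;
# multi-server HA switches to embedded etcd (e.g. via --cluster-init).
curl -sfL https://get.k3s.io | sh -

# Longhorn on top, from its published chart (verify the version and that each
# node meets the prerequisites, e.g. open-iscsi).
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace
```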
-
Difference between snapshot-cleanup and snapshot-delete in Longhorn recurring job?
Hi, I was wondering the same. Found more information in this document: https://github.com/longhorn/longhorn/blob/v1.5.x/enhancements/20230103-recurring-snapshot-cleanup.md
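Reading that enhancement, the difference appears to be that snapshot-cleanup only removes expired system snapshots, while snapshot-delete also prunes user-created snapshots beyond the retain count. A sketch of a recurring job using the cleanup task; field names follow the longhorn.io/v1beta2 RecurringJob CRD and should be verified against your Longhorn version.

```bash
# Field names per the longhorn.io/v1beta2 RecurringJob CRD; the per-task meaning
# of retain should be double-checked against the linked enhancement doc.
kubectl apply -f - <<'EOF'
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-snapshot-cleanup
  namespace: longhorn-system
spec:
  task: snapshot-cleanup
  cron: "0 3 * * *"
  retain: 1
  concurrency: 2
  groups: ["default"]
EOF
```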
- The Next Gen Database Servers Powering Let's Encrypt (2021)
-
Ask HN: Any of you run Kubernetes clusters in-house?
Been running k3s for personal projects etc for some time now on a cluster of Raspberry Pis. Why Pis? They were cheap at the time and I wanted to play with ARM. I don't think I would suggest them right now; NUCs will be much better value for money.
Some notes:
Using helm and helmfile https://github.com/helmfile/helmfile for deployments. Seems to work pretty nicely and is pretty flexible.
As I'm using a consumer internet provider, ingress is done through Cloudflare tunnels https://github.com/cloudflare/cloudflare-ingress-controller so I don't have to deal with IP changes or expose ports.
Persistent volumes were my main issue when previously attempting this, and what changed everything for me was Longhorn: https://longhorn.io Make sure to back up your volumes.
Really hyped for https://docs.computeblade.com/ xD
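A hypothetical helmfile.yaml along those lines (Longhorn via Helm, with a backup target configured as the comment urges); the repo URL, chart values and backup target are illustrative only.

```bash
# Illustrative only: the backupTarget value and chart settings must match your
# actual backup storage and Longhorn chart version.
cat > helmfile.yaml <<'EOF'
repositories:
  - name: longhorn
    url: https://charts.longhorn.io

releases:
  - name: longhorn
    namespace: longhorn-system
    createNamespace: true
    chart: longhorn/longhorn
    values:
      - defaultSettings:
          backupTarget: "nfs://backup-host:/volume/longhorn-backups"
EOF
helmfile apply
```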
What are some alternatives?
postgres-operator - Postgres operator for Kubernetes
rook - Storage Orchestration for Kubernetes
certgen - A dead simple tool to generate self signed certificates for MinIO TLS deployments
nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.
tofu-controller - A GitOps OpenTofu and Terraform controller for Flux
zfs-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that is integrated with a backend ZFS data storage stack.