Is there any way to exclude new nodes to a K3S cluster from being longhorn storage nodes?
1 project | reddit.com/r/rancher | 3 Dec 2021
You can use node selector rules to force Longhorn to use some nodes for storage. You can also add taints to create dedicated storage nodes. Please see the GitHub link listed below. https://github.com/longhorn/longhorn/issues/1633
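A sketch of that approach, assuming the Longhorn setting `create-default-disk-on-labeled-nodes` is enabled and `storage-1` is a placeholder node name:

```shell
# Only create Longhorn's default disk on nodes carrying this label
# (requires the create-default-disk-on-labeled-nodes setting to be enabled).
kubectl label node storage-1 node.longhorn.io/create-default-disk=true

# Keep ordinary workloads off the dedicated storage node; Longhorn's own
# components then need a matching entry in its taint-toleration setting.
kubectl taint node storage-1 storage=longhorn:NoSchedule
```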
Separating storage and worker nodes with Longhorn
1 project | reddit.com/r/rancher | 25 Nov 2021
In the Longhorn GUI the volume shows as Detached. It attaches if I do it manually, but that has no impact on the Pod. I've seen people having similar issues in 2614 and have tried create-default-disk-on-labeled-nodes, but that does not seem to work the way I would like. If I deploy without a nodeSelector and disable scheduling on the worker nodes, it works fine.
Building a "complete" cluster locally
24 projects | reddit.com/r/kubernetes | 31 Oct 2021
rook/longhorn for distributed storage
Dynamic storage class in K3s on docker
2 projects | reddit.com/r/kubernetes | 15 Oct 2021
Longhorn is exactly what you're looking for. It's easy to set up and backs up to NFS.
Rancher Desktop, a Docker Desktop Replacement
14 projects | news.ycombinator.com | 11 Oct 2021
More than performance, I am worried about security. They don't seem to have considered it at all when building it: https://github.com/longhorn/longhorn/issues/1983
Probably not a good recommendation until this hole is plugged.
14 projects | news.ycombinator.com | 11 Oct 2021
They've also got Longhorn, a distributed container-attached storage solution that's very simple to understand and easy to deploy. Performance is another thing but that's the same with all of the general networked storage solutions (Ceph included).
Rancher's got a well deserved good impression in my mind, though early on I avoided it since it seemed like they were building a walled garden.
Should We Replace Docker Desktop With Rancher Desktop?
6 projects | reddit.com/r/kubernetes | 11 Oct 2021
All of their projects have been fully open, monetized with enterprise support. They straight up fully donated their storage project to the CNCF. They've been good stewards of all the tools so far, with ones like k3s greatly benefitting the community, and they've put a decent amount of effort into supporting GitHub issues, even non-paid ones.
ZFS and Ceph
2 projects | reddit.com/r/zfs | 22 Sep 2021
Also just in case the above wasn't enough I'm probably actually going to run Longhorn because it's just SO much easier than Ceph to manage and gives me the availability for a ~40% perf loss AFAIK.
Those running Kubernetes, what is in your core stack? And what "gem" can you not live without?
8 projects | reddit.com/r/homelab | 19 Sep 2021
Longhorn for Kubernetes storage. It lets me automatically provision PersistentVolumes and snapshot/back them up to S3. I've read that other things such as Ceph and Rook are more stable and I've definitely been bit by some non-critical bugs, but the project is very active and bugs are squashed quickly (though they don't release very frequently). This is the gem I can't live without, being able to snapshot, backup and restore things like my Plex volume has saved me at least a dozen times already.
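For reference, an S3 backup target like the one described is configured through Longhorn's settings; a minimal sketch as Helm chart values, where the bucket, region, and secret name are all placeholders:

```yaml
# values.yaml for the longhorn/longhorn Helm chart (placeholder values)
defaultSettings:
  backupTarget: s3://my-backup-bucket@us-east-1/    # bucket and region are examples
  backupTargetCredentialSecret: longhorn-s3-secret  # Secret holding the S3 credentials
```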
Selfhosted storage recommendation
1 project | reddit.com/r/selfhosted | 18 Sep 2021
Another consideration is using something like https://longhorn.io/, it's a bit like ceph, and supports mirroring over two nodes
Moving to Kubernetes
3 projects | reddit.com/r/kubernetes | 13 Jan 2022
Are you working with on-premise compute resources or public-cloud ones? In the latter case, I'd point out the need to choose and deploy an Ingress Controller behind which to deploy the PHP-powered sites (to minimize the need for public IPs). In the former, you can easily go with MetalLB https://metallb.universe.tf/ to expose each "site" directly, but an Ingress Controller like Kong, Traefik v2 or others could still be handy to accommodate specific needs (OAuth/JWT to access one or more of them, for example). The PHP-enabled website could be deployed with a "multi-container" Pod/Deployment setup as described here, for example: https://matthewpalmer.net/kubernetes-app-developer/articles/php-fpm-nginx-kubernetes.html
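At the time of this thread MetalLB was configured through a ConfigMap; a minimal layer-2 sketch, where the address range is a placeholder for unused IPs on your LAN:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range; must be free on your network
```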
NooB here again, with an ingress question
1 project | reddit.com/r/kubernetes | 12 Jan 2022
Oh, I see. Then you probably need to deploy something like MetalLB, which satisfies the request for a load balancer.
how do I expose a mosquitto broker using the same ports the container uses ?
2 projects | reddit.com/r/kubernetes | 5 Jan 2022
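With a LoadBalancer implementation such as MetalLB in place, a Service can expose the broker on the container's own MQTT port; a sketch assuming the pods carry a hypothetical `app: mosquitto` label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  type: LoadBalancer
  selector:
    app: mosquitto    # hypothetical pod label
  ports:
  - name: mqtt
    port: 1883        # exposed port matches mosquitto's standard MQTT port
    targetPort: 1883
```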
OpenELB Joins the CNCF Sandbox, Making Service Exposure in Private Environments Easier
1 project | reddit.com/r/kubernetes | 20 Dec 2021
This is a long-standing request (replace the MetalLB ConfigMap with Kubernetes custom resources); it was suggested by the original maintainer pretty much at the beginning of the project.
libvirt-k8s-provisioner - From 0 to a fully working k8s cluster up to 1.23 in less than 8 minutes
3 projects | reddit.com/r/kubernetes | 14 Dec 2021
MetalLB to manage bare-metal LoadBalancer services - WIP - only the L2 configuration can be set up via the playbook.
eBPF will help solve service mesh by getting rid of sidecars
4 projects | news.ycombinator.com | 9 Dec 2021
We reused the LB as much as possible to avoid the BGP thing. There's a thing called MetalLB designed around that though.
POD External IP
1 project | reddit.com/r/kubernetes | 9 Dec 2021
That's neat. Looks like metallb uses it too according to go.mod
Port management in your local Kubernetes cluster
2 projects | dev.to | 28 Nov 2021
My latest attempt was MetalLB. Even though I didn't manage to make it work, it bound port 8080 on my machine, so none of my other regular Spring demos could run.
Pi k8s! This is my pi4-8gb powered hosted platform. 8 pi4s for kubeadm k8s cluster, and one for a not so 'nas' share. I use gitlab runners with helmfile to manage my applications. Running over a year and finally passed the CKA with most of my practice on this plus work clusters. AMA welcome!
12 projects | reddit.com/r/selfhosted | 24 Oct 2021
I use MetalLB as my load balancer because it is software-based and any of the nodes can respond to the IP related to the deployment. https://metallb.universe.tf/
Building new LAB, "need" to eliminate single point of failure
1 project | reddit.com/r/homelab | 22 Oct 2021
If you just need to balance k3s services like the control plane or ingresses, you can use kube-vip with MetalLB.
What are some alternatives?
kube-vip - Kubernetes Control Plane Virtual IP and Load-Balancer
rook - Storage Orchestration for Kubernetes
calico - Cloud native networking and network security
ingress-nginx - NGINX Ingress Controller for Kubernetes
zfs-localpv - CSI Driver for dynamic provisioning of Persistent Local Volumes for Kubernetes using ZFS.
external-dns - Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
kube-plex - Scalable Plex Media Server on Kubernetes -- dispatch transcode jobs as pods on your cluster!
cert-manager - Automatically provision and manage TLS certificates in Kubernetes
k3sup - bootstrap Kubernetes with k3s over SSH < 1 min 🚀
k3s - Lightweight Kubernetes
PowerDNS - PowerDNS Authoritative, PowerDNS Recursor, dnsdist