longhorn vs zfs-localpv

| | longhorn | zfs-localpv |
| --- | --- | --- |
| Mentions | 84 | 12 |
| Stars | 6,743 | 486 |
| Growth | 1.7% | 1.9% |
| Activity | 9.5 | 8.5 |
| Latest Commit | 6 days ago | 4 days ago |
| Language | Shell | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
longhorn
- Longhorn: Cloud native distributed block storage for Kubernetes
-
Kubernetes homelab - Learning by doing, Part 4: Storage
Distributed storage systems enable us to store data that can be made available clusterwide. Excellent! But dynamically apportioning storage across a multi-node cluster is a very complex job. So this is another area where Kubernetes typically outsources the job to plugins (e.g. Cloud providers like Azure or AWS, or systems like Rook or Longhorn).
-
Setting Up The Home Lab: Setting up Kubernetes Using Ansible
Since I want to play with Kubernetes anyway, I'll set up a k8s cluster. It will have 2 master and 4 worker nodes. Each VM will have 4 cores, 8 GB of RAM, a 32 GB root virtual disk, and a 250 GB data virtual disk for Longhorn volumes. I'll create an ansible user via cloud-init and allow access via SSH.
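The ansible-user-via-cloud-init step above can be sketched as a cloud-init `users` snippet. This is an illustrative fragment, not from the original post; the key material, group, and sudo policy are placeholders:

```yaml
# cloud-init user-data (sketch): create an "ansible" user reachable over SSH
users:
  - name: ansible
    groups: [sudo]
    shell: /bin/bash
    sudo: "ALL=(ALL) NOPASSWD:ALL"        # passwordless sudo for automation
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... ansible@controller   # placeholder public key
```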
-
My First Kubernetes: k3s 'cluster' on 3 Orange Pi Zero 3's
That's a sweet setup.
Have you come across Longhorn[0]?
I wanted to have a look at that for storage when I was using Pis as it theoretically should be lighter-weight than Ceph, who knows. Didn't get around to it though.
[0] https://longhorn.io/
-
Clusters Are Cattle Until You Deploy Ingress
Dan: Argo CD is the first tool I install. For AWS, I will add Karpenter to manage costs. I will also use Longhorn for on-prem storage solutions, though I'd need ingress. Depending on the situation, I will install Argo CD first and then one of those other two.
-
Why Kubernetes Was a Mistake for My SaaS Business
I overcame this issue with Longhorn, which is native distributed block storage for Kubernetes and supports RWX (ReadWriteMany) by default, not only RWO (ReadWriteOnce).
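The RWX-vs-RWO point above can be shown with a PersistentVolumeClaim. This is a hedged sketch, not the poster's actual manifest; the claim name and size are made up, and Longhorn serves RWX volumes through an NFS share-manager pod behind the scenes:

```yaml
# PVC requesting a ReadWriteMany (RWX) volume from Longhorn
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany            # RWX: mountable read-write from many nodes
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi            # illustrative size
```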
-
Diskomator – NVMe-TCP at your fingertips
I'm looking forward to Longhorn[1] taking advantage of this technology.
[1]: https://github.com/longhorn/longhorn
-
K3s – Lightweight Kubernetes
I've been using a 3 nuc (actually Ryzen devices) k3s on SuSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions on which parts of k8s to trim down and which Networking / LB / Ingress to use.
The option to use sqlite in place of etcd on an even lighter single node setup makes it super interesting for even lighter weight homelab container environment setups.
I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster.
If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.
I'd love to compare it against Talos https://www.talos.dev/ but their lack of support for a persistent storage partition (only separate storage device) really hurts most small home / office usage I'd want to try.
-
Difference between snapshot-cleanup and snapshot-delete in Longhorn recurring job?
Hi, I was wondering the same. I found more information in this document: https://github.com/longhorn/longhorn/blob/v1.5.x/enhancements/20230103-recurring-snapshot-cleanup.md
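Per the enhancement doc linked above, `snapshot-cleanup` removes expired and system-generated snapshots, while `snapshot-delete` also deletes user-created snapshots beyond the retain count. A hedged RecurringJob sketch (name, schedule, and concurrency are illustrative):

```yaml
# Longhorn RecurringJob running the cleanup task nightly
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: cleanup-snapshots      # hypothetical name
  namespace: longhorn-system
spec:
  cron: "0 3 * * *"            # daily at 03:00
  task: snapshot-cleanup       # swap for snapshot-delete to prune user snapshots too
  groups:
    - default                  # applies to volumes in the default group
  retain: 0                    # only consulted by tasks that keep a history
  concurrency: 2               # volumes processed in parallel
```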
- The Next Gen Database Servers Powering Let's Encrypt (2021)
zfs-localpv
-
ZFS 2.2.0 (RC): Block Cloning merged
I use it in Kubernetes via https://github.com/openebs/zfs-localpv
The PersistentVolume API is a nice way to divvy up a shared resource across different teams, and using ZFS for that gives us the snapshotting, deduplication, and compression for free. For our workloads, it benchmarked faster than XFS so it was a no-brainer.
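The snapshotting, deduplication, and compression mentioned above are switched on through StorageClass parameters in zfs-localpv. A minimal sketch assuming a pre-created ZFS pool; the class name and pool name are placeholders, and the provisioner name follows the project README:

```yaml
# StorageClass backed by the openebs/zfs-localpv CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-compressed         # hypothetical name
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"       # existing ZFS pool on the node (placeholder)
  fstype: "zfs"                # provision a ZFS dataset rather than a zvol
  compression: "on"            # ZFS-level compression, free to the workload
  dedup: "off"                 # deduplication is available but costs RAM
  recordsize: "128k"           # tune per workload
```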
- openebs/zfs-localpv: CSI Driver for dynamic provisioning of Persistent Local Volumes for Kubernetes using ZFS.
-
OpenEBS on MicroK8S on Hetzner
Over the last few months I have experimented more and more with the OpenEBS solutions that fit a small Kubernetes cluster, using MicroK8S and Hetzner Cloud for real-world experience.
- OpenEBS? Or equivalent?
-
Network Storage on On-Prem Barebones Machine
I would investigate https://openebs.io/ https://portworx.com/ https://longhorn.io/. If you are forced to, you can mount iSCSI on the kubelet and feed it to one of those solutions. Keep in mind that most of the big players buy some sort of managed solution at which you can point a CSI driver like Trident https://netapp-trident.readthedocs.io
-
Ask HN: What are some fun projects to run on a home K8s cluster?
What are some cool projects to self-host on a home Raspberry Pi (64-bit) Kubernetes cluster (Helm charts)? arm64 support is a must; a lot of projects only build amd64 Docker containers, which don't run on my cluster.
I currently run:
- openebs (provides an abstraction for using local k8s worker disks as PVC mounts when running on-prem) -- https://openebs.io/
-
Finally got around to doing that Ceph on ZFS experiment
I didn't set anything actually -- I need to look into whether OpenEBS ZFS LocalPV can facilitate passing ZVOL options (I don't think it can just yet). The only tuning I did on the storage class was the usual ZFS-level options.
-
My self-hosting infrastructure, fully automated
What do you use to provision Kubernetes persistent volumes on bare metal? I'm looking at open-ebs (https://openebs.io/).
Also, when you bump the image tag in a git commit for a given helm chart, how does that get deployed? Is it automatic, or do you manually run helm upgrade commands?
- Jinja2 not formatting my text correctly. Any advice?
-
Building a "complete" cluster locally
Ideas from my kubernetes experience:
- Cert-Manager is very popular and almost a must-have if you terminate SSL inside the cluster
- Backups using velero
- A dashboard/UI is actually very helpful to quickly browse resources; client tools like k9s are fine too
- Secret management: Bitnami Sealed Secrets is the second big project in that space
- I would add Loki to aggregate logs
- Never heard of ory. Usually I see [dex](https://dexidp.io/) or keycloak used for authentication
- I like to run OpenEBS as in-cluster storage
- Istio isn't compatible with the upcoming Service Mesh Interface (I think), so the trend seems to go toward Linkerd
- Some operator to deploy your favorite database is also a nice learning exercise
What are some alternatives?
rook - Storage Orchestration for Kubernetes
democratic-csi - csi storage for container orchestration systems
nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.
lvm-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that is integrated with a backend LVM2 data storage stack.
k3sup - bootstrap K3s over SSH in < 60s
Mayastor - Dynamically provision Stateful Persistent Replicated Cluster-wide Fabric Volumes & Filesystems for Kubernetes that is provisioned from an optimized NVME SPDK backend data storage stack.