local-path-provisioner vs pluto
| | local-path-provisioner | pluto |
|---|---|---|
| Mentions | 30 | 18 |
| Stars | 2,003 | 1,965 |
| Growth | 1.8% | 0.9% |
| Activity | 6.1 | 5.8 |
| Latest commit | 2 days ago | about 1 month ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
local-path-provisioner
-
Deploy Ghost with MySQL DB replication using helm chart
Deploy the local-path-provisioner storage class, but note that it does not support ReadWriteMany; for high availability of your Kubernetes cluster it is better to use Longhorn.
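For context, here is a minimal sketch of a claim against that storage class (the PVC name and size are illustrative); because each volume is a directory on a single node, ReadWriteOnce is the only access mode that works.

```sh
# Minimal PVC against the local-path StorageClass (illustrative
# name and size). local-path backs each volume with a directory on
# one node, so only ReadWriteOnce is supported; a ReadWriteMany
# claim would never be provisioned.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
```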
-
lvp: Local Volume CSI Provisioner -- Dynamic PV Provisioning for your Home Cluster
I use this one. I'm waiting for the day it's combined with syncthing to sync across all nodes. https://github.com/rancher/local-path-provisioner
-
issues with pv retaining data on local-path SC
So I have this single-node k3s cluster. k3s uses local-path (https://github.com/rancher/local-path-provisioner) as the default SC, which allows one to create dynamic volumes using a node's local storage.
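A quick way to confirm that on a k3s box, assuming kubectl access:

```sh
# k3s marks local-path as the default StorageClass, so a PVC that
# omits storageClassName is dynamically provisioned from the
# node's local disk. The default class is flagged in the NAME column:
kubectl get storageclass
# NAME                   PROVISIONER             ...
# local-path (default)   rancher.io/local-path   ...
```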
-
How to format drives for local persistent volumes
Just create a single partition and format it with whatever filesystem you like, then use Rancher's local-path-provisioner, which will create a folder per PV (k3s has this integrated by default).
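The on-disk layout is easy to inspect; the paths below are the upstream and k3s defaults, and the exact folder naming varies by version:

```sh
# Each dynamically provisioned PV is just a folder under the
# provisioner's data path: /opt/local-path-provisioner upstream,
# /var/lib/rancher/k3s/storage on k3s. Recent versions name the
# folders pvc-<uid>_<namespace>_<claim>.
ls /var/lib/rancher/k3s/storage
```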
-
Persisting data in a dynamic volume?
Tinkering locally with local path provisioner (https://github.com/rancher/local-path-provisioner), I find that I can delete and re-create the pod, and the data persists on disk. However, if I delete and recreate the PVC, a new directory is created on disk.
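That is the default Delete reclaim policy at work. If the directory should outlive the claim, one option is a second StorageClass with reclaimPolicy: Retain; a sketch, with an illustrative class name (a retained PV has to be cleaned up or rebound by hand later):

```sh
# A copy of the local-path class that retains volumes: deleting
# the PVC leaves the PV and its backing directory in place
# instead of provisioning a fresh directory next time.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain   # illustrative name
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
```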
-
Issues with "victoria-metrics-k8s-stack", monitoring k8s targets
It is better to use https://github.com/rancher/local-path-provisioner (or similar) for this case, which will provision PVCs on local directories, because manually linking a PV to a PVC will not work.
-
single node k8s on nuc - homelab/prod - storage question
Since you only have one physical node anyway, I would just make the cluster a single-node cluster (1 VM) and use local storage on that VM. I’m biased though because this is what I do (I run K3s and use local path provisioner).
-
Using local disks for both K8s workloads, and exporting via SMB?
Rancher's Local Path Provisioner - From reading, it seems to just use HostPath or Local PVs under the hood, but adds dynamic provisioning.
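That reading is easy to check against a live volume; assuming a bound claim named demo-data (illustrative), the PV spec shows a plain hostPath source:

```sh
# Follow a PVC to its PV and dump the spec: for local-path the
# volume source is a hostPath pointing at the per-volume folder.
PV=$(kubectl get pvc demo-data -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o yaml
```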
-
Kubernetes: How to Persistent Storage
With any of those tools, you'd be implementing network storage on top of network storage. I would go with mounting a few volumes per node plus local storage like https://github.com/rancher/local-path-provisioner (a per-node path sketch follows below).
- There doesn't seem to be any good distributed block storage for Kubernetes
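If you mount dedicated volumes per node, local-path's config can point at them via nodePathMap; a sketch with illustrative node names and mount paths. In practice you'd edit the existing local-path-config ConfigMap (which also carries helper-pod settings) rather than replace it wholesale:

```sh
# Map provisioning paths per node (upstream manifests deploy the
# config as ConfigMap local-path-config in the local-path-storage
# namespace; node names and mount paths here are illustrative).
kubectl -n local-path-storage apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
data:
  config.json: |
    {
      "nodePathMap": [
        { "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/opt/local-path-provisioner"] },
        { "node": "node-a",
          "paths": ["/mnt/disk1", "/mnt/disk2"] }
      ]
    }
EOF
```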
pluto
-
Upgrading Hundreds of Kubernetes Clusters
We also leverage tools like Kubent, popeye, kdave, and Pluto to help us manage API deprecations (when Kubernetes deprecates features in updates) and ensure the overall health of our infrastructure.
- Updating from 1.25.15 to 1.26.10
-
How do you handle continuous k8s cluster version upgrades in your organization?
You have to constantly run tools like https://github.com/doitintl/kube-no-trouble / https://github.com/FairwindsOps/pluto.
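Both are single binaries that are easy to put on a schedule; a minimal sketch of the two checks, assuming kubeconfig access to the cluster:

```sh
# kube-no-trouble: scan resources in the current cluster context
# for deprecated APIs.
kubent
# Pluto: scan the manifests stored with in-cluster Helm releases.
pluto detect-helm
```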
- How do you guys monitor K8s core services new versions
-
eks cluster upgrade - Has anyone done an EKS cluster upgrade from 1.21 to 1.22? There are some API resource kinds that need to be changed, which requires changes in the manifest files. How do we identify the Helm charts that are using these resources? https://docs.aws.amazon.com/eks/lat
You might like https://github.com/FairwindsOps/pluto
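Pluto's Helm detection answers exactly that question; a sketch targeting the 1.22 removals (the target version here is illustrative):

```sh
# List in-cluster Helm releases whose manifests still use APIs
# removed in Kubernetes 1.22; wide output includes the release
# name, so findings map back to individual charts.
pluto detect-helm -o wide --target-versions k8s=v1.22.0
```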
- Kubernetes upgrade
-
Upgrading EKS from k8s version 1.21 to 1.24
Run Pluto against the old cluster to check for outdated APIs in your namespaces: https://github.com/FairwindsOps/pluto
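Beyond Helm releases, Pluto can also walk objects that are live in the cluster; a quick sketch, assuming kubeconfig access to the old cluster:

```sh
# Query the cluster's API resources directly for deprecated or
# removed apiVersions (complements the Helm release scan).
pluto detect-api-resources
```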
-
kubernetes provider resources v1 vs non-v1 is it just me or is this dumb?
I knew it was unsupported so about 6 months ago I had started an effort to switch to Kyverno, which is far better and actually supported. The version of Kyverno I was using had a v1beta1 AdmissionController. Fortunately that was in a helm chart so easily caught by pluto before my upgrade.
-
Helm chart - fluent-bit
If you're looking for API deprecations specifically, you can look into Pluto from Fairwinds.
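For a single chart such as fluent-bit, one approach is to render it locally and pipe the manifests through Pluto on stdin (chart and repo names are illustrative):

```sh
# Render the chart with Helm and feed the resulting manifests to
# Pluto via stdin ("-").
helm template fluent-bit fluent/fluent-bit | pluto detect -
```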
-
Updating EKS to 1.22: dealing with deprecated APIs on ALB Ingresses
You can use https://github.com/FairwindsOps/pluto to check for API deprecations before updating the cluster.
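The same check works on plain manifest files before an upgrade; the directory name is illustrative, and Pluto exits non-zero when it finds deprecated or removed APIs, which makes it easy to gate CI on:

```sh
# Scan a directory of YAML manifests for API versions that will
# be deprecated or removed; the non-zero exit code on findings
# can fail a CI job before the cluster is touched.
pluto detect-files -d ./manifests
```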
What are some alternatives?
sig-storage-local-static-provisioner - Static provisioner of local volumes
kube-no-trouble - Easily check your clusters for use of deprecated APIs
topolvm - Capacity-aware CSI plugin for Kubernetes
silver-surfer - Checks api-version compatibility of Kubernetes objects and provides a migration path to prepare them for cluster upgrades
csi-lib-utils - Common code for Kubernetes CSI sidecar containers (e.g. `external-attacher`, `external-provisioner`, etc.)
helm - The Kubernetes Package Manager
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
rbac-manager - A Kubernetes operator that simplifies the management of Role Bindings and Service Accounts.
nfs-ganesha-server-and-external-provisioner - NFS Ganesha Server and Volume Provisioner.
helmfile - Deploy Kubernetes Helm Charts
csi-driver-nfs - This driver allows Kubernetes to access an NFS server on a Linux node.
polaris - Validation of best practices in your Kubernetes clusters