nfs-subdir-external-provisioner
flux2
| | nfs-subdir-external-provisioner | flux2 |
|---|---|---|
| Mentions | 48 | 83 |
| Stars | 2,364 | 5,927 |
| Growth | 4.3% | 3.1% |
| Activity | 4.2 | 9.2 |
| Latest commit | 20 days ago | about 23 hours ago |
| Language | Shell | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nfs-subdir-external-provisioner
-
Investigating a failed VolumeSnapshot with NFS on Kubernetes
Using nfs-subdir-external-provisioner instead of csi-driver-nfs
-
Database corruption
I am trying to run sonarr inside my k3s cluster. Since I have multiple nodes, in order to keep data persistent I have been using a NAS and the Kubernetes NFS external provisioner as my Storage Class.
-
Utilizing traditional storage in a modern way
There's this, if you want your nfs storage available to pods as PVCs, with some limitations: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
-
Help me What to Choose?
NFS Provisioner
- [GUIDE] How to deploy the Servarr stack on Kubernetes with Terraform!
-
Longhorn alternatives
Depends on how much resiliency you need. Something like https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner works well for a lab or non-prod cluster. You could even use something like this in prod if you have access to highly reliable NFS mounts.
-
Recommendations for k8s storage solution
I first installed an NFS server via this helm chart: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner Eventually I deployed Longhorn because I needed expandable volumes, which the first repo doesn't support. I guess for best performance you should go for a ceph cluster, but I'm not an expert.
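A minimal sketch of installing the provisioner via its Helm chart, as the comment above describes. The NFS server address and export path below are placeholders you would replace with your NAS details; the `nfs-client` StorageClass name is the chart's default.

```shell
# Chart values for nfs-subdir-external-provisioner (placeholder server/path).
cat > nfs-values.yaml <<'EOF'
nfs:
  server: 192.168.1.10   # placeholder: your NAS address
  path: /srv/nfs         # placeholder: your exported directory
storageClass:
  name: nfs-client
EOF

# Then (requires helm and cluster access):
#   helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
#   helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f nfs-values.yaml
```

Note that, as mentioned above, volumes provisioned this way cannot be expanded; Longhorn or Ceph are options if you need that.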
-
Move to K8s for hosting at home?
I used the NFS provisioner for persistent volumes until I got the Ceph side up and running. I created a share on my NAS specifically for k8s. It worked very well and had the bonus of being just a regular file system that you could browse/edit easily (just place files in or edit config). I would agree with not moving plex into k8s. I right now just have a barebones 1 control 2 worker setup using k3s.
-
K8s - Self hosted PaaS?
However, is it too difficult to create new pods/deployments etc on your own? I find it super easy to just create a PVC (via https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner ) and create a MySQL pod in a new namespace for every micro service I create.
-
Unsure how NFS Persistent Volumes work, please help!
This is what you need https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner Point it to a folder and it will create subfolders for each PVC.
flux2
-
Self-service infrastructure as code
Given the team had already adopted GitOps and were familiar with deployments powered by Helm Releases and Flux, we wanted to move the provisioning of the infrastructure to be part of the same process of creating the service and its continuous deployment.
-
Weaveworks Is Shutting Down
Your GitHub action can trigger a helm chart, or series thereof, or other infra tools. Declarative specifications, triggered procedurally with the context of the branch’s latest build. We use this pattern quite extensively for preview app workflows.
As of a year ago this is possible in a fully declarative way with Flux 2, but there are a lot more moving parts and security footguns - and the idea that the maintenance of this project has lost one of its primary sponsors is worrying at best.
https://github.com/fluxcd/flux2/discussions/831
https://blog.kluctl.io/introducing-the-template-controller-a...
-
10 Ways for Kubernetes Declarative Configuration Management
FluxCD - FluxCD is another popular GitOps tool that allows developers to use a Git repository as the sole source of configuration. Flux automatically ensures that the state of the Kubernetes cluster is synchronized with the configuration in the Git repository. It supports automatic updates, meaning Flux can monitor Docker image repositories for new images and push updates to the cluster.
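The image-update behavior described above is driven by Flux's image automation objects. A hedged sketch, assuming the image-reflector controllers are installed and using a placeholder registry image; the `ImageRepository` scans the registry on an interval and the `ImagePolicy` selects the newest tag matching a semver range.

```shell
# Flux image-automation manifests (image and names are placeholders).
cat > image-automation.yaml <<'EOF'
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: app
  namespace: flux-system
spec:
  image: ghcr.io/example/app   # placeholder image
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: app
  policy:
    semver:
      range: ">=1.0.0"
EOF

# kubectl apply -f image-automation.yaml   (requires a cluster with Flux's image controllers)
```

An `ImageUpdateAutomation` object (not shown) is what actually commits the new tags back to the Git repository.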
-
SmartCash Project - GitOps with FluxCD
#!/bin/bash
aws eks update-kubeconfig --name $CLUSTER_NAME --region $AWS_REGION
flux_installed=$(kubectl api-resources | grep flux)
if [ -z "$flux_installed" ]; then
  echo "flux is not installed"
  curl -s https://fluxcd.io/install.sh | sudo bash
  flux bootstrap github \
    --owner=$GH_USER_NAME \
    --repository=$FLUX_REPO_NAME \
    --path="clusters/$ENVIRONMENT/$CLUSTER_NAME/bootstrap" \
    --branch=main \
    --personal
else
  echo "flux is installed"
fi
-
Best Kubernetes DevOps Tools: A Comprehensive Guide
Flux CD enables continuous deployment to Kubernetes through GitOps: it manages Kubernetes manifests as code, syncs changes from a Git repository to clusters, and automates checks, deployments, and updates within them.
- Flux – a tool for keeping K8s clusters in sync with sources of configuration
-
Git going with GitOps on AKS: A Step-by-Step Guide using FluxCD AKS Extension
FluxCD is a GitOps tool developed by Weaveworks that allows you to implement continuous and progressive delivery of your applications on Kubernetes. It is a CNCF graduated project that offers a set of controllers that monitor Git repositories and reconcile the cluster's actual state with the desired state defined by manifests committed in the repo.
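The source-and-reconcile loop described above is expressed with two objects: a `GitRepository` that fetches the repo and a `Kustomization` that applies a path from it. A minimal sketch with placeholder repo URL and path:

```shell
# Flux sync manifests (URL, names, and path are placeholders).
cat > sync.yaml <<'EOF'
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-config   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: "./deploy"
  prune: true   # delete cluster objects removed from Git
EOF

# kubectl apply -f sync.yaml   (requires a cluster with Flux installed)
```

With `prune: true`, removing a manifest from Git also removes the corresponding object from the cluster, keeping Git the single source of truth.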
-
Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cillium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
-
Reducing Cloud Costs on Kubernetes Dev Envs
Instead, we will create a single long-lived cluster, and deploy our application in different namespaces. There are a bunch of ways to do that - see ArgoCD, Flux, custom internal tooling, or other solutions (we use our own product). That way, we:
-
What is the proper, kubernetes native way of working with multiple clusters for DR, HA?
One is to make sure the configuration in both clusters is the same, and for that there are many tools, like fluxcd or projectsveltos.
What are some alternatives?
csi-driver-nfs - This driver allows Kubernetes to access NFS server on Linux node.
helmfile - Deploy Kubernetes Helm Charts
longhorn - Cloud-Native distributed storage built on and for Kubernetes
argo-cd - Declarative Continuous Deployment for Kubernetes
nfs-ganesha-server-and-external-provisioner - NFS Ganesha Server and Volume Provisioner.
spinnaker - Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
csi-s3 - A Container Storage Interface for S3
terraform-provider-flux - Terraform provider for bootstrapping Flux
csi-driver-smb - This driver allows Kubernetes to access SMB Server on both Linux and Windows nodes.
skaffold - Easy and Repeatable Kubernetes Development
kadalu - A lightweight Persistent storage solution for Kubernetes / OpenShift / Nomad using GlusterFS in background. More information at https://kadalu.tech
werf - A solution for implementing efficient and consistent software delivery to Kubernetes facilitating best practices.