nfs-ganesha-server-and-external-provisioner
argo-cd
| | nfs-ganesha-server-and-external-provisioner | argo-cd |
|---|---|---|
| Mentions | 5 | 72 |
| Stars | 397 | 16,143 |
| Growth | 1.3% | 1.4% |
| Activity | 3.1 | 9.9 |
| Latest Commit | 3 months ago | 6 days ago |
| Language | Shell | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nfs-ganesha-server-and-external-provisioner
- Alternative to Longhorn RWX?
-
How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
Now, for the purposes of this article, in case you don't have an NFS server available, we will use a simple NFS Server Provisioner, for demonstration only. As mentioned before, using a managed solution from a cloud provider or a properly configured HA NFS server in your own infrastructure is highly recommended. The solution we'll install is not the most up to date, but it works for example purposes. We will follow the Quickstart found in the repo, combined with this repo, which makes some small tweaks so it works with K3d; the steps are summarized in the following commands, run from the helm folder:
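A minimal sketch of such an install, using the Helm chart published in the kubernetes-sigs repo (the release name and storage size here are arbitrary choices, not values from the article):

```shell
# Add the Helm repo published by the kubernetes-sigs project
helm repo add nfs-ganesha-server-and-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
helm repo update

# Install the in-cluster NFS server + provisioner (demo only, not HA)
helm install nfs-server-provisioner \
  nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.size=10Gi
```

With persistence enabled, the provisioner itself claims a backing volume from the cluster's default StorageClass, so it survives pod restarts.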
-
How to scale nginx pod when pod is mounting a volume
Some people just set up an NFS share. There's one provisioner that uses an existing NFS server, and another that also provides NFS itself. This becomes a single point of failure, though.
-
NFS volume mount on Kubernetes
Conceptually, to attach your storage to your pod you go through two objects: the PVC, which binds to the PV, which itself must have physical backing. That could be the NFS mount on your nodes via hostPath, which is generally ugly; it is better to point your PV at the NFS server directly. Maybe I'm wrong, but it seems clear to me. However, if you are asking this kind of question, you might be missing two or three things about K8s. I advise you to read the documentation about PV, PVC, SC, etc. Also, NFS is not POSIX and is slow by nature, which can cause inconsistencies in your data, but that is an extreme case. If you want to automate this, you can use: https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner. This may also help: https://www.linuxtechi.com/configure-nfs-persistent-volume-kubernetes/
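A minimal sketch of pointing a PV directly at an NFS server, as suggested above (the server address, export path, and sizes are hypothetical placeholders):

```shell
# Create a PV backed by an NFS export, plus a PVC that binds to it.
# 10.0.0.5 and /exports/data are made-up example values.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
EOF
```

The empty `storageClassName` on the PVC keeps the default StorageClass from intercepting the claim, so it binds to the hand-made PV.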
-
NFS server provisioner deprecated - what's the replacement?
I found something similar that seems to be a continuation of the nfs-server-provisioner: https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner
argo-cd
-
ArgoCD Deployment on RKE2 with Cilium Gateway API
The code above will create the argocd Kubernetes namespace and deploy the latest stable manifest. If you would like to install a specific manifest, have a look here.
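For reference, the "latest stable" install typically looks like the first pair of commands below; pinning a tag in the manifest URL installs a specific release instead (v2.8.0 is just an example tag, not a recommendation):

```shell
# Install the latest stable Argo CD manifest
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Or pin a specific release tag (v2.8.0 used here as an example)
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.8.0/manifests/install.yaml
```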
-
5-Step Approach: Projectsveltos for Kubernetes add-on deployment and management on RKE2
In this blog post, we will demonstrate how easy and fast it is to deploy Sveltos on an RKE2 cluster with the help of ArgoCD, register two RKE2 Cluster API (CAPI) clusters, and create a ClusterProfile to deploy the Prometheus and Grafana Helm charts down to the managed CAPI clusters.
-
14 DevOps and SRE Tools for 2024: Your Ultimate Guide to Stay Ahead
Argo CD
-
Implementing GitOps with Argo CD, GitHub, and Azure Kubernetes Service
$version = (Invoke-RestMethod https://api.github.com/repos/argoproj/argo-cd/releases/latest).tag_name
Invoke-WebRequest -Uri "https://github.com/argoproj/argo-cd/releases/download/$version/argocd-windows-amd64.exe" -OutFile "argocd.exe"
-
Verto.sh: A New Hub Connecting Beginners with Open-Source Projects
This is cool - I can think of some projects that are amazing as first contributors, and others I can think of that are terrible.
One thing I think the tool doesn't address is why someone should contribute to a particular project. Having stars is interesting, and a proxy for at least historical activity, but also kind of useless here - take argoproj/argo-cd [1] as an example - 14.5k stars, with a backlog of 2.7k issues and an issue tracker that's a real mess.
Either way, I think this tool is neat for trying to gain some experience in a project purely based on language.
[1] https://github.com/argoproj/argo-cd/issues?q=is%3Aissue+is%3...
-
Sharding the Clusters across Argo CD Application Controller Replicas
In our case, our team went ahead with Solution B, as that was the only solution present when the issue occurred. However, with the release of Argo CD 2.8.0 (released on August 7, 2023), things have changed - for the better :). Now, there are two ways to handle the sharding issue with the Argo CD Application Controller:
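One of those mechanisms is environment-based sharding on the application controller. A sketch, assuming the default argocd namespace (the replica count of 3 is arbitrary; round-robin is the sharding algorithm added alongside the legacy one in 2.8):

```shell
# Scale the application controller StatefulSet to 3 shards
kubectl -n argocd scale statefulset/argocd-application-controller --replicas 3

# Tell the controller how many shards exist and select the
# round-robin sharding algorithm introduced in Argo CD 2.8
kubectl -n argocd set env statefulset/argocd-application-controller \
  ARGOCD_CONTROLLER_REPLICAS=3 \
  ARGOCD_CONTROLLER_SHARDING_ALGORITHM=round-robin
```

Each cluster is then assigned to one controller replica, so reconciliation load is spread instead of concentrated on a single pod.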
-
Real Time DevOps Project | Deploy to Kubernetes Using Jenkins | End to End DevOps Project | CICD
$ kubectl create namespace argocd
// Next, let's apply the yaml configuration files for ArgoCD
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
// Now we can view the pods created in the ArgoCD namespace.
$ kubectl get pods -n argocd
// To interact with the API Server we need to deploy the CLI:
$ curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
$ chmod +x /usr/local/bin/argocd
// Expose argocd-server
$ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
// Wait about 2 minutes for the LoadBalancer creation
$ kubectl get svc -n argocd
// Get password and decode it.
$ kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
$ echo WXVpLUg2LWxoWjRkSHFmSA== | base64 --decode
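The last step simply base64-decodes the admin password taken from the secret's `.data` field; the decode on its own (using the encoded string from the quote above) looks like this:

```shell
# Decode the base64-encoded initial admin password
echo WXVpLUg2LWxoWjRkSHFmSA== | base64 --decode
# → Yui-H6-lhZ4dHqfH
```

The value stored in `argocd-initial-admin-secret` is base64-encoded like every Kubernetes secret, which is why the raw `kubectl get secret -o yaml` output is not usable directly as a password.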
-
Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
-
FluxCD vs Weaveworks
lol! Wham! Third choice! https://github.com/argoproj/argo-cd
-
Helm Template Command
If you mean for each app, I don't think it's listed anywhere, though you may find it in the `repo-server` logs. Like so
What are some alternatives?
nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.
drone - Gitness is an Open Source developer platform with Source Control management, Continuous Integration and Continuous Delivery. [Moved to: https://github.com/harness/gitness]
longhorn - Cloud-Native distributed storage built on and for Kubernetes
flagger - Progressive delivery Kubernetes operator (Canary, A/B Testing and Blue/Green deployments)
csi-s3 - A Container Storage Interface for S3
Jenkins - Jenkins automation server
csi-driver-nfs - This driver allows Kubernetes to access NFS servers on Linux nodes.
terraform-controller - Use K8s to Run Terraform
GlusterFS - Gluster Filesystem: Build your distributed storage in minutes
werf - A solution for implementing efficient and consistent software delivery to Kubernetes facilitating best practices.
local-path-provisioner - Dynamically provisioning persistent local storage with Kubernetes
atlantis - Terraform Pull Request Automation