Argo-helm Alternatives
Similar projects and alternatives to argo-helm
- cloudnative-pg: CloudNativePG is a comprehensive platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance.
- kftray: kubectl port-forward manager, with support for UDP and proxy connections through k8s clusters.
- argocd-vault-plugin: An Argo CD plugin to retrieve secrets from secret management tools and inject them into Kubernetes secrets.
- cp-helm-charts: (Discontinued) The Confluent Platform Helm charts enable you to deploy Confluent Platform services on Kubernetes for development, test, and proof-of-concept environments.
- arogcd-vault-plugin-with-helm: A repository containing configuration resources to set up secret injections from Vault into Helm charts with ArgoCD.
argo-helm discussion
argo-helm reviews and mentions
- Terraform from 0 to Hero

provider "azurerm" {
  features {}
}

module "aks" {
  source = "git@github.com:flavius-dinu/terraform-az-aks.git?ref=v1.0.3"
  kube_params = {
    kube1 = {
      name                = "kube1"
      rg_name             = "rg1"
      rg_location         = "westeurope"
      dns_prefix          = "kube"
      identity            = [{}]
      enable_auto_scaling = false
      node_count          = 1
      np_name             = "kube1"
      export_kube_config  = true
      kubeconfig_path     = "./config"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = module.aks.kube_config_path["kube1"]
  }
}

# Alternative way of declaring the provider
# provider "helm" {
#   kubernetes {
#     host                   = module.aks.kube_config["kube1"].0.host
#     username               = module.aks.kube_config["kube1"].0.username
#     password               = module.aks.kube_config["kube1"].0.password
#     client_certificate     = base64decode(module.aks.kube_config["kube1"].0.client_certificate)
#     client_key             = base64decode(module.aks.kube_config["kube1"].0.client_key)
#     cluster_ca_certificate = base64decode(module.aks.kube_config["kube1"].0.cluster_ca_certificate)
#   }
# }

module "helm" {
  source = "git@github.com:flavius-dinu/terraform-helm-release.git?ref=v1.0.0"
  helm = {
    argo = {
      name             = "argocd"
      repository       = "https://argoproj.github.io/argo-helm"
      chart            = "argo-cd"
      create_namespace = true
      namespace        = "argocd"
    }
  }
}
- Bootstrapping ArgoCD with Helm and Helmfile

repositories:
  - name: argo
    url: https://argoproj.github.io/argo-helm

releases:
  - name: argocd
    namespace: argocd
    createNamespace: true
    chart: argo/argo-cd
    version: ~5.46.0 # Specify the desired version
    values:
      - values/argocd-values.yaml
- Local Kubernetes Cluster - External traffic without Ingress Using Kftray

locals {
  services = {
    argocd = {
      namespace  = "argocd"
      repository = "https://argoproj.github.io/argo-helm"
      chart      = "argo-cd"
      version    = var.argocd_chart_version
      kftray = {
        server = {
          alias       = "argocd"
          local_port  = "16080"
          target_port = "http"
        }
      }
    }
    # ... other services ...
  }

  services_values = {
    for service_name, service in local.services :
    service_name => templatefile("${path.module}/templates/${service_name}-values.yaml.tpl", {
      kftray = service.kftray
    })
  }
}
- Installing ArgoCD and Securing Access Using Amazon Cognito

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
  version          = "4.0.0"
  values           = [file("./argo.yaml")]
}
- GitOps + ArgoCD: A Perfect Match for Kubernetes Continuous Delivery

# Ensure you're in the Kind cluster. This command should return 'kind-gitops-argocd' context.
kubectl config current-context

# Add the ArgoCD Helm repository
helm repo add argo https://argoproj.github.io/argo-helm

# Update the local Helm chart cache
helm repo update

# Install the ArgoCD Helm chart
helm install argocd --namespace argocd --create-namespace argo/argo-cd

# Create context for the ArgoCD namespace
kubectl config set-context kind-ns-argocd --namespace argocd --cluster kind-gitops-argocd --user kind-gitops-argocd

# Set the current context for the argocd namespace
kubectl config use-context kind-ns-argocd

# Grant cluster-admin role to the ArgoCD service account (use with caution in production)
kubectl apply -f argocd/rbac/argocd-svc-account-clusterrole-admin-binding.yaml

# Get the admin password via kubectl
kubectl get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Access the ArgoCD UI (http://localhost:8080) using 'admin' as the username and the copied password
kubectl port-forward service/argocd-server 8080:443
- Installing multiple helm charts in one go [Approach 3 - using simple bash utility]

dry_run: false
create_namespace: true
wait: false
timeout: false # If true, defaults to 20 mins
charts:
  - release_name: nginx
    chart_name: nginx
    chart_repo: oci://registry-1.docker.io/bitnamicharts
    values_file: values/nginx-values.yaml
  - release_name: argocd
    chart_name: argo-cd
    chart_repo: https://argoproj.github.io/argo-helm
    values_file: values/argo-cd.yaml
    version: 6.4.0
    namespace: argo-cd
- Github as Helm repository

$ helm repo add boris https://boris.github.io/kubernetes/helm/charts
$ helm repo list
NAME                  URL
ealenn                https://ealenn.github.io/charts
bitnami               https://charts.bitnami.com/bitnami
kubernetes-dashboard  https://kubernetes.github.io/dashboard/
argo                  https://argoproj.github.io/argo-helm
boris                 https://boris.github.io/kubernetes/helm/charts/
$ helm install mychart boris/mychart
- Using ArgoCD & Terraform to Manage Kubernetes Cluster

data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

resource "helm_release" "argocd" {
  depends_on = [aws_eks_node_group.main]

  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  version          = "4.5.2"
  namespace        = "argocd"
  create_namespace = true

  set {
    name  = "server.service.type"
    value = "LoadBalancer"
  }

  set {
    name  = "server.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-type"
    value = "nlb"
  }
}

data "kubernetes_service" "argocd_server" {
  metadata {
    name      = "argocd-server"
    namespace = helm_release.argocd.namespace
  }
}
- ArgoCD: Use of Risky or Missing Cryptographic Algorithms in Redis Cache
FWIW: the Helm chart has a network policy in place:
https://github.com/argoproj/argo-helm/blob/main/charts/argo-...
If you're using a CNI that supports network policy (e.g. AWS VPC CNI on EKS, Calico, etc.), I think this should more or less cover you, but I haven't personally tested it.
I think it's also probably better practice to install "control plane" software like Argo on a separate, dedicated cluster. Argo supports this concept (and can in fact manage deployments in multiple clusters remotely). That way your main mission workloads are completely segmented from your privileged control-plane software, as another defense-in-depth measure.
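The kind of isolation described here can also be expressed as an ordinary Kubernetes NetworkPolicy. The sketch below is illustrative only and is not the policy shipped with the chart: it restricts ingress to pods in the argocd namespace to traffic from that same namespace, and it only has an effect if the cluster's CNI enforces NetworkPolicy.

# Illustrative default-deny-style policy for the argocd namespace (not the chart's own policy)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: argocd-allow-same-namespace
  namespace: argocd
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods in the same namespace may connect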
- Using ArgoCD Image Updater with ACR

resource "helm_release" "image_updater" {
  name       = "argocd-image-updater"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argocd-image-updater"
  namespace  = "argocd"
  values = [ <
Stats
argoproj/argo-helm is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of argo-helm is Mustache.