cp-helm-charts vs argo-helm

| | cp-helm-charts | argo-helm |
|---|---|---|
| Mentions | 2 | 25 |
| Stars | 778 | 1,740 |
| Growth (stars, month over month) | - | 2.0% |
| Activity | 4.0 | 9.5 |
| Latest commit | 9 months ago | 6 days ago |
| Language | Mustache | Mustache |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cp-helm-charts
-
Using a connector with Helm-installed Kafka/Confluent
I have installed Kafka on a local Minikube cluster using the Helm charts from https://github.com/confluentinc/cp-helm-charts, following these instructions: https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html, like so:
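The exact commands are elided in the post; a minimal sketch of such an install, assuming the Confluent chart repository from the linked docs and an illustrative release name of my-confluent:

# Add the Confluent chart repository referenced by the cp-helm-charts docs
helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
helm repo update

# Install the umbrella chart into the current Minikube context ("my-confluent" is an assumed release name)
helm install my-confluent confluentinc/cp-helm-charts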
-
An alternative or simpler way to event stream with or without Kafka
Now comes the challenging part. I would love to use Kafka to publish the events in my microservice network, but geez, does it get complicated there. I've found some Helm charts here which seem to be meant for development, testing, or proof-of-concept use, but they seem to contain more than just Kafka and ZooKeeper. Looking at the documentation for a real production setup, it seems like an even more daunting task.
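If the extra components are the main source of complexity, the cp-helm-charts umbrella chart can be trimmed down; a rough sketch, assuming the per-component enabled flags exposed in the chart's default values:

# Install only Kafka and ZooKeeper, switching off the other Confluent Platform components
# (the *.enabled flags are assumed to match the subchart names in cp-helm-charts' values.yaml)
helm install my-kafka confluentinc/cp-helm-charts \
  --set cp-schema-registry.enabled=false \
  --set cp-kafka-rest.enabled=false \
  --set cp-kafka-connect.enabled=false \
  --set cp-ksql-server.enabled=false \
  --set cp-control-center.enabled=false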
argo-helm
-
Local Kubernetes Cluster - External traffic without Ingress Using Kftray
locals {
  services = {
    argocd = {
      namespace  = "argocd"
      repository = "https://argoproj.github.io/argo-helm"
      chart      = "argo-cd"
      version    = var.argocd_chart_version
      kftray = {
        server = {
          alias       = "argocd"
          local_port  = "16080"
          target_port = "http"
        }
      }
    }
    # ... other services ...
  }

  services_values = {
    for service_name, service in local.services :
    service_name => templatefile("${path.module}/templates/${service_name}-values.yaml.tpl", {
      kftray = service.kftray
    })
  }
}
-
Installing ArgoCD and Securing Access Using Amazon Cognito
resource "helm_release" "argocd" { name = "argocd" repository = "https://argoproj.github.io/argo-helm" chart = "argo-cd" namespace = "argocd" create_namespace = true version = "4.0.0" values = [file("./argo.yaml")] }
-
GitOps + ArgoCD: A Perfect Match for Kubernetes Continuous Delivery
# Ensure you're in the Kind cluster. This command should return 'kind-gitops-argocd' context.
kubectl config current-context

# Add the ArgoCD Helm repository
helm repo add argo https://argoproj.github.io/argo-helm

# Update the local Helm chart cache
helm repo update

# Install the ArgoCD Helm chart
helm install argocd --namespace argocd --create-namespace argo/argo-cd

# Create context for the ArgoCD namespace
kubectl config set-context kind-ns-argocd --namespace argocd --cluster kind-gitops-argocd --user kind-gitops-argocd

# Set the current context for the argocd namespace
kubectl config use-context kind-ns-argocd

# Grant cluster-admin role to the ArgoCD service account (use with caution in production)
kubectl apply -f argocd/rbac/argocd-svc-account-clusterrole-admin-binding.yaml

# Get the admin password via kubectl
kubectl get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Access the ArgoCD UI (http://localhost:8080) using 'admin' as the username and the copied password
kubectl port-forward service/argocd-server 8080:443
-
Installing multiple helm charts in one go [Approach 3 - using simple bash utility]
dry_run: false
create_namespace: true
wait: false
timeout: false   # If true, defaults to 20 mins
charts:
  - release_name: nginx
    chart_name: nginx
    chart_repo: oci://registry-1.docker.io/bitnamicharts
    values_file: values/nginx-values.yaml
  - release_name: argocd
    chart_name: argo-cd
    chart_repo: https://argoproj.github.io/argo-helm
    values_file: values/argo-cd.yaml
    version: 6.4.0
    namespace: argo-cd
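The bash utility itself isn't reproduced here; a minimal sketch of the kind of loop it could run over this config, assuming the yq YAML processor (v4) and ignoring the per-chart version and the dry_run/wait/timeout options:

#!/usr/bin/env bash
# Iterate over the charts list in config.yaml and install each release with Helm.
set -e

count=$(yq '.charts | length' config.yaml)
for i in $(seq 0 $((count - 1))); do
  release=$(yq ".charts[$i].release_name" config.yaml)
  chart=$(yq ".charts[$i].chart_name" config.yaml)
  repo=$(yq ".charts[$i].chart_repo" config.yaml)
  values=$(yq ".charts[$i].values_file" config.yaml)
  namespace=$(yq ".charts[$i].namespace // \"default\"" config.yaml)

  if [[ "$repo" == oci://* ]]; then
    # OCI registries are referenced directly as <repo>/<chart>
    helm upgrade --install "$release" "$repo/$chart" -n "$namespace" --create-namespace -f "$values"
  else
    # Classic HTTP chart repositories can be passed with --repo, without a prior 'helm repo add'
    helm upgrade --install "$release" "$chart" --repo "$repo" -n "$namespace" --create-namespace -f "$values"
  fi
done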
-
Github as Helm repository
$ helm repo add boris https://boris.github.io/kubernetes/helm/charts
$ helm repo list
NAME                  URL
ealenn                https://ealenn.github.io/charts
bitnami               https://charts.bitnami.com/bitnami
kubernetes-dashboard  https://kubernetes.github.io/dashboard/
argo                  https://argoproj.github.io/argo-helm
boris                 https://boris.github.io/kubernetes/helm/charts/
$ helm install mychart boris/mychart
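For the publishing side, serving charts from a GitHub Pages branch generally comes down to packaging the chart and regenerating the repository index; a sketch, with the directory layout and chart name assumed from the repo URL above:

# Package the chart and rebuild the index.yaml served by GitHub Pages (paths are assumptions)
helm package ./mychart -d kubernetes/helm/charts
helm repo index kubernetes/helm/charts --url https://boris.github.io/kubernetes/helm/charts
git add kubernetes/helm/charts
git commit -m "Publish mychart"
git push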
-
Using ArgoCD & Terraform to Manage Kubernetes Cluster
data "aws_eks_cluster_auth" "main" { name = aws_eks_cluster.main.name } resource "helm_release" "argocd" { depends_on = [aws_eks_node_group.main] name = "argocd" repository = "https://argoproj.github.io/argo-helm" chart = "argo-cd" version = "4.5.2" namespace = "argocd" create_namespace = true set { name = "server.service.type" value = "LoadBalancer" } set { name = "server.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-type" value = "nlb" } } data "kubernetes_service" "argocd_server" { metadata { name = "argocd-server" namespace = helm_release.argocd.namespace } }
-
ArgoCD: Use of Risky or Missing Cryptographic Algorithms in Redis Cache
FWIW: The Helm chart has network policy in place:
https://github.com/argoproj/argo-helm/blob/main/charts/argo-...
If you're using a CNI that supports network policy (e.g. AWS VPC CNI on EKS, Calico, etc.), I think this should more or less cover you, but I haven't personally tested it.
I think it's also probably a better practice to install "control plane" type software like Argo on a different, dedicated cluster. Argo supports this concept (and can in fact manage deployments in multiple clusters remotely). This way your main mission workloads are completely segmented from your privileged control plane software, just as another defense-in-depth measure.
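For reference, hooking a workload cluster up to an Argo CD instance that lives on a separate control-plane cluster is done with the argocd CLI; a brief sketch, with the server address and kubeconfig context name made up for illustration:

# Log in to the Argo CD API server running on the dedicated control-plane cluster (address is illustrative)
argocd login argocd.control-plane.example.com

# Register the remote workload cluster so Argo CD can deploy to it (context name is illustrative)
argocd cluster add workload-cluster-context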
-
Using ArgoCD Image Updater with ACR
resource "helm_release" "image_updater" { name = "argocd-image-updater" repository = "https://argoproj.github.io/argo-helm" chart = "argocd-image-updater" namespace = "argocd" values = [ <
-
Introducing ArgoCD: A GitOps Approach to Continuous Deployment
kubectl create namespace argocd
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd --namespace argocd
-
2- Your first ARGO-CD
We will use Helm to install Argo CD with the community-maintained chart from argoproj/argo-helm, because the Argo project doesn't provide an official Helm chart. We will render their Helm chart for Argo CD locally on our side, manipulate it, and override its default values; we can also run helm lint and helm template on the chart to see whether there are any errors. We're going to use chart version 5.50.0, which matches appVersion: v2.8.6. You can find all the details for the chart, and we're also going to override some values in default-values.yaml.
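A sketch of that local render-and-check workflow, assuming the chart is pulled into the current directory and the override file is named default-values.yaml as in the post:

# Fetch and unpack chart version 5.50.0 locally
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm pull argo/argo-cd --version 5.50.0 --untar

# Lint the chart and render the manifests with the overrides to check for errors
helm lint ./argo-cd -f default-values.yaml
helm template argocd ./argo-cd --namespace argocd -f default-values.yaml > rendered.yaml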
What are some alternatives?
helm-charts - Prometheus community Helm charts
charts - Public helm charts
helm-charts - OpenSourced Helm charts
pihole-kubernetes - PiHole on kubernetes
cloudnative-pg - CloudNativePG is a comprehensive platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance
charts - ⚠️ Deprecated : Helm charts for applications you run at home
charts - OpenEBS Helm Charts and other utilities
helm-charts - Helm Charts for Jaeger backend
argo-cd - Declarative Continuous Deployment for Kubernetes
helm-zabbix - Helm chart for Zabbix
crd-to-sample-yaml - Generate a sample YAML file from a CRD and view it rendered on a nice website