argocd-autopilot VS argo-cd

Compare argocd-autopilot and argo-cd to see how they differ.

                argocd-autopilot      argo-cd
Mentions        22                    72
Stars           840                   16,081
Growth          3.5%                  3.2%
Activity        7.8                   9.9
Last commit     6 days ago            6 days ago
Language        Go                    Go
License         Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

argocd-autopilot

Posts with mentions or reviews of argocd-autopilot. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-23.
  • Setup Argocd-Autopilot from scratch
    1 project | /r/ArgoCD | 6 Jun 2023
  • Is there a better way?
    1 project | /r/Terraform | 25 May 2023
    # get the nodes in the cluster
    data "proxmox_virtual_environment_nodes" "proxmox_nodes" {}

    # VM Definition
    resource "proxmox_virtual_environment_vm" "example" {
      count     = var.vm_count
      name      = count.index + 1 <= var.vm_masters ? "${var.vm_name}-master-${format("%02d", count.index + 1)}" : "${var.vm_name}-worker-${format("%02d", count.index - (var.vm_masters - 1))}"
      node_name = data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)]
      vm_id     = count.index + 1 <= var.vm_masters ? var.vm_proxmox_id + count.index : var.vm_proxmox_id + count.index + (var.vm_proxmox_id_offset - var.vm_masters)
      tags      = sort(concat(var.vm_proxmox_tags, [count.index + 1 <= var.vm_masters ? "master" : "worker"]))

      agent {
        enabled = true
        trim    = true
      }

      cpu {
        sockets = var.vm_sockets
        cores   = var.vm_cores
      }

      memory {
        dedicated = count.index + 1 <= var.vm_masters ? var.vm_mem_master : var.vm_mem_worker
      }

      disk {
        interface    = "scsi0"
        datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
        ssd          = true
        size         = count.index + 1 <= var.vm_masters ? var.vm_disk_size_master : var.vm_disk_size_worker
        iothread     = true
        discard      = "on"
      }

      network_device {
        model       = "virtio"
        mac_address = count.index + 1 <= var.vm_masters ? "${var.net_mac_address_base}AA:${format("%02d", count.index)}" : "${var.net_mac_address_base}BB:${format("%02d", count.index - var.vm_masters)}"
        # vlan_id = var.net_vlan_id # Not needed since using dedicated interface
        bridge = var.net_bridge
      }

      serial_device {}

      # clone information
      clone {
        vm_id        = var.clone_target_local ? var.clone_vm_id + (count.index % var.vm_masters) : var.clone_vm_id
        datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
        node_name    = var.clone_target_local ? data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)] : data.proxmox_virtual_environment_nodes.proxmox_nodes.names[0]
      }

      # had to add a wait for agent to come alive
      provisioner "remote-exec" {
        inline = [
          "sudo cloud-init status --wait",
          "sudo systemctl start qemu-guest-agent",
        ]

        connection {
          type        = "ssh"
          agent       = false
          port        = 22
          host        = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
          private_key = file(var.public_key_path)
          user        = var.vm_username
        }
      }
    }

    # Create file for ansible inventory
    resource "local_file" "k3s_file" {
      content = templatefile(
        "${path.module}/templates/inventory_ansible.tftpl",
        {
          ansible_masters = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, 0, var.vm_masters) : join("", [vm.ipv4_addresses[1][0]])])
          ansible_nodes   = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, var.vm_masters, var.vm_count) : join("", [vm.ipv4_addresses[1][0]])])
        }
      )
      filename = "${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
    }

    # connect to the Ansible control node and call the ansible playbook to build the k3s cluster
    resource "null_resource" "call-ansible" {
      provisioner "local-exec" {
        command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ${path.module}/../ansible-k3s/site.yml -i ${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
      }
      depends_on = [local_file.k3s_file]
    }

    # copy the kubeconfig locally so we can issue commands against the cluster
    resource "null_resource" "copy-kubeconfig" {
      provisioner "local-exec" {
        command = "scp -o 'StrictHostKeyChecking no' seb@${proxmox_virtual_environment_vm.example[0].ipv4_addresses[1][0]}:~/.kube/config ~/.kube/config"
      }
      depends_on = [null_resource.call-ansible]
    }

    # bootstrap the cluster with argocd-autopilot
    resource "null_resource" "argocd-autopilot" {
      provisioner "local-exec" {
        command = (
          var.first_install ?
          "argocd-autopilot repo bootstrap --repo ${var.github_repo} -t ${var.github_token} --app https://github.com/argoproj-labs/argocd-autopilot/manifests/ha" :
          "argocd-autopilot repo bootstrap --recover --app ${var.github_repo}.git/bootstrap/argo-cd"
        )
      }
      depends_on = [null_resource.copy-kubeconfig]
    }
  • Setting up ArgoCD from scratch
    4 projects | /r/ArgoCD | 23 May 2023
  • Declarative GitOps for...my ArgoCD itself?
    3 projects | /r/kubernetes | 9 Mar 2023
    I use Argo CD Autopilot, which bootstraps Argo CD in a self-managing structure. If nothing else, copy the repo structure: https://github.com/argoproj-labs/argocd-autopilot
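    For context, the repository layout that autopilot generates looks roughly like this (top-level directory names from the autopilot docs; everything below them is illustrative):

```
gitops-repo/
├── bootstrap/     # Argo CD's own manifests plus the root application
├── projects/      # one definition per Argo CD AppProject
└── apps/          # your applications, grouped by project
```

    The point of the structure is that Argo CD's own installation lives in the same Git repo it watches, so it reconciles itself like any other application.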
  • How to Install and Upgrade Argo CD
    2 projects | dev.to | 17 Jan 2023
    We use the same approach internally and we fully open-sourced our solution at https://argocd-autopilot.readthedocs.io/en/stable/
  • Argocd kustomize repository structure
    1 project | /r/kubernetes | 22 Dec 2022
  • Argo CD for Beginners
    4 projects | dev.to | 18 Dec 2022
    I recommend utilising Autopilot, a companion project that not only installs Argo CD but also commits all configurations to git, so Argo CD can maintain itself using GitOps.
  • ArgoCD installation
    3 projects | /r/kubernetes | 5 Oct 2022
    Check https://argocd-autopilot.readthedocs.io/en/stable/. It is an installer that does exactly that: it installs Argo CD, sets it up to manage itself, and offers a suggested bootstrap structure for your applications.
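    As a sketch of what that bootstrap invocation looks like (the flags match those used in the Terraform example above; the repo URL and token are placeholders, not real values):

```shell
# Assumes kubectl is already pointed at the target cluster.
# Installs Argo CD, pushes its manifests to the given Git repo,
# and creates a root application so Argo CD manages itself.
argocd-autopilot repo bootstrap \
  --repo https://github.com/<owner>/<gitops-repo> \
  -t <github-token>
```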
  • How to set up a repo of repos for argo gitops?
    4 projects | /r/devops | 30 Sep 2022
    Check out the ArgoCD autopilot if you're using kustomize rather than helm.
  • Suggestion for Gitlab pipelines with ArgoCD
    2 projects | /r/devops | 25 Aug 2022

argo-cd

Posts with mentions or reviews of argo-cd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-19.
  • ArgoCD Deployment on RKE2 with Cilium Gateway API
    2 projects | dev.to | 19 Feb 2024
    The code above will create the argocd Kubernetes namespace and deploy the latest stable manifest. If you would like to install a specific manifest, have a look here.
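    Concretely, those two steps are typically:

```shell
# Create the namespace and apply the latest stable install manifest
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```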
  • 5-Step Approach: Projectsveltos for Kubernetes add-on deployment and management on RKE2
    5 projects | dev.to | 18 Dec 2023
    In this blog post, we will demonstrate how easy and fast it is to deploy Sveltos on an RKE2 cluster with the help of ArgoCD, register two RKE2 Cluster API (CAPI) clusters, and create a ClusterProfile to deploy the Prometheus and Grafana Helm charts down to the managed CAPI clusters.
  • 14 DevOps and SRE Tools for 2024: Your Ultimate Guide to Stay Ahead
    10 projects | dev.to | 4 Dec 2023
    Argo CD
  • Implementing GitOps with Argo CD, GitHub, and Azure Kubernetes Service
    1 project | dev.to | 13 Nov 2023
    $version = (Invoke-RestMethod https://api.github.com/repos/argoproj/argo-cd/releases/latest).tag_name
    Invoke-WebRequest -Uri "https://github.com/argoproj/argo-cd/releases/download/$version/argocd-windows-amd64.exe" -OutFile "argocd.exe"
  • Verto.sh: A New Hub Connecting Beginners with Open-Source Projects
    2 projects | news.ycombinator.com | 26 Oct 2023
    This is cool - I can think of some projects that are amazing as first contributors, and others I can think of that are terrible.

    One thing I think the tool doesn't address is why someone should contribute to a particular project. Having stars is interesting, and a proxy for at least historical activity, but also kind of useless here - take argoproj/argo-cd [1] as an example - 14.5k stars, with a backlog of 2.7k issues and an issue tracker that's a real mess.

    Either way, I think this tool is neat for trying to gain some experience in a project purely based on language.

    [1] https://github.com/argoproj/argo-cd/issues?q=is%3Aissue+is%3...

  • Sharding the Clusters across Argo CD Application Controller Replicas
    1 project | dev.to | 4 Oct 2023
    In our case, our team went ahead with Solution B, as that was the only solution present when the issue occurred. However, with the release of Argo CD 2.8.0 (released on August 7, 2023), things have changed - for the better :). Now, there are two ways to handle the sharding issue with the Argo CD Application Controller:
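    One of those mechanisms can be sketched as pinning a managed cluster to a specific controller shard through the `shard` field of its Argo CD cluster secret; the secret name, cluster name, and server URL below are illustrative, not from the post:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-secret          # illustrative name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: my-cluster
  server: https://my-cluster.example.com
  shard: "1"                       # pin this cluster to controller shard 1
```

    This only takes effect when the application controller runs with more than one replica; consult the Argo CD high-availability docs for the replica and sharding-algorithm settings for your version.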
  • Real Time DevOps Project | Deploy to Kubernetes Using Jenkins | End to End DevOps Project | CICD
    4 projects | dev.to | 29 Sep 2023
    $ kubectl create namespace argocd
    # Next, let's apply the yaml configuration files for ArgoCD
    $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    # Now we can view the pods created in the ArgoCD namespace
    $ kubectl get pods -n argocd
    # To interact with the API Server we need to deploy the CLI
    $ curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
    $ chmod +x /usr/local/bin/argocd
    # Expose argocd-server
    $ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
    # Wait about 2 minutes for the LoadBalancer creation
    $ kubectl get svc -n argocd
    # Get the password and decode it
    $ kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
    $ echo WXVpLUg2LWxoWjRkSHFmSA== | base64 --decode
  • Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
    17 projects | dev.to | 21 Jul 2023
    From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cillium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
  • FluxCD vs Weaveworks
    1 project | /r/devops | 1 May 2023
    lol! Wham! Third choice! https://github.com/argoproj/argo-cd
  • Helm Template Command
    1 project | /r/argoproj | 26 Apr 2023
    If you mean for each app, I don't think it's listed anywhere though you may find it in `repo-server` logs. Like so
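    A sketch of how to pull those logs, assuming the default deployment name from the upstream install manifests (the exact log text varies between Argo CD versions, so the filter term is a guess):

```shell
# Tail the repo-server logs and filter for Helm invocations
kubectl logs -n argocd deployment/argocd-repo-server --tail=500 | grep -i helm
```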

What are some alternatives?

When comparing argocd-autopilot and argo-cd you can also consider the following projects:

argocd-example-apps - Example Apps to Demonstrate Argo CD

drone - Gitness is an Open Source developer platform with Source Control management, Continuous Integration and Continuous Delivery. [Moved to: https://github.com/harness/gitness]

argocd-image-updater - Automatic container image update for Argo CD

flagger - Progressive delivery Kubernetes operator (Canary, A/B Testing and Blue/Green deployments)

Helm-Chart-Boilerplates - Example implementations of the universal helm charts

Jenkins - Jenkins automation server

website - 🌐 Source code for OpenGitOps website

terraform-controller - Use K8s to Run Terraform

HomeBrew - ๐Ÿบ The missing package manager for macOS (or Linux)

werf - A solution for implementing efficient and consistent software delivery to Kubernetes facilitating best practices.

gitops-workloads-demo - Demonstrate how Argo ApplicationSets work

atlantis - Terraform Pull Request Automation