argocd-autopilot
argocd-image-updater
|  | argocd-autopilot | argocd-image-updater |
|---|---|---|
| Mentions | 22 | 21 |
| Stars | 840 | 1,110 |
| Stars growth | 3.5% | 5.3% |
| Activity | 7.8 | 6.3 |
| Latest commit | 7 days ago | 13 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
argocd-autopilot
- Setup Argocd-Autopilot from scratch
- Is there a better way?
```hcl
# Get the nodes in the cluster
data "proxmox_virtual_environment_nodes" "proxmox_nodes" {}

# VM definition
resource "proxmox_virtual_environment_vm" "example" {
  count     = var.vm_count
  name      = count.index + 1 <= var.vm_masters ? "${var.vm_name}-master-${format("%02d", count.index + 1)}" : "${var.vm_name}-worker-${format("%02d", count.index - (var.vm_masters - 1))}"
  node_name = data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)]
  vm_id     = count.index + 1 <= var.vm_masters ? var.vm_proxmox_id + count.index : var.vm_proxmox_id + count.index + (var.vm_proxmox_id_offset - var.vm_masters)
  tags      = sort(concat(var.vm_proxmox_tags, [count.index + 1 <= var.vm_masters ? "master" : "worker"]))

  agent {
    enabled = true
    trim    = true
  }

  cpu {
    sockets = var.vm_sockets
    cores   = var.vm_cores
  }

  memory {
    dedicated = count.index + 1 <= var.vm_masters ? var.vm_mem_master : var.vm_mem_worker
  }

  disk {
    interface    = "scsi0"
    datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
    ssd          = true
    size         = count.index + 1 <= var.vm_masters ? var.vm_disk_size_master : var.vm_disk_size_worker
    iothread     = true
    discard      = "on"
  }

  network_device {
    model       = "virtio"
    mac_address = count.index + 1 <= var.vm_masters ? "${var.net_mac_address_base}AA:${format("%02d", count.index)}" : "${var.net_mac_address_base}BB:${format("%02d", count.index - var.vm_masters)}"
    # vlan_id   = var.net_vlan_id # Not needed since using a dedicated interface
    bridge      = var.net_bridge
  }

  serial_device {}

  # Clone information
  clone {
    vm_id        = var.clone_target_local ? var.clone_vm_id + (count.index % var.vm_masters) : var.clone_vm_id
    datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
    node_name    = var.clone_target_local ? data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)] : data.proxmox_virtual_environment_nodes.proxmox_nodes.names[0]
  }

  # Had to add a wait for the agent to come alive
  provisioner "remote-exec" {
    inline = [
      "sudo cloud-init status --wait",
      "sudo systemctl start qemu-guest-agent",
    ]

    connection {
      type  = "ssh"
      agent = false
      port  = 22
      host  = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
      # NOTE: this must point at the *private* key file, despite the variable name
      private_key = file(var.public_key_path)
      user        = var.vm_username
    }
  }
}

# Create the file for the Ansible inventory
resource "local_file" "k3s_file" {
  content = templatefile(
    "${path.module}/templates/inventory_ansible.tftpl",
    {
      ansible_masters = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, 0, var.vm_masters) : vm.ipv4_addresses[1][0]])
      ansible_nodes   = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, var.vm_masters, var.vm_count) : vm.ipv4_addresses[1][0]])
    }
  )
  filename = "${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
}

# Connect to the Ansible control node and run the playbook that builds the k3s cluster
resource "null_resource" "call-ansible" {
  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ${path.module}/../ansible-k3s/site.yml -i ${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
  }
  depends_on = [local_file.k3s_file]
}

# Copy the kubeconfig locally so we can issue commands against the cluster
resource "null_resource" "copy-kubeconfig" {
  provisioner "local-exec" {
    command = "scp -o 'StrictHostKeyChecking no' seb@${proxmox_virtual_environment_vm.example[0].ipv4_addresses[1][0]}:~/.kube/config ~/.kube/config"
  }
  depends_on = [null_resource.call-ansible]
}

# Bootstrap the cluster with argocd-autopilot
resource "null_resource" "argocd-autopilot" {
  provisioner "local-exec" {
    command = (
      var.first_install ?
      "argocd-autopilot repo bootstrap --repo ${var.github_repo} -t ${var.github_token} --app https://github.com/argoproj-labs/argocd-autopilot/manifests/ha" :
      "argocd-autopilot repo bootstrap --recover --app ${var.github_repo}.git/bootstrap/argo-cd"
    )
  }
  depends_on = [null_resource.copy-kubeconfig]
}
```
- Setting up ArgoCD from scratch
- Declarative GitOps for...my ArgoCD itself?
I use Argo CD Autopilot, which bootstraps Argo CD in a self-managing structure. If nothing else, copy the repo structure: https://github.com/argoproj-labs/argocd-autopilot
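For reference, the repo structure that autopilot commits on bootstrap looks roughly like this (a sketch from the autopilot docs; exact filenames may vary between versions):

```
├── apps/                      # your applications, one directory per app
├── bootstrap/
│   ├── argo-cd.yaml           # Application that lets Argo CD manage itself
│   ├── cluster-resources.yaml
│   └── root.yaml              # root app-of-apps pointing at projects/
└── projects/                  # one project definition per logical group of apps
```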
- How to Install and Upgrade Argo CD
We use the same approach internally and we fully open-sourced our solution at https://argocd-autopilot.readthedocs.io/en/stable/
- Argocd kustomize repository structure
- Argo CD for Beginners
I recommend utilising Autopilot, a companion project that not only installs Argo CD but also commits all configuration to git, so Argo CD can maintain itself using GitOps.
- ArgoCD installation
Check https://argocd-autopilot.readthedocs.io/en/stable/. It is an installer that does exactly that: it installs ArgoCD, sets it up to manage itself, and offers a suggested bootstrap structure for your applications.
- How to set up a repo of repos for argo gitops?
Check out ArgoCD Autopilot if you're using kustomize rather than helm.
- Suggestion for Gitlab pipelines with ArgoCD
argocd-image-updater
- Helm or Kustomize for my situation?
- How do you produce your images for argocd deployment?
Use the ArgoCD image updater
- What tool are you using to edit yaml graphically?
We already have software agents in the GitOps ecosystem that modify state in git, e.g. the image updater for ArgoCD. It watches the docker registry, and when it sees a new image it updates git on the user's behalf.
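The setup is annotation-driven. A hedged sketch of an Argo CD Application annotated for the image updater; the app name, alias `myapp`, and registry path are placeholders, while the annotation keys come from the image-updater documentation:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp                # placeholder application name
  namespace: argocd
  annotations:
    # Watch this image in the registry (alias=image)
    argocd-image-updater.argoproj.io/image-list: myapp=registry.example.com/myapp
    # Only follow semantic-version tags
    argocd-image-updater.argoproj.io/myapp.update-strategy: semver
    # Commit the new tag back to git instead of patching the live Application
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  # source/destination as usual
```

With `write-back-method: git`, the updater commits the new tag to the tracked repo, so the change goes through the normal GitOps sync rather than mutating the cluster directly.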
- Is there a CD solution that can be (painlessly) fully automated between stages?
If you are using ArgoCD you might be able to use this: https://argocd-image-updater.readthedocs.io/en/stable/
- How to keep track of 3rd-party applications' helm charts on K8S?
I just found this in my travels. I have only gone as far as reading the introduction, but it seems to be a fit: https://github.com/argoproj-labs/argocd-image-updater
- Is it possible that k8s updates image version on pod relaunch?
I've used Keel and more recently ArgoCD Image Updater (Using ArgoCD to manage deployments).
- What do you use to update image tags?
https://argocd-image-updater.readthedocs.io/en/stable/ maybe?
- Quite happy with my first 6 hour journey with ArgoCD
Use the image updater
- How do you handle change management for a self-hosted CI/CD solution?
To answer your question: you should promote docker images between environments. There are many ways to do this; check https://argocd-image-updater.readthedocs.io/en/stable/ if you haven't seen it already.
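One way the image updater can gate promotion per environment is tag filtering: give each environment's Application its own filter, so staging follows every build while production only follows release tags. A sketch for the production side; the image name and regex are illustrative, while `allow-tags` and `write-back-method` are documented image-updater annotations:

```yaml
# Production Application: only pick up release tags like v1.2.3
metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: app=registry.example.com/myapp
    argocd-image-updater.argoproj.io/app.allow-tags: regexp:^v[0-9]+\.[0-9]+\.[0-9]+$
    argocd-image-updater.argoproj.io/write-back-method: git
```

Promotion then becomes "push/retag an image so it matches the next environment's filter", with each change landing in git.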
- What are you using for the CI part of GitOps?
For ArgoCD there's argocd-image-updater: https://argocd-image-updater.readthedocs.io/en/stable/
What are some alternatives?
- argocd-example-apps - Example Apps to Demonstrate Argo CD
- keel - Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
- argo-cd - Declarative Continuous Deployment for Kubernetes
- Pulumi - Infrastructure as Code in any programming language. Build infrastructure intuitively on any cloud using familiar languages
- Helm-Chart-Boilerplates - Example implementations of the universal helm charts
- helmfile - Deploy Kubernetes Helm Charts
- website - Source code for the OpenGitOps website
- updatecli - A Declarative Dependency Management tool
- HomeBrew - The missing package manager for macOS (or Linux)
- trivy - Find vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more
- gitops-workloads-demo - Demonstrate how Argo ApplicationSets work
- applicationset - The ApplicationSet controller manages multiple Argo CD Applications as a single ApplicationSet unit, supporting deployments to large numbers of clusters, deployments of large monorepos, and enabling secure Application self-service.