argocd-autopilot
flux2
| | argocd-autopilot | flux2 |
|---|---|---|
| Mentions | 22 | 83 |
| Stars | 840 | 5,927 |
| Growth | 3.5% | 3.1% |
| Activity | 7.8 | 9.2 |
| Latest commit | 6 days ago | 3 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
argocd-autopilot
- Setup Argocd-Autopilot from scratch
- Is there a better way?
```hcl
# get the nodes in the cluster
data "proxmox_virtual_environment_nodes" "proxmox_nodes" {}

# VM Definition
resource "proxmox_virtual_environment_vm" "example" {
  count     = var.vm_count
  name      = count.index + 1 <= var.vm_masters ? "${var.vm_name}-master-${format("%02d", count.index + 1)}" : "${var.vm_name}-worker-${format("%02d", count.index - (var.vm_masters - 1))}"
  node_name = data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)]
  vm_id     = count.index + 1 <= var.vm_masters ? var.vm_proxmox_id + count.index : var.vm_proxmox_id + count.index + (var.vm_proxmox_id_offset - var.vm_masters)
  tags      = sort(concat(var.vm_proxmox_tags, [count.index + 1 <= var.vm_masters ? "master" : "worker"]))

  agent {
    enabled = true
    trim    = true
  }

  cpu {
    sockets = var.vm_sockets
    cores   = var.vm_cores
  }

  memory {
    dedicated = count.index + 1 <= var.vm_masters ? var.vm_mem_master : var.vm_mem_worker
  }

  disk {
    interface    = "scsi0"
    datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
    ssd          = true
    size         = count.index + 1 <= var.vm_masters ? var.vm_disk_size_master : var.vm_disk_size_worker
    iothread     = true
    discard      = "on"
  }

  network_device {
    model       = "virtio"
    mac_address = count.index + 1 <= var.vm_masters ? "${var.net_mac_address_base}AA:${format("%02d", count.index)}" : "${var.net_mac_address_base}BB:${format("%02d", count.index - var.vm_masters)}"
    # vlan_id   = var.net_vlan_id # Not needed since using dedicated interface
    bridge      = var.net_bridge
  }

  serial_device {}

  # clone information
  clone {
    vm_id        = var.clone_target_local ? var.clone_vm_id + (count.index % var.vm_masters) : var.clone_vm_id
    datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
    node_name    = var.clone_target_local ? data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)] : data.proxmox_virtual_environment_nodes.proxmox_nodes.names[0]
  }

  # had to add a wait for agent to come alive
  provisioner "remote-exec" {
    inline = [
      "sudo cloud-init status --wait",
      "sudo systemctl start qemu-guest-agent",
    ]

    connection {
      type        = "ssh"
      agent       = false
      port        = 22
      host        = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
      private_key = file(var.public_key_path)
      user        = var.vm_username
    }
  }
}

# Create file for ansible inventory
resource "local_file" "k3s_file" {
  content = templatefile(
    "${path.module}/templates/inventory_ansible.tftpl",
    {
      ansible_masters = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, 0, var.vm_masters) : join("", [vm.ipv4_addresses[1][0]])])
      ansible_nodes   = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, var.vm_masters, var.vm_count) : join("", [vm.ipv4_addresses[1][0]])])
    }
  )
  filename = "${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
}

# connecting to the Ansible control node and call ansible playbook to build the k3s cluster
resource "null_resource" "call-ansible" {
  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ${path.module}/../ansible-k3s/site.yml -i ${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
  }
  depends_on = [local_file.k3s_file]
}

# Copy the kubectl file locally so we can issue commands against the cluster
resource "null_resource" "copy-kubeconfig" {
  provisioner "local-exec" {
    command = "scp -o 'StrictHostKeyChecking no' seb@${proxmox_virtual_environment_vm.example[0].ipv4_addresses[1][0]}:~/.kube/config ~/.kube/config"
  }
  depends_on = [null_resource.call-ansible]
}

# bootstrap the cluster with argocd-autopilot
resource "null_resource" "argocd-autopilot" {
  provisioner "local-exec" {
    command = (
      var.first_install ?
      "argocd-autopilot repo bootstrap --repo ${var.github_repo} -t ${var.github_token} --app https://github.com/argoproj-labs/argocd-autopilot/manifests/ha" :
      "argocd-autopilot repo bootstrap --recover --app ${var.github_repo}.git/bootstrap/argo-cd"
    )
  }
  depends_on = [null_resource.copy-kubeconfig]
}
```
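Running a flow like this follows the usual Terraform workflow; a minimal sketch, assuming the snippet's variables (`first_install`, `github_repo`, `github_token`) are declared in a `variables.tf` and the Proxmox provider is configured (all values below are placeholders, not the author's real settings):

```bash
# Sketch only: placeholder values for the variables referenced in the config above
terraform init
terraform apply \
  -var="first_install=true" \
  -var="github_repo=https://github.com/example/gitops-repo" \
  -var="github_token=<token>"
```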
- Setting up ArgoCD from scratch
- Declarative GitOps for...my ArgoCD itself?
I use Argo CD Autopilot, which bootstraps Argo CD in a self-managing structure. If nothing else, copy the repo structure: https://github.com/argoproj-labs/argocd-autopilot
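A minimal sketch of what that bootstrap and the resulting repo layout look like, based on the autopilot getting-started docs; the repository URL and token are placeholders:

```bash
export GIT_TOKEN=<personal-access-token>                # placeholder
export GIT_REPO=https://github.com/example/gitops-repo  # placeholder

# Installs Argo CD into the current kube context and pushes the config that manages it
argocd-autopilot repo bootstrap

# Rough shape of the repo afterwards (not an exact listing):
#   bootstrap/argo-cd/   Argo CD's own installation, managed by Argo CD itself
#   bootstrap/root.yaml  root Application pointing back at this repo
#   projects/            AppProject definitions
#   apps/                applications added later with `argocd-autopilot app create`
```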
- How to Install and Upgrade Argo CD
We use the same approach internally and we fully open-sourced our solution at https://argocd-autopilot.readthedocs.io/en/stable/
- Argocd kustomize repository structure
- Argo CD for Beginners 🐙
I recommend utilising Autopilot, a companion project that not only installs Argo CD but also commits all configurations to git, so Argo CD can maintain itself using GitOps.
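For the "commits all configurations to git" part, the project and app subcommands are what write those manifests; a hedged sketch with placeholder names (the project and app names are illustrative):

```bash
# Creates a project manifest (e.g. projects/dev.yaml) in the GitOps repo
argocd-autopilot project create dev

# Writes the app's kustomize config under apps/demo-app/, commits and pushes;
# Argo CD then deploys it by syncing from git, not from this CLI call
argocd-autopilot app create demo-app \
  --app github.com/argoproj/argocd-example-apps/guestbook \
  --project dev
```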
- ArgoCD installation
Check https://argocd-autopilot.readthedocs.io/en/stable/. It is an installer that does exactly that: it installs ArgoCD, sets it up to manage itself, and offers a suggested bootstrap for your applications.
- How to set up a repo of repos for argo gitops?
Check out the ArgoCD autopilot if you're using kustomize rather than helm.
- Suggestion for Gitlab pipelines with ArgoCD
flux2
- Self-service infrastructure as code
Given the team had already adopted GitOps and were familiar with deployments powered by Helm Releases and Flux, we wanted to move the provisioning of the infrastructure to be part of the same process of creating the service and its continuous deployment.
- Weaveworks Is Shutting Down
Your GitHub action can trigger a helm chart, or series thereof, or other infra tools. Declarative specifications, triggered procedurally with the context of the branch’s latest build. We use this pattern quite extensively for preview app workflows.
As of a year ago this is possible in a fully declarative way with Flux 2, but there are a lot more moving parts and security footguns, and the idea that the maintenance of this project has lost one of its primary sponsors is worrying at best.
https://github.com/fluxcd/flux2/discussions/831
https://blog.kluctl.io/introducing-the-template-controller-a...
- 10 Ways for Kubernetes Declarative Configuration Management
FluxCD - FluxCD is another popular GitOps tool that allows developers to use a Git repository as the sole source of configuration. Flux automatically ensures that the state of the Kubernetes cluster is synchronized with the configuration in the Git repository. It supports automatic updates, meaning Flux can monitor Docker image repositories for new images and push updates to the cluster.
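The image-update behaviour described above maps to Flux's image automation controllers; a rough sketch using the flux CLI, where the image name, semver range, and paths are illustrative rather than taken from the quoted article:

```bash
# Scan a container registry for new tags
flux create image repository podinfo \
  --image=ghcr.io/stefanprodan/podinfo \
  --interval=5m

# Select the newest tag matching a semver range
flux create image policy podinfo \
  --image-ref=podinfo \
  --select-semver=">=6.0.0"

# Write the selected tags back into the manifests in git
flux create image update flux-system \
  --git-repo-ref=flux-system \
  --git-repo-path="./clusters/my-cluster" \
  --checkout-branch=main \
  --author-name=fluxcdbot \
  --author-email=fluxcdbot@users.noreply.github.com
```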
- SmartCash Project - GitOps with FluxCD
```bash
#!/bin/bash
aws eks update-kubeconfig --name $CLUSTER_NAME --region $AWS_REGION

flux_installed=$(kubectl api-resources | grep flux)
if [ -z "$flux_installed" ]; then
  echo "flux is not installed"
  curl -s https://fluxcd.io/install.sh | sudo bash
  flux bootstrap github \
    --owner=$GH_USER_NAME \
    --repository=$FLUX_REPO_NAME \
    --path="clusters/$ENVIRONMENT/$CLUSTER_NAME/bootstrap" \
    --branch=main \
    --personal
else
  echo "flux is installed"
fi
```
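After a bootstrap like the script above, a quick way to confirm the controllers are reconciling is the flux CLI itself:

```bash
# Verify prerequisites and controller health
flux check

# List what Flux is currently syncing
flux get sources git
flux get kustomizations
```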
- Best Kubernetes DevOps Tools: A Comprehensive Guide
Flux CD enables continuous deployment to Kubernetes through GitOps by syncing Git repositories with Kubernetes clusters. It manages Kubernetes manifests as code, syncs git repo changes to clusters, and automates checks, deployments, and updates within clusters.
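The sync described there boils down to two objects, a Git source and a Kustomization; a minimal sketch with the flux CLI, where the repository URL and path are placeholders:

```bash
# Register the Git repository as a source
flux create source git my-app \
  --url=https://github.com/example/my-app-config \
  --branch=main \
  --interval=1m

# Apply and prune the manifests under ./deploy on every change
flux create kustomization my-app \
  --source=GitRepository/my-app \
  --path="./deploy" \
  --prune=true \
  --interval=10m
```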
- Flux – a tool for keeping K8s clusters in sync with sources of configuration
- Git going with GitOps on AKS: A Step-by-Step Guide using FluxCD AKS Extension
FluxCD is a GitOps tool developed by Weaveworks that allows you to implement continuous and progressive delivery of your applications on Kubernetes. It is a CNCF graduated project that offers a set of controllers to monitor Git repositories and reconciles the cluster's actual state with the desired state defined by manifests committed in the repo.
- Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
- Reducing Cloud Costs on Kubernetes Dev Envs
Instead, we will create a single long-lived cluster, and deploy our application in different namespaces. There are a bunch of ways to do that - see ArgoCD, Flux, custom internal tooling, or other solutions (we use our own product). That way, we:
- What is the proper, kubernetes native way of working with multiple clusters for DR, HA?
One is to make sure the configuration in both clusters is the same, and for that there are many tools, like fluxcd or projectsveltos.
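With Flux, one way to keep two clusters on the same configuration is to bootstrap both against the same repository, using per-cluster paths that reference a shared base; a sketch with placeholder owner, repo, and path names:

```bash
# Run once per cluster, with kubectl pointed at the matching context
flux bootstrap github \
  --owner=example-org \
  --repository=fleet-config \
  --branch=main \
  --path=clusters/primary   # use e.g. clusters/dr for the standby cluster
```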
What are some alternatives?
argocd-example-apps - Example Apps to Demonstrate Argo CD
helmfile - Deploy Kubernetes Helm Charts
argo-cd - Declarative Continuous Deployment for Kubernetes
argocd-image-updater - Automatic container image update for Argo CD
spinnaker - Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
Helm-Chart-Boilerplates - Example implementations of the universal helm charts
terraform-provider-flux - Terraform provider for bootstrapping Flux
website - 🌐 Source code for OpenGitOps website
skaffold - Easy and Repeatable Kubernetes Development
HomeBrew - 🍺 The missing package manager for macOS (or Linux)
werf - A solution for implementing efficient and consistent software delivery to Kubernetes facilitating best practices.