| | shortlink | argocd-autopilot |
|---|---|---|
| Mentions | 4 | 22 |
| Stars | 677 | 843 |
| Growth | 2.5% | 2.4% |
| Activity | 10.0 | 7.6 |
| Latest commit | 3 days ago | 6 days ago |
| Language | Go | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
shortlink
-
Setting up ArgoCD from scratch
More details:
- https://github.com/shortlink-org/shortlink/tree/main/ops/argocd
- https://github.com/shortlink-org/shortlink/tree/main/ops/gitlab
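For reference, the plain "from scratch" install, per the official Argo CD getting-started guide, is only a couple of commands:

```shell
# Install Argo CD into its own namespace from the upstream manifests
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Retrieve the initial admin password and reach the API server locally
argocd admin initial-password -n argocd
kubectl port-forward svc/argocd-server -n argocd 8080:443
```

Everything beyond this (self-management, app-of-apps, repo layout) is what the linked `ops/argocd` directory adds on top.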
-
Can you get a job with no professional CI/CD experience but IAC,scripting,AWS,Kubernetes and ansible experience ?
As written above, CI/CD is not the hardest part; there are many ready-made tutorials for any combination of programming languages, engineering cultures, and deployment tools (GitLab, GitHub, etc.). On GitHub there are many ready-made CI/CD examples. You can study those solutions and build your own on top of them; for example, set up CI/CD to deploy and update a WordPress blog. I keep working on my pet project to maintain my skills, for example: https://github.com/shortlink-org/shortlink/tree/main/ops It includes:
- GitLab CI/CD: a fairly complex pipeline with tests, builds, and deployment
- GitHub Actions: basic linting
- Argo CD: GitOps, deployment of different applications to k8s
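The three pipeline pieces listed above boil down to a handful of commands; a sketch follows, where the registry, image name, and file paths are assumptions rather than the project's actual config:

```shell
# Hypothetical CI pipeline stages for a Go project with a GitOps deploy.
set -euo pipefail

# 1. Test: run static checks and the test suite
go vet ./...
go test ./...

# 2. Build: produce and publish a container image (registry is a placeholder)
docker build -t registry.example.com/shortlink:latest .
docker push registry.example.com/shortlink:latest

# 3. Deploy: with GitOps, "deploy" is a manifest bump that Argo CD then syncs
# (yq edits an assumed Helm values layout)
yq -i '.image.tag = "latest"' ops/Helm/shortlink/values.yaml
git commit -am "deploy: bump image tag" && git push
```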
-
Typical project structure
I think that for real projects it is better to separate argocd and our applications. For my pet project I use one mono repository:
- `ops/Helm` - description of all Helm charts
- `ops/argocd` - description of Argo CD applications that refer to the Helm charts

So when you update a Helm chart, it is immediately applied to the k8s cluster. You can see my example: https://github.com/shortlink-org/shortlink
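A minimal Argo CD Application pointing at a Helm chart in the same mono repo might look like this (names and paths are illustrative, based on the layout described above):

```shell
# Hypothetical Application tracking a Helm chart in the mono repo; once a
# chart change lands on main, automated sync applies it to the cluster.
cat <<'EOF' | kubectl apply -n argocd -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shortlink
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/shortlink-org/shortlink
    targetRevision: main
    path: ops/Helm/shortlink   # assumed chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: shortlink
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```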
-
[Go] - Saga pattern
Github: https://github.com/shortlink-org/shortlink/tree/main/pkg/saga
argocd-autopilot
- Setup Argocd-Autopilot from scratch
-
Is there a better way?
# get the nodes in the cluster
data "proxmox_virtual_environment_nodes" "proxmox_nodes" {}

# VM definition
resource "proxmox_virtual_environment_vm" "example" {
  count     = var.vm_count
  name      = count.index + 1 <= var.vm_masters ? "${var.vm_name}-master-${format("%02d", count.index + 1)}" : "${var.vm_name}-worker-${format("%02d", count.index - (var.vm_masters - 1))}"
  node_name = data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)]
  vm_id     = count.index + 1 <= var.vm_masters ? var.vm_proxmox_id + count.index : var.vm_proxmox_id + count.index + (var.vm_proxmox_id_offset - var.vm_masters)
  tags      = sort(concat(var.vm_proxmox_tags, [count.index + 1 <= var.vm_masters ? "master" : "worker"]))

  agent {
    enabled = true
    trim    = true
  }

  cpu {
    sockets = var.vm_sockets
    cores   = var.vm_cores
  }

  memory {
    dedicated = count.index + 1 <= var.vm_masters ? var.vm_mem_master : var.vm_mem_worker
  }

  disk {
    interface    = "scsi0"
    datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
    ssd          = true
    size         = count.index + 1 <= var.vm_masters ? var.vm_disk_size_master : var.vm_disk_size_worker
    iothread     = true
    discard      = "on"
  }

  network_device {
    model       = "virtio"
    mac_address = count.index + 1 <= var.vm_masters ? "${var.net_mac_address_base}AA:${format("%02d", count.index)}" : "${var.net_mac_address_base}BB:${format("%02d", count.index - var.vm_masters)}"
    # vlan_id = var.net_vlan_id # Not needed since using dedicated interface
    bridge = var.net_bridge
  }

  serial_device {}

  # clone information
  clone {
    vm_id        = var.clone_target_local ? var.clone_vm_id + (count.index % var.vm_masters) : var.clone_vm_id
    datastore_id = var.clone_target_local ? var.clone_target_datastore_local : var.clone_target_datastore_nfs
    node_name    = var.clone_target_local ? data.proxmox_virtual_environment_nodes.proxmox_nodes.names[count.index % length(data.proxmox_virtual_environment_nodes.proxmox_nodes.names)] : data.proxmox_virtual_environment_nodes.proxmox_nodes.names[0]
  }

  # had to add a wait for the agent to come alive
  provisioner "remote-exec" {
    inline = [
      "sudo cloud-init status --wait",
      "sudo systemctl start qemu-guest-agent",
    ]

    connection {
      type  = "ssh"
      agent = false
      port  = 22
      host  = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
      # note: despite the variable name, this must point at the private key
      private_key = file(var.public_key_path)
      user        = var.vm_username
    }
  }
}

# Create file for Ansible inventory
resource "local_file" "k3s_file" {
  content = templatefile(
    "${path.module}/templates/inventory_ansible.tftpl",
    {
      ansible_masters = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, 0, var.vm_masters) : join("", [vm.ipv4_addresses[1][0]])])
      ansible_nodes   = join("\n", [for vm in slice(proxmox_virtual_environment_vm.example, var.vm_masters, var.vm_count) : join("", [vm.ipv4_addresses[1][0]])])
    }
  )
  filename = "${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
}

# Connect to the Ansible control node and call the playbook to build the k3s cluster
resource "null_resource" "call-ansible" {
  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ${path.module}/../ansible-k3s/site.yml -i ${path.module}/../ansible-k3s/inventory/k3s-cluster/hosts.ini"
  }
  depends_on = [local_file.k3s_file]
}

# Copy the kubeconfig locally so we can issue commands against the cluster
resource "null_resource" "copy-kubeconfig" {
  provisioner "local-exec" {
    command = "scp -o 'StrictHostKeyChecking no' seb@${proxmox_virtual_environment_vm.example[0].ipv4_addresses[1][0]}:~/.kube/config ~/.kube/config"
  }
  depends_on = [null_resource.call-ansible]
}

# Bootstrap the cluster with argocd-autopilot
resource "null_resource" "argocd-autopilot" {
  provisioner "local-exec" {
    command = (
      var.first_install ?
      "argocd-autopilot repo bootstrap --repo ${var.github_repo} -t ${var.github_token} --app https://github.com/argoproj-labs/argocd-autopilot/manifests/ha" :
      "argocd-autopilot repo bootstrap --recover --app ${var.github_repo}.git/bootstrap/argo-cd"
    )
  }
  depends_on = [null_resource.copy-kubeconfig]
}
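Assuming the variables above are supplied in a tfvars file (the filename here is an assumption), the whole chain is driven by the standard Terraform workflow:

```shell
terraform init
terraform plan -var-file=cluster.tfvars
terraform apply -var-file=cluster.tfvars
# First run (first_install=true) bootstraps Argo CD from the HA manifests;
# later runs recover the existing installation from the Git repo.
```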
- Setting up ArgoCD from scratch
-
Declarative GitOps for...my ArgoCD itself?
I use Argo CD Autopilot which bootstraps Argo CD in a self-managing structure. If nothing else, copy the repo structure https://github.com/argoproj-labs/argocd-autopilot
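In practice the bootstrap is a single command (a sketch following the autopilot docs; the repo URL and token are placeholders):

```shell
# autopilot installs Argo CD and commits its own configuration to the repo,
# so from then on Argo CD manages itself via GitOps.
export GIT_TOKEN=ghp_your_token_here       # token with repo scope
export GIT_REPO=https://github.com/owner/gitops-repo
argocd-autopilot repo bootstrap
```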
-
How to Install and Upgrade Argo CD
We use the same approach internally and we fully open-sourced our solution at https://argocd-autopilot.readthedocs.io/en/stable/
- Argocd kustomize repository structure
-
Argo CD for Beginners 🐙
I recommend utilising Autopilot, a companion project that not only installs Argo CD but also commits all configuration to Git, so Argo CD can maintain itself using GitOps.
-
ArgoCD installation
Check https://argocd-autopilot.readthedocs.io/en/stable/. It is an installer that does exactly that: it installs ArgoCD, sets it up to manage itself, and offers a suggested bootstrap structure for your applications.
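Once bootstrapped, applications are added declaratively with the same CLI; a sketch, where the project name is illustrative and the demo-app path comes from the autopilot examples:

```shell
# Create a project (an Argo CD AppProject plus its ApplicationSet)
argocd-autopilot project create dev

# Add an application to that project; autopilot commits the manifests to Git
argocd-autopilot app create hello-world \
  --app github.com/argoproj-labs/argocd-autopilot/examples/demo-app/ \
  -p dev
```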
-
How to set up a repo of repos for argo gitops?
Check out ArgoCD autopilot if you're using kustomize rather than helm.
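Autopilot's generated repo is itself a kustomize-based "repo of repos"; roughly (abridged from the autopilot docs):

```shell
# Layout argocd-autopilot commits to the GitOps repo:
#
# .
# ├── bootstrap/   # Argo CD's own install manifests plus the root Application
# ├── projects/    # one file per project
# └── apps/        # one directory per app, with kustomize overlays per project
```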
- Suggestion for Gitlab pipelines with ArgoCD
What are some alternatives?
arlon - A kubernetes cluster lifecycle management and configuration tool
argocd-example-apps - Example Apps to Demonstrate Argo CD
local-gitops - An automated local cluster setup w/ tls, monitoring, ingress and DNS configuration.
argo-cd - Declarative Continuous Deployment for Kubernetes
gochk - Static Dependency Analysis Tool for Go Files
argocd-image-updater - Automatic container image update for Argo CD
helm-service-chart
Helm-Chart-Boilerplates - Example implementations of the universal helm charts
pgo - practices in go
website - 🌐 Source code for OpenGitOps website
redpanda-examples - A collection of examples to demonstrate how to interact with Redpanda from various clients and languages.
Homebrew - 🍺 The missing package manager for macOS (or Linux)