ansible-role-postgresql-ha vs k3s-ansible
| | ansible-role-postgresql-ha | k3s-ansible |
|---|---|---|
| Mentions | 1 | 17 |
| Stars | 17 | 1,550 |
| Growth | - | 8.0% |
| Activity | 10.0 | 0.0 |
| Latest commit | 10 months ago | 6 days ago |
| Language | Jinja | Jinja |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Posts with mentions or reviews of ansible-role-postgresql-ha
- Any self-hostable Postgres clustering, replication, and failover system?
For example: https://github.com/fidanf/ansible-role-postgresql-ha
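A minimal sketch of how a role like that might be wired up. Everything below — the inventory group, the role path, and the variable names — is an illustrative assumption, not taken from the role's actual README, which should be checked for the real interface:

```yaml
# playbook.yml -- hypothetical wiring for fidanf/ansible-role-postgresql-ha.
# Group name, role path, and all variable names here are assumptions for
# illustration; consult the role's README for its real interface.
- hosts: postgres_cluster            # assumed group: one primary plus standbys
  become: true
  roles:
    - role: ansible-role-postgresql-ha
  vars:
    postgresql_version: 14           # assumed variable name
    # assumed convention: first host in the group acts as the primary
    cluster_primary: "{{ groups['postgres_cluster'][0] }}"
```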
Posts with mentions or reviews of k3s-ansible
- Fastest way to set up a k8s environment?
I think this one is kept up to date: https://docs.technotim.live/posts/k3s-etcd-ansible/
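For context, both k3s-ansible flavours are driven by a small inventory split into master and node groups; a sketch of a 3-master/2-worker HA layout (group names follow the sample inventory shipped with the repos, but the exact layout may differ between versions):

```yaml
# inventory.yml -- minimal sketch of an HA layout for k3s-ansible.
# Group names ("k3s_cluster", "master", "node") follow the repos' sample
# inventory; verify against the version you actually clone.
k3s_cluster:
  children:
    master:
      hosts:
        10.0.0.11:
        10.0.0.12:
        10.0.0.13:
    node:
      hosts:
        10.0.0.21:
        10.0.0.22:
```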
-
(Longhorn/K3s) Failed cluster, made new cluster, are PVs salvageable?
I recently broke my cluster somehow (see this thread) so I decided to start fresh because I can't get K3s up and running again. I now have 5 nodes (3 master, 2 worker) with etcd configured using the K3s-ansible guide found here. Is it possible to recover the PVs from my failed cluster? I still have SSH access to each of the machines that participated in the cluster. It would save me a lot of rebuilding time if I could extract them (even from an older backup, if Longhorn stores them in an accessible format) and apply to the new cluster.
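Since SSH access remains, note that Longhorn keeps replica data under /var/lib/longhorn/ on each node by default. One commonly used recovery pattern, once Longhorn is reinstalled and has re-detected a volume, is to point a new PersistentVolume at the existing volume name through the Longhorn CSI driver. A hedged sketch — the volume name, size, and storage class are placeholders that must match the old volume:

```yaml
# pv-recovered.yml -- sketch of re-attaching an existing Longhorn volume.
# "pvc-0123-example" is a placeholder for the old volume's name as shown in
# the Longhorn UI; capacity and storageClassName must match the original.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recovered-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    volumeHandle: pvc-0123-example   # placeholder: existing Longhorn volume name
    fsType: ext4
```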
- Kubernetes (k3s) Tutoring/Instructor
I'm having a hard time understanding how to set up the network/cluster for HA. I've basically been following along with this guide: https://docs.technotim.live/posts/k3s-etcd-ansible/, which uses MetalLB + kube-vip. I have the cluster running and have the MetalLB IP range set to a block of internal LAN IPs in Layer 2 mode (unsure if this is correct for what I'm looking to do). All of this seems to be working internally. My confusion is how to get WAN traffic in.
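For reference, with the CRD-based MetalLB configuration that LAN pool boils down to two small resources; a sketch, where the address range is a placeholder and MetalLB v0.13+ (metallb.io/v1beta1 CRDs rather than the older ConfigMap) is assumed:

```yaml
# metallb-pool.yml -- example Layer 2 address pool. The 192.168.1.240-250
# range is a placeholder for whatever free block of LAN addresses you reserved.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```

Getting WAN traffic in is then usually a plain router port-forward: forward 80/443 on the WAN interface to the LoadBalancer IP that MetalLB assigned to the ingress controller.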
- LXC Containers... but why?
I'm Nomsplease on GitHub, and I'm currently running the latest Proxmox 7.x with the opt-in 6.x kernel. https://github.com/techno-tim/k3s-ansible
Next, in my efforts to learn Kubernetes, I wanted to stand up a cluster based on this YouTube video, using Ansible. It works great when my guests are full VMs, but fails when using LXC containers. I don't remember the specific error right now, but there was recently another thread that caught my eye on a similar topic.
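For reference, the failure inside LXC guests usually comes down to container confinement; community guides typically add the settings below to /etc/pve/lxc/<ctid>.conf on the Proxmox host. A hedged sketch expressed as an Ansible task — the inventory group and container ID are placeholders, and the exact set of options varies by Proxmox and kernel version:

```yaml
# lxc-k3s-prep.yml -- sketch of the Proxmox container tweaks commonly cited
# for running k3s inside LXC. Runs against the Proxmox host, not the guest;
# "proxmox" group and container ID 101 are placeholders.
- hosts: proxmox
  become: true
  tasks:
    - name: Loosen LXC confinement so k3s can run in the container
      ansible.builtin.blockinfile:
        path: /etc/pve/lxc/101.conf
        block: |
          lxc.apparmor.profile: unconfined
          lxc.cgroup2.devices.allow: a
          lxc.cap.drop:
          lxc.mount.auto: "proc:rw sys:rw"
```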
- How to install Kubernetes on Raspberry Pi
- Any good how-to for setting up your own full cluster?
- Using Terraform to Deploy Templates to VMs in Proxmox
For the Ansible part, have a look at techno-tim/k3s-ansible for a basic k3s setup (not the full thing) with MetalLB. There's also the original one (based on Traefik): k3s-io/k3s-ansible. Modify hosts and vars according to your needs: run your Terraform plan, then run the Ansible playbook once you've modified the vars. They're good starters, but for a full-blown k8s Ansible recipe, I'll be able to help you when I come back from holidays :) (16th of August).
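"Modify hosts and vars" boils down to editing the sample inventory plus a group_vars file; a sketch of the handful of values that typically need changing (variable names follow techno-tim/k3s-ansible's sample group_vars at the time of writing, so double-check against the copy you clone):

```yaml
# group_vars/all.yml -- excerpt-style sketch; all values are placeholders.
k3s_version: v1.25.4+k3s1          # pin whatever release the repo currently supports
system_timezone: "Etc/UTC"
apiserver_endpoint: 192.168.1.100  # kube-vip VIP shared by the masters
metal_lb_ip_range: 192.168.1.240-192.168.1.250  # pool handed to MetalLB
```

Once Terraform has brought the VMs up, the playbook run is roughly `ansible-playbook site.yml -i inventory/my-cluster/hosts.ini` (path names per the repo's README; verify against your clone).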
What are some alternatives?
vagrant-k3s-HA-cluster - This repository contains the Vagrantfile and scripts to easily configure a Highly Available Kubernetes (K3s) cluster.
ansible-role-k3s - Ansible role for installing k3s as either a standalone server or HA cluster.
kadalu - A lightweight persistent storage solution for Kubernetes / OpenShift / Nomad using GlusterFS in the background. More information at https://kadalu.tech
agorakube - Agorakube is a Certified Kubernetes Distribution built on top of the CNCF ecosystem that provides an enterprise-grade solution, following best practices, to manage a conformant Kubernetes cluster on-premise and on public cloud providers.
etcd-cloud-operator - Deploying and managing production-grade etcd clusters on cloud providers: failure recovery, disaster recovery, backups and resizing.
k3s-aws-terraform-cluster - Deploy a highly available K3s cluster on Amazon AWS
k3s-ansible
k3s-gitops - My home Kubernetes (k3s) cluster managed by GitOps (Flux)
vagrant-k8s - Creating a multi-node Kubernetes cluster on local machine using VirtualBox and Vagrant.
kubernetes-the-hard-way - Bootstrap Kubernetes the hard way on Google Cloud Platform. No scripts.
kURL - Production-grade, airgapped Kubernetes installer combining upstream k8s with overlays and popular components
kubespray - Deploy a Production Ready Kubernetes Cluster