| | k3s-ansible | velero |
|---|---|---|
| Mentions | 17 | 42 |
| Stars | 2,071 | 8,235 |
| Growth | 2.7% | 1.0% |
| Activity | 8.8 | 9.7 |
| Latest commit | 7 days ago | 7 days ago |
| Language | Jinja | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k3s-ansible
-
How can I route some pods through a Wireguard pod?
I deployed k3s to a test node using Techno Tim's k3s-ansible playbook.
-
MetalLB Routing on Hetzner Bare Metal
Remind myself about how Ansible works (I've forked this: https://github.com/techno-tim/k3s-ansible/tree/master and added a role to automatically set up my Hetzner server and install CoreOS, as well as starting the cluster with Flannel's wireguard-native backend, and a few other minor changes).
-
Fastest way to set up a k8s environment?
I think this one is updated https://docs.technotim.live/posts/k3s-etcd-ansible/
-
(Longhorn/K3s) Failed cluster, made new cluster, are PVs salvageable?
I recently broke my cluster somehow (see this thread) so I decided to start fresh because I can't get K3s up and running again. I now have 5 nodes (3 master, 2 worker) with etcd configured using the K3s-ansible guide found here. Is it possible to recover the PVs from my failed cluster? I still have SSH access to each of the machines that participated in the cluster. It would save me a lot of rebuilding time if I could extract them (even from an older backup, if Longhorn stores them in an accessible format) and apply to the new cluster.
-
How does one cascade reverse proxies together?
Like u/darkstar_01 mentioned, I'd start with k3s since it has a lot of these things built in and is really lightweight. To further that suggestion, I'd recommend u/Techno-Tim's k3s-ansible playbook; it's dark magic. https://github.com/techno-tim/k3s-ansible
-
postgresql cluster , two nodes, docker swarm
To be honest, I'd just switch to Kubernetes for something like this. TechnoTim has some pretty easy-to-digest guides on how to get a basic cluster set up.
-
Networking in K3S HA Cluster on Proxmox
Take a look at this: https://github.com/techno-tim/k3s-ansible https://www.youtube.com/watch?v=CbkEWcUZ7zM&ab_channel=TechnoTim
-
Kubernetes (k3s) Tutoring/Instructor
I'm having a hard time understanding how to set up the network/cluster for HA. I've basically been following along with this guide: https://docs.technotim.live/posts/k3s-etcd-ansible/ which uses MetalLB + kube-vip. I have the cluster running and have the MetalLB IP range set for a block of internal LAN IPs / Layer 2 (unsure if this is correct for what I'm looking to do). All of this seems to be working internally. My confusion is how to get WAN traffic in.
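For reference, a Layer 2 setup like the one described above boils down to two small resources. This is a hedged sketch assuming MetalLB v0.13+ with CRD-based configuration; the pool name and address range are placeholders for the poster's internal LAN block:

```yaml
# Hypothetical example: a pool of internal LAN IPs for MetalLB to hand out
# to LoadBalancer Services (range is a placeholder)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.220
---
# Announce addresses from that pool via Layer 2 (ARP/NDP) on the LAN
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```

With Layer 2 mode the addresses are only reachable on the local network, which matches the poster's observation: getting WAN traffic in requires a separate port-forward or edge proxy in front of one of these LAN IPs.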
-
LXC Containers... but why?
I’m Nomsplease on GitHub, and I’m currently running the latest Proxmox 7.x with the opt-in kernel 6.X. https://github.com/techno-tim/k3s-ansible
-
Finally finished my homelab diagram!
Proxmox is host to a bunch of VMs, including a K3S cluster that is set up through an Ansible playbook. There are 3 masters and 4 workers. I followed TechnoTim's guide here to get this cracking and honestly, I've only scratched the surface on Kubernetes. I set up a bash alias on the first IP in the K3S stack to run the Ansible playbook with one simple command, so it's simple to spin up again should I shut off this server. I then set up Rancher to maintain and utilize the Kubernetes cluster, with a Traefik2 ingress, MetalLB, Helm, and Longhorn for distributed storage. Links here for tutorials by TechnoTim: Longhorn, Traefik-K3S ingress with cert-manager, and Rancher setup.

The Proxmox server is also home to two separate PBX solutions; they're installed and have access to my SIP trunk provider (voip.ms, here's my referral link if anyone's interested). I've added 15 bucks to the account and have it as a work line should I ever get my technical consulting business off the ground. Right now the PBXs can be spun up, but the IP phones are sitting in a closet. It's a cool project to get going, though, even if I don't need a landline, let alone a full PBX.

From there I have a bunch of small Ubuntu VMs that I created through templates with cloud-init drives to make it a cinch to spin up another VM (cloud-init tutorial). I just started to get into Terraform (IaC, infrastructure as code) to spin up VMs in much the same way you would with Ansible (project here through The Digital Life, YT channel). LibreNMS is another thing that I just spun up the other day. No real tutorial to link because SNMP is dead simple. I'm sure I could dockerize some of these projects rather than spinning up a whole new Ubuntu VM, but sometimes it's nice to just have a clean start and then combine Compose files into stacks, though I'm sure some of the VMs could be set up to run more than one service per VM.
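The "one simple command" workflow described above maps onto the upstream repo's usage; a sketch assuming the default layout of techno-tim/k3s-ansible (a fork's inventory path and playbook name may differ):

```shell
# Clone the playbook and copy the sample inventory
# (paths follow the upstream repo layout; a fork may differ)
git clone https://github.com/techno-tim/k3s-ansible.git
cd k3s-ansible
cp -R inventory/sample inventory/my-cluster

# Edit inventory/my-cluster/hosts.ini to list the master and worker IPs,
# then bring the whole cluster up in one shot:
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```

The bash alias the poster mentions would simply wrap that last `ansible-playbook` command.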
velero
-
What is the proper, kubernetes native way of working with multiple clusters for DR, HA?
OpenShift, last I looked, used Velero under the covers for this functionality, and it works fine in standard Kubernetes. Most, if not all, of what OpenShift does is open source.
-
Is there a way to clone an existing Azure Kubernetes Cluster?
Velero
-
What are the best practices for backing up k8s-related resources in RKE2 clusters running on vSphere?
Velero is also a popular third-party solution for k8s backup that you might check out.
-
Ask r/kubernetes: What are you working on this week?
Logical backups using pre and post hooks thanks to this suggestion https://github.com/vmware-tanzu/velero/issues/2763 working way better than kanister blueprints.
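The pre/post hooks referenced above are driven by pod annotations. A hedged sketch for a Postgres pod, using Velero's documented backup-hook annotation keys; the container name, database, and dump path are placeholders:

```yaml
# Hypothetical example: run pg_dump before Velero backs up the pod's
# volumes, and remove the dump file afterwards. Annotation keys follow
# Velero's backup-hook convention; names and paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  annotations:
    pre.hook.backup.velero.io/container: postgres
    pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "pg_dump -U postgres mydb > /backup/dump.sql"]'
    post.hook.backup.velero.io/container: postgres
    post.hook.backup.velero.io/command: '["/bin/bash", "-c", "rm -f /backup/dump.sql"]'
spec:
  containers:
    - name: postgres
      image: postgres:16
```

Because the dump lands on a volume that Velero backs up, the backup captures a logical snapshot rather than a possibly-inconsistent copy of the live data files.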
-
Tool for dumping manifests from your Kubernetes clusters
While not discounting OP or the work in this repo (seems like a fun k8s/go project), folks might check out Velero for this purpose if they're looking to rely on this kind of export in prod: https://github.com/vmware-tanzu/velero
-
Kubernetes Backup & Restore - Recommended options?
-
Hyper-v backup for Kubernetes cluster
Hyper-V itself does not directly support backing up container-based platforms like Kubernetes clusters. To back up a Kubernetes cluster, you would typically use tools that interact with the Kubernetes API to capture the necessary data and metadata for backup purposes. Some of those tools are Velero https://velero.io/ (formerly Heptio Ark), Kasten K10, and Stash.
-
Kubernetes postgres backups
For Kubernetes-land, https://velero.io/ is awesome, but I haven't used it for online-database backups yet. If you're exploring, I'd check out Velero; if you just need something to work reliably, I'd check out Percona.
-
EKS Etcd Backup
If you're looking for a backup solution for managed Kubernetes, check out Velero. It is great for non-managed Kubernetes as well (but there you've got other options, like etcd backups).
-
(Longhorn/K3s) Failed cluster, made new cluster, are PVs salvageable?
You can also leverage https://velero.io/ to backup both cluster state and pvc state to s3
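At the CLI level, the backup-and-restore flow mentioned here is a couple of commands. A sketch assuming Velero is already installed in both clusters and pointed at the same S3-compatible backup location; the backup name is a placeholder:

```shell
# On the old cluster: back up cluster state (and, depending on the
# configured plugins, volume data) to the object store
velero backup create cluster-backup

# On the new cluster, configured against the same backup location:
velero restore create --from-backup cluster-backup

# Inspect backup status at any point
velero backup get
```

This is what makes the failed-cluster scenario in the earlier thread recoverable: the backups live in S3, outside the cluster that died.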
What are some alternatives?
vagrant-k3s-HA-cluster - This repository contains the Vagrantfile and scripts to easily configure a Highly Available Kubernetes (K3s) cluster.
rook - Storage Orchestration for Kubernetes
ansible-role-k3s - Ansible role for installing k3s as either a standalone server or HA cluster.
k8s-object-dumper - Kubernetes object dumper for use as a pre backup command in K8up.
kadalu - A lightweight Persistent storage solution for Kubernetes / OpenShift / Nomad using GlusterFS in background. More information at https://kadalu.tech
prometheus - The Prometheus monitoring system and time series database.
etcd-cloud-operator - Deploying and managing production-grade etcd clusters on cloud providers: failure recovery, disaster recovery, backups and resizing.
istio - Connect, secure, control, and observe services.
agorakube - Agorakube is a Certified Kubernetes Distribution built on top of the CNCF ecosystem that provides an enterprise-grade solution, following best practices, for managing a conformant Kubernetes cluster on-premise and on public cloud providers.
Scaleway-cli - Command Line Interface for Scaleway
k3s-aws-terraform-cluster - Deploy a highly available K3s cluster on Amazon AWS
Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.