hetzner-k3s vs talos

| | hetzner-k3s | talos |
|---|---|---|
| Mentions | 47 | 57 |
| Stars | 2,158 | 7,453 |
| Growth | 8.8% | 4.3% |
| Activity | 9.3 | 9.8 |
| Latest Commit | 5 days ago | 4 days ago |
| Language | Crystal | Go |
| License | MIT License | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hetzner-k3s
-
Hetzner-k3s v2.2.0 has been released
Check it out at [https://github.com/vitobotta/hetzner-k3s](https://github.com/vitobotta/hetzner-k3s) - it's the easiest and fastest way to set up Kubernetes clusters in Hetzner Cloud!
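For readers who haven't seen the tool: cluster creation is driven by a single YAML file plus one command. Below is a minimal sketch following the v1-style config schema; key names have changed between releases, so check the project README for your version.

```yaml
# cluster_config.yaml -- minimal illustrative hetzner-k3s config
# (v1-style schema; key names vary between releases, check the README)
hetzner_token: <your Hetzner Cloud API token>
cluster_name: demo
kubeconfig_path: "./kubeconfig"
k3s_version: v1.26.4+k3s1
public_ssh_key_path: "~/.ssh/id_ed25519.pub"
private_ssh_key_path: "~/.ssh/id_ed25519"
masters_pool:
  instance_type: cpx21
  instance_count: 3
  location: nbg1
worker_node_pools:
  - name: small
    instance_type: cpx21
    instance_count: 2
    location: hel1
```

With that file in place, `hetzner-k3s create --config cluster_config.yaml` provisions the instances and leaves you with a working kubeconfig.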
-
AWS and Azure Are at Least 4x–10x More Expensive Than Hetzner
Also highly recommended https://github.com/vitobotta/hetzner-k3s
-
New record: I created a 300-node Kubernetes cluster in 11 minutes
This is with a new, not-yet-released version of my tool https://github.com/vitobotta/hetzner-k3s.
It uses k3s as the Kubernetes flavor and Hetzner Cloud as the provider. For this test I used extremely high concurrency, so the tool hung twice during the process because I was hitting the Hetzner API too hard, and I had to interrupt it and continue.
Excluding the time it paused/hung due to the API, I calculated around 11 minutes total for the cluster creation. This includes:
- creating all the resources (cloud instances, firewall, load balancer for the Kubernetes API)
-
Best way to deploy K8s to single VPS for dev environment
Try my project Hetzner-K3s, it’s by far the easiest and quickest way to create and manage clusters in Hetzner Cloud. https://github.com/vitobotta/hetzner-k3s
-
(For Kubernetes users mainly) I need your help/advice with a business idea
Hi all, if you already use Kubernetes in any capacity or are interested in it, would you mind spending a few minutes voting in a quick poll and hopefully answering a few questions? I would appreciate your help a ton because it would help me make the right decision and hopefully avoid a costly waste of time.
Everything is in a Github discussion at https://github.com/vitobotta/hetzner-k3s/discussions/296. A huge thank you in advance if you can help with this!
-
V1.1.5 of Hetzner-k3s (my Kubernetes installer for Hetzner Cloud) is out
The new release introduces more customisation options for cluster/service CIDRs, cluster DNS, updated manifests for CSI/CCM/autoscaler, and a couple of improvements for creating large clusters. Check it out at https://github.com/vitobotta/hetzner-k3s
If you are already familiar with this tool, I'd love to know how it's worked for you so far. :)
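As a rough sketch of what those new options look like in the config file (key names as I understand the v1.1.x schema; verify against the README before relying on them):

```yaml
# Network customisation keys introduced around v1.1.5 (assumed names;
# confirm against the hetzner-k3s README for your version)
cluster_cidr: 10.244.0.0/16   # pod network
service_cidr: 10.43.0.0/16    # service network
cluster_dns: 10.43.0.10       # cluster DNS IP, must sit inside service_cidr
```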
-
K3s – Lightweight Kubernetes
https://github.com/vitobotta/hetzner-k3s
Kubernetes on Hetzner Cloud, the easiest way
-
K3s on hetzner virtual hosts
There’s this cool GitHub project that helps automate a lot of the process for K3s on Hetzner: https://github.com/vitobotta/hetzner-k3s
-
Cost savings for self-managed K8s?
If you are willing to leave AWS in order to save a lot of money, you have an option in https://github.com/vitobotta/hetzner-k3s
-
hetzner-k3s v1.1.2 is out with support for the new, powerful but cheap ARM instances! 🎉
talos
-
Ask HN: Kubernetes bare metal learning material
Might not be the answer you were looking for but hear me out: the biggest impact on my Kubernetes knowledge has been starting a homelab on Talos Linux.
I've used this as a sandbox/playspace/proving ground for Kubernetes concepts to satisfy my own curiosities. The benefit of this space is that you can make mistakes without affecting any real data, and you can blow away your entire config and start from scratch if you need to. I have already seen benefits to this hobby in my career.
My entrypoint was the Talos getting started guide: https://www.talos.dev/
And following the community at https://www.reddit.com/r/selfhosted/
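For anyone who wants to reproduce that setup, the getting-started flow condenses to a handful of talosctl commands (the node IP below is a placeholder for a machine booted from the Talos ISO):

```sh
# Generate cluster secrets plus controlplane/worker machine configs
talosctl gen config my-homelab https://192.168.1.10:6443

# Push the config to the node while it sits in maintenance mode
talosctl apply-config --insecure --nodes 192.168.1.10 --file controlplane.yaml

# Bootstrap etcd on the first control-plane node, then grab a kubeconfig
talosctl bootstrap --nodes 192.168.1.10 --endpoints 192.168.1.10 --talosconfig ./talosconfig
talosctl kubeconfig --nodes 192.168.1.10 --endpoints 192.168.1.10 --talosconfig ./talosconfig
```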
-
When was the famous "sudo warning" introduced? Under what background? By whom?
I think this is underrated as a design flaw for how Linux tends to be used in 2024. At its most benign it's an anachronism and a potential source of complexity; at its worst it's a major source of security flaws and unintended behavior (e.g. Linux multitenancy was designed for two people in the same lab sharing a server, not for running completely untrusted workloads at huge scale).
I haven't had a chance to try it out, but this is why I think Talos Linux (https://www.talos.dev/) is a step in the right direction for Linux as it is used for cloud/servers. Though personally I think multitenancy, esp. regarding containerized applications/cgroups, is a bigger problem, and I don't know if they're addressing that.
-
Kubernetes PODs with global IPv6
How to create a VM with the Talos image is beyond the scope of this article. Please refer to the official documentation for guidance. After bootstrapping the control plane, the next step is to deploy the Talos CCM along with a CNI plugin.
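A hedged sketch of what that step involves: the kubelet must be told to expect an external cloud provider before the CCM can initialise nodes. The machine-config keys below are real Talos fields; the install commands and manifest locations are assumptions, so consult the talos-cloud-controller-manager and CNI docs for the actual artifacts.

```yaml
# ccm-patch.yaml -- machine-config patch so the kubelet waits for an
# external cloud provider (required before the Talos CCM takes over)
machine:
  kubelet:
    extraArgs:
      cloud-provider: external
```

```sh
# Apply the patch, then install a CNI (Cilium as an example) and the CCM.
# Manifest/chart locations below are assumptions -- check the respective docs.
talosctl patch machineconfig --nodes 192.168.1.10 --patch @ccm-patch.yaml
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system
kubectl apply -f talos-ccm-manifest.yaml   # from siderolabs/talos-cloud-controller-manager
```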
-
Kubernetes homelab - Learning by doing, Part 2: Installation
Maybe in the future I will try other systems, like Talos, which is designed for Kubernetes - secure, immutable, and minimal.
-
Ask HN: Who is using immutable OSes?
I've used Talos Linux[1] on production infrastructure to keep it maintainable (because there is no one to maintain the infrastructure 24/7).
All the configuration is made from and lives in YAML, so I can manage and share it on Git, and I'm able to spin up a new node (or cluster) ASAP.
For my own machine, I'm using NixOS as a daily driver. It's pretty great for spinning up a machine and environment ASAP. (I don't know why I keep saying `ASAP`, but time is money.)
The downside, however, is that it requires strong knowledge of the Nix language, and sometimes the installer crashes.
Other than that, it's pretty great.
---
[1]: https://www.talos.dev/
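To make the "everything is YAML" point concrete, here is a small machine-config patch of the kind that can live in Git (the field names are real Talos config keys; the hostname and IPs are placeholders):

```yaml
# node-01-patch.yaml -- version-controlled Talos machine-config patch
machine:
  network:
    hostname: node-01
cluster:
  # Useful on tiny clusters: allow workloads on control-plane nodes
  allowSchedulingOnControlPlanes: true
```

Applied at provisioning time with something like `talosctl apply-config --nodes 192.168.1.11 --file controlplane.yaml --config-patch @node-01-patch.yaml`.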
-
Reclaim the Stack
Log aggregation: https://reclaim-the-stack.com/docs/platform-components/log-a...
Observability is on the whole better than what we had at Heroku since we now have direct access to realtime resource consumption of all infrastructure parts. We also have infinite log retention which would have been prohibitively expensive using Heroku logging addons (though we cap retention at 12 months for GDPR reasons).
> Who/What is going to be doing that on this new platform and how much does that cost?
Me and my colleague who created the tool together manage infrastructure / OS upgrades and look into issues etc. So far we've been in production 1.5 years on this platform. On average we spent perhaps 3 days per month doing platform related work (mostly software upgrades). The rest we spend on full stack application development.
The hypothesis for migrating to Kubernetes was that the available database operators would be robust enough to automate all common high availability / backup / disaster recovery issues. This has proven to be true, apart from the Redis operator which has been our only pain point from a software point of view so far. We are currently rolling out a replacement approach using our own Kubernetes templates instead of relying on an operator at all for Redis.
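For context on the Redis point: dropping the operator means shipping plain manifests yourself. A minimal single-replica sketch of that approach (not the team's actual templates) might look like this:

```yaml
# Minimal single-instance Redis StatefulSet -- a sketch of the
# "plain templates instead of an operator" approach, not the actual
# manifests used here. Assumes a matching headless Service named "redis".
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          args: ["--appendonly", "yes"]   # persist data to the PVC below
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```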
> Now you need to maintain k8s, postgresql, elasticsearch, redis, secret managements, OSs, storage... These are complex systems that require people understanding how they internally work
Thanks to Talos Linux (https://www.talos.dev/), maintaining K8s has been a non-issue.
-
My IRC client runs on Kubernetes
TIL about Talos (https://github.com/siderolabs/talos, via your github/onedr0p/cluster-template link). I'd previously been running a k3s cluster on a mixture of x86 and ARM (RPi) nodes, and frankly it was a bit of a PiTA to maintain.
-
Tailscale Kubernetes Operator
About a month ago I set up a Kubernetes cluster using Talos to handle my container load at home.
-
Talos: Secure, immutable, and minimal Linux OS for running Kubernetes
I considered deploying Talos a few weeks ago, and I ran into this:
https://github.com/siderolabs/talos/issues/8367
Unless I’ve missed something, this isn’t a big deal in an AWS-style cloud where extra storage volumes (EBS, etc) have essentially no incremental cost, and maybe it’s okay on bare metal if the bare metal is explicitly designed with a completely separate boot disk (this includes Raspberry Pi using SD for boot and some other device for actual storage), but it seemed like a mostly showstopping issue for an average server that was specced with the intent to boot off a partition.
I suppose one could fudge it with NVMe namespaces if the hardware cooperates. (I’ve never personally tried setting up a nontrivial namespace setup.)
-
Tau: Open-source PaaS – A self-hosted Vercel / Netlify / Cloudflare alternative
I assume https://www.talos.dev/
Basically a small OS that will prop itself up and allow you to create/adopt into a Kubernetes cluster. Seems to work well in my experience, and it's pretty easy to get set up.
What are some alternatives?
k3d - Little helper to run CNCF's k3s in Docker
rke2
kubespray - Deploy a Production Ready Kubernetes Cluster
terraform-hcloud-kube-hetzner - Optimized and Maintenance-free Kubernetes on Hetzner Cloud in one command!
Flatcar - Flatcar project repository for issue tracking, project documentation, etc.
ansible-role-k3s - Ansible role for deploying k3s cluster
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
k-andy - Low cost Kubernetes stack for startups, prototypes, and playgrounds on Hetzner Cloud.
k3sup - bootstrap K3s over SSH in < 60s 🚀
kairos - The immutable Linux meta-distribution for edge Kubernetes.
