| | talos | kubectl-node-shell |
|---|---|---|
| Mentions | 56 | 4 |
| Stars | 6,963 | 1,533 |
| Growth | 3.3% | - |
| Activity | 9.8 | 3.9 |
| Latest commit | 3 days ago | 10 days ago |
| Language | Go | Shell |
| License | Mozilla Public License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
talos
-
When was the famous "sudo warning" introduced? Under what background? By whom?
I think this is underrated as a design flaw for how Linux tends to be used in 2024. At its most benign it's an anachronism and a potential source of complexity; at its worst it's a major source of security flaws and unintended behavior (e.g. Linux multitenancy was designed for two people in the same lab sharing a server, not for running completely untrusted workloads at huge scale).
I haven't had a chance to try it out, but this is why I think Talos Linux (https://www.talos.dev/) is a step in the right direction for Linux as it is used for cloud/servers. Though personally I think multitenancy, especially regarding containerized applications/cgroups, is a bigger problem, and I don't know if they're addressing that.
-
Kubernetes PODs with global IPv6
How to create a VM with the Talos image is beyond the scope of this article. Please refer to the official documentation for guidance. After bootstrapping the control plane, the next step is to deploy the Talos CCM along with a CNI plugin.
-
Kubernetes homelab - Learning by doing, Part 2: Installation
Maybe in the future I will try other systems, like Talos, which is designed for Kubernetes - secure, immutable, and minimal.
-
Ask HN: Who is using immutable OSes?
I've used Talos Linux[1] on production infrastructure, mainly for maintainability (because there is no one to maintain the infrastructure 24/7).
All the configuration comes from YAML, so I can manage and share it on Git, and spin up a new node (or cluster) ASAP.
For my own machine, I'm using NixOS as a daily driver. It's pretty great for spinning up a machine and environment ASAP. (I don't know why I keep saying `ASAP`, but time is money.)
However, the downside is that it requires strong knowledge of the Nix language. Sometimes the installer crashes.
Apart from that, it's pretty great.
---
[1]: https://www.talos.dev/
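The YAML-driven workflow described above roughly looks like the following. This is a hedged sketch: the cluster name and node IP are placeholders, and your endpoints will differ.

```
# Generate machine configs (controlplane.yaml, worker.yaml, talosconfig)
talosctl gen config my-cluster https://10.0.0.2:6443

# Push the config to a fresh node (insecure because the node has no certs yet)
talosctl apply-config --insecure --nodes 10.0.0.2 --file controlplane.yaml

# Bootstrap etcd on the first control-plane node, then fetch a kubeconfig
talosctl bootstrap --nodes 10.0.0.2 --endpoints 10.0.0.2
talosctl kubeconfig --nodes 10.0.0.2 --endpoints 10.0.0.2
```

Because the generated YAML files fully describe each node, they are exactly the artifacts one would commit to Git and reuse to spin up replacement nodes.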
-
Reclaim the Stack
Log aggregation: https://reclaim-the-stack.com/docs/platform-components/log-a...
Observability is on the whole better than what we had at Heroku since we now have direct access to realtime resource consumption of all infrastructure parts. We also have infinite log retention which would have been prohibitively expensive using Heroku logging addons (though we cap retention at 12 months for GDPR reasons).
> Who/What is going to be doing that on this new platform and how much does that cost?
My colleague and I (we created the tool together) manage infrastructure / OS upgrades and look into issues. So far we've been in production on this platform for 1.5 years. On average we spend perhaps 3 days per month on platform-related work (mostly software upgrades); the rest goes to full-stack application development.
The hypothesis for migrating to Kubernetes was that the available database operators would be robust enough to automate all common high availability / backup / disaster recovery issues. This has proven to be true, apart from the Redis operator which has been our only pain point from a software point of view so far. We are currently rolling out a replacement approach using our own Kubernetes templates instead of relying on an operator at all for Redis.
> Now you need to maintain k8s, postgresql, elasticsearch, redis, secret managements, OSs, storage... These are complex systems that require people understanding how they internally work
Thanks to Talos Linux (https://www.talos.dev/), maintaining K8s has been a non-issue.
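For illustration, routine OS and Kubernetes upgrades in Talos reduce to a couple of commands. A hedged sketch, with the node IP and version numbers as placeholders:

```
# Upgrade the Talos OS image on one node (rolls the node through a reboot)
talosctl upgrade --nodes 10.0.0.2 \
  --image ghcr.io/siderolabs/installer:v1.7.0

# Upgrade the Kubernetes components across the cluster
talosctl upgrade-k8s --to 1.30.0
```

Since the OS is immutable and image-based, an upgrade is an atomic image swap rather than in-place package management, which is much of why maintenance stays low-effort.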
-
My IRC client runs on Kubernetes
TIL about Talos (https://github.com/siderolabs/talos, via your github/onedr0p/cluster-template link). I'd been previously running k3s cluster on a mixture of x86 and ARM (RPi) nodes, and frankly it was a bit of a PiTA to maintain.
-
Tailscale Kubernetes Operator
About a month ago I set up a Kubernetes cluster using Talos to handle my container load at home.
-
Talos: Secure, immutable, and minimal Linux OS for running Kubernetes
I considered deploying Talos a few weeks ago, and I ran into this:
https://github.com/siderolabs/talos/issues/8367
Unless I’ve missed something, this isn’t a big deal in an AWS-style cloud, where extra storage volumes (EBS, etc.) have essentially no incremental cost. It may also be okay on bare metal if the machine is explicitly designed with a completely separate boot disk (this includes a Raspberry Pi using SD for boot and some other device for actual storage). But it seemed like a mostly showstopping issue for an average server specced with the intent to boot off a partition.
I suppose one could fudge it with NVMe namespaces if the hardware cooperates. (I’ve never personally tried setting up a nontrivial namespace setup.)
-
Tau: Open-source PaaS – A self-hosted Vercel / Netlify / Cloudflare alternative
I assume https://www.talos.dev/
Basically a small OS that will prop itself up and let you create, or adopt nodes into, a Kubernetes cluster. Seems to work well in my experience, and it's pretty easy to get set up.
-
Ask HN: Discuss ADHD and your use of medication
First, obligatory xkcd [0].
> This challenge/solution consumed my entire interest for that day. My dopamine hit was because I wouldn't have to do the BigBoringTask ever again.
Yep. Occasionally I have to stop and remind myself that all I'm trying to do is rename 10 files (for example), and by the time I remember the {ba,z}sh-ism for parameter substitution, I could have probably manually renamed them. I usually tell myself that it's not nearly as fun, though.
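As a hedged illustration of the kind of one-liner involved (the filenames and extensions here are made up), renaming every `*.txt` file to `*.md` with bash parameter substitution looks like:

```shell
# ${f%.txt} strips the shortest trailing match of ".txt" from $f
for f in ./*.txt; do
  mv -- "$f" "${f%.txt}.md"
done
```

The `--` guards against filenames that begin with a dash; `${f%pattern}` is the suffix-removal form of parameter expansion.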
This does occasionally present detrimental facets, though. I have a homelab, and like most people with one, its primary purpose is storing and serving media files (I promise I do other things too, but let's be honest – Plex is what people care about). I run apps on K3OS, which has been dead for quite some time. The NAS is in a VM under Proxmox, and I build images with Packer + Ansible. I've been wanting to shift from K3OS over to Talos [1] for some time, but I had convinced myself that it was only worthwhile if all of it was in IaC, starting from PXE. I got most of the way there, and then stopped due to work taking more of my life than I wanted. Unfortunately, around this time the NAS broke (a hardware failure, not a software issue), and I refused to bring it back until the entire homelab was up to my absurd self-imposed standards. Eventually I convinced myself this was a ridiculous punishment, replaced the dead hardware, and brought it back.
[0]: https://xkcd.com/1319/
[1]: https://www.talos.dev/
kubectl-node-shell
-
There are only 12 binaries in Talos Linux
Big fan of Talos, have used it in some homelab + cloud clusters over the years, currently powers all my self-hosting. The `talosctl` command is great, and any time you need to do node-level debugging, there's always something like node-shell [1].
[1] https://github.com/kvaps/kubectl-node-shell
-
How do we access node filesystem and utilities from a privileged Pod/container?
There is a great tool I use to access nodes in privileged mode called kubectl node-shell (https://github.com/kvaps/kubectl-node-shell). You just type `kubectl node-shell <node>` and that's it: it will start a privileged pod for you on that node and give you full access to it.
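Under the hood, the plugin schedules a privileged pod pinned to the target node and uses `nsenter` to join the host's namespaces via PID 1. A hedged sketch of that kind of pod spec (the pod name, node name, and image are illustrative, not the plugin's exact defaults):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-shell-debug        # illustrative name
spec:
  nodeName: my-node             # hypothetical target node
  hostPID: true                 # see the host's process table
  hostNetwork: true             # share the host's network stack
  restartPolicy: Never
  containers:
  - name: shell
    image: alpine
    command: ["nsenter", "--target", "1",
              "--mount", "--uts", "--ipc", "--net", "--pid",
              "--", "sh"]
    securityContext:
      privileged: true          # required for nsenter into PID 1
```

Joining PID 1's mount namespace is what makes the host filesystem and utilities visible, rather than the container's own.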
-
ImagePullPolicy: IfNotPresent - (image doesn’t exist in repo) - Is it possible to pull the micro service image from an EKS node and then push to repo?
If you can ssh into the nodes you can definitely `docker save` the image (note: `docker save` is for images; `docker export` is for containers) and copy it somewhere. If you can't ssh, you may be able to run something like this: https://github.com/kvaps/kubectl-node-shell
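A hedged sketch of that workflow (the registry hostnames and image tag here are made up):

```
# On the node: write the image to a tarball
docker save registry.example.com/app:1.0 -o app.tar

# Copy app.tar off the node (scp, etc.), then on your workstation:
docker load -i app.tar
docker tag registry.example.com/app:1.0 my-registry.example.com/app:1.0
docker push my-registry.example.com/app:1.0
```

This only works when the node's container runtime is Docker; on containerd-based nodes (common on recent EKS AMIs) the equivalent would go through `ctr` or `crictl` instead.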
-
Talos Linux
The amount and variety of machine images shipped is honestly impressive:
https://github.com/siderolabs/talos/releases/tag/v1.0.6
First time I have seen a project publish vmware-arm64.ova for ESXi arm edition.
Is it still possible to exec into a shell on a cluster node via something like https://github.com/kvaps/kubectl-node-shell ?
What are some alternatives?
k3sup - bootstrap K3s over SSH in < 60s 🚀
kubectl-build - Build dockerfiles directly in your Kubernetes cluster.
kubespray - Deploy a Production Ready Kubernetes Cluster
kubespy - pod debugging tool for kubernetes clusters with docker runtimes
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
konfig - konfig helps to merge, split or import kubeconfig files
rke2
kubectl-sudo - Run kubernetes commands with the security privileges of another user
Flatcar - Flatcar project repository for issue tracking, project documentation, etc.
go-containerregistry - Go library and CLIs for working with container registries
ansible-role-k3s - Ansible role for deploying k3s cluster
kubectl-plugin-ssh-jump - A kubectl plugin to access nodes or remote services using a SSH jump Pod