truetool
DISCONTINUED
k3s
| | truetool | k3s |
|---|---|---|
| Mentions | 11 | 250 |
| Stars | 148 | 22,546 |
| Growth | - | 1.9% |
| Activity | 9.0 | 9.1 |
| Latest commit | 4 days ago | 6 days ago |
| Language | Shell | Go |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
truetool
-
What happened to truetool?
https://github.com/truecharts/truetool this is gone
-
[SCALE] How I run `helm` and `kubectl` from the command line
```shell
cd /mnt/pool/scripts
git clone https://github.com/truecharts/truetool.git
cd truetool
chmod +x truetool.sh
# I also use this (the script lives in the cloned truetool/ directory):
alias truetool="/mnt/pool/scripts/truetool/truetool.sh"
```
k3s
-
Can any Hetzner user please explain their workflow on Hetzner?
I use Hetzner, Contabo, Time4VPS and other platforms in pretty much the same way (as IaaS VPS providers on top of which I run software, as opposed to SaaS/PaaS), but here's a quick glance at how I do things.
> deploy from source repo? Terraform?
Personally, I use Gitea for my repos and Drone CI for CI/CD.
Gitea: https://gitea.io/en-us/
Drone CI: https://www.drone.io/
Some might prefer Woodpecker due to licensing: https://woodpecker-ci.org/ but honestly most solutions out there are okay, even Jenkins.
Then I have some sort of a container cluster on the servers, so I can easily deploy things: I still like Docker Swarm (projects like CapRover might be nice to look at as well), though many might enjoy the likes of K3s or K0s more (lightweight Kubernetes clusters).
Docker Swarm: https://docs.docker.com/engine/swarm/ (uses the Compose spec for manifests)
K3s: https://k3s.io/
K0s: https://k0sproject.io/ though MicroK8s and others are also okay.
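To make the Swarm option concrete, bootstrapping a small cluster is roughly this (the IP, join token, Compose file and stack name `myapp` are placeholders, not something from the comment):

```shell
# On the first server: turn it into a Swarm manager.
docker swarm init --advertise-addr <MANAGER_IP>

# The init output prints a join command with a token; run it on each extra node:
# docker swarm join --token <TOKEN> <MANAGER_IP>:2377

# Deploy services from an ordinary Compose file (Swarm reuses the Compose spec).
docker stack deploy -c docker-compose.yml myapp

# Check what ended up running.
docker stack services myapp
```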
I also like having something like Portainer as a GUI to manage the clusters: https://www.portainer.io/ For Kubernetes, Rancher might offer more features, but it will have a higher footprint.
It even supports webhooks, so I can do a POST request at the end of a CI run and the cluster will automatically pull and launch the latest tagged version of my apps: https://docs.portainer.io/user/docker/services/webhooks
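For illustration, the final CI step can be as small as a single request; the webhook URL below is a placeholder for the one Portainer generates when you enable a webhook on a service:

```shell
# Last step of the CI pipeline: hit the Portainer service webhook so the
# cluster re-pulls the image and redeploys (URL/token are placeholders).
curl -fsS -X POST "https://portainer.example.com/api/webhooks/2b1f4c2e-0000-0000-0000-000000000000"
```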
> keep software up to date? ex: Postgres, OS
I build my own base container images and rebuild them (with recent package versions) on a regular basis, which is automatically scheduled: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
Drone CI makes it easy to have this happen in the background, at least as long as I don't update across major versions and Maven doesn't decide to release a new version and remove the old version's .tar.gz archives from their downloads site for some reason, breaking my builds and forcing me to update the URL: https://docs.drone.io/cron/
Some images, like databases, I just proxy through my Nexus instance; version upgrades are relatively painless most of the time, at least as long as I've set up the persistent data directories correctly.
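As a rough sketch of what such a scheduled rebuild boils down to (registry and image names are hypothetical; in the setup above this is triggered by a Drone cron job rather than by hand):

```shell
# Rebuild the base image against the current upstream packages and push it.
docker build --pull --no-cache -t registry.example.com/base/ubuntu:latest .
docker push registry.example.com/base/ubuntu:latest
```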
> do load balancing? built-in load balancer?
This is a bit more tricky. I use Apache2 with mod_md to get Let's Encrypt certificates and Docker Swarm networking for directing the incoming traffic across the services: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...
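A minimal sketch of such a vhost, assuming a Debian/Ubuntu-style Apache2 layout; the domain, backend port and file names are placeholders rather than anything from the linked post:

```shell
sudo a2enmod md ssl proxy proxy_http

cat <<'EOF' | sudo tee /etc/apache2/sites-available/app.example.com.conf
# mod_md obtains and renews the Let's Encrypt certificate for this domain;
# traffic is then proxied to a service published by Docker Swarm on a local port.
MDomain app.example.com
MDCertificateAgreement accepted
ServerAdmin admin@example.com

<VirtualHost *:443>
    ServerName app.example.com
    SSLEngine on
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
EOF

sudo a2ensite app.example.com && sudo systemctl reload apache2
```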
Some might prefer Caddy, another great web server with automatic HTTPS: https://caddyserver.com/ but the Apache modules do pretty much everything I need, and the performance has never actually been too bad for my needs. Up until now, the applications themselves have always been the bottleneck; I'm actually working on a blog post comparing some web servers in real-world circumstances.
However, making things a bit more failure resilient might involve just paying Hetzner (in this case) for a load balancer: https://www.hetzner.com/cloud/load-balancer which will make everything less painful once you need to scale.
Why? Because doing round-robin DNS with the ACME certificate directory accessible and synchronized across multiple servers is a nuisance, although servers like Caddy attempt to get this working: https://caddyserver.com/docs/automatic-https#storage You could also get DNS-01 challenges working, but that needs even more work and integration for setting up TXT records. Even if you have multiple servers for resiliency, not all clients will try all of the IP addresses if one of the servers is down, although browsers should: https://webmasters.stackexchange.com/a/12704
So if you care about HTTPS certificates and want to handle them yourself with multiple servers sharing the same hostname, you'll either need to get DNS-01 working, do some messing around with shared directories (which may or may not actually work), or just get a regular commercial cert that you manually propagate to all of the web servers.
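As an example of the DNS-01 route (the comment doesn't name a tool; this uses certbot's Cloudflare DNS plugin, and the credentials path and domain are placeholders):

```shell
# Issue a cert via DNS-01 so no single server needs to answer the HTTP challenge;
# the resulting cert still has to be synced to every box behind the
# round-robin DNS entry.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d app.example.com
```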
From there on out it should be a regular reverse proxy setup, in my case Docker Swarm takes care of the service discovery (hostnames that I can access).
> handle scaling? Terraform?
None, I manually provision how many nodes I need, mostly because I'm too broke to hand over my wallet to automation.
They have an API that you or someone else could probably hook up: https://docs.hetzner.cloud/
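For instance, listing the current servers is a single authenticated request (the token is a placeholder); the same API can create and delete servers if you ever do want to automate scaling:

```shell
curl -fsS \
  -H "Authorization: Bearer $HCLOUD_TOKEN" \
  "https://api.hetzner.cloud/v1/servers"
```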
> automate backups? ex: databases, storage. Do you use provided backups and snapshots?
I use bind mounts for all of my containers for persistent storage, so the data is accessible on the host directly.
Then I use something like BackupPC to connect to those servers (SSH/rsync) and pull data to my own backup node, which then compresses and deduplicates the data: https://backuppc.github.io/backuppc/
It was a pain to set up, but it works really well and has saved my hide dozens of times. Some might enjoy Bacula more: https://www.bacula.org/
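Under the hood this boils down to an rsync-over-SSH pull per host, roughly like the following (hostnames and paths are placeholders; BackupPC adds the scheduling, compression and deduplication on top):

```shell
# Pull the bind-mounted container data from an app server to the backup node.
rsync -aH --delete \
  backup@app-server.example.com:/mnt/data/containers/ \
  /backups/app-server/containers/
```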
> maintain security? built-in firewall and DDoS protection?
I personally use Apache2 with ModSecurity and the OWASP Core Rule Set to act as a lightweight WAF: https://owasp.org/www-project-modsecurity-core-rule-set/
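A sketch of what enabling that looks like on Debian/Ubuntu (package names and paths vary by distro, so treat this as an approximation):

```shell
# Install mod_security2 plus the OWASP Core Rule Set package,
# then switch the engine from detection-only to blocking.
sudo apt-get install libapache2-mod-security2 modsecurity-crs
sudo cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf
sudo sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/modsecurity/modsecurity.conf
sudo a2enmod security2 && sudo systemctl reload apache2
```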
You might want to just cave in and go with Cloudflare for the most part, though: https://www.cloudflare.com/waf/
-
How's everyone running k8s on their homelabs?
-
K8s cluster with OCI free-tier and Raspberry Pi4 (part 3)
K3s can work in multiple ways (here), but for this tutorial we picked the High Availability with Embedded DB architecture. This one runs etcd instead of the default sqlite3, so it's important to have an odd number of server nodes (from the official documentation: "An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1.").

Initially this cluster was planned with 3 server nodes, 2 from OCI and 1 from an RPi4. But after reading issues 1 and 2 on GitHub, there are problems with etcd when the server nodes sit on different networks. So this cluster will have 1 server node (this is what k3s calls its master nodes), from OCI, and 7 agent nodes (this is what k3s calls its worker nodes): 3 from OCI and 4 from RPi4s.

First we need to open some ports so the OCI cluster can communicate with the RPi cluster. Go to VCN > Security List and click Add Ingress Rule. While I could have opened only the ports k3s needs for networking (listed here), I decided to open all OCI ports toward my public IP only, as there is no risk involved here, so under IP Protocol select All Protocols. Now you can test whether it worked by SSHing into any RPi4 and trying to ping any OCI machine, SSH into it, or reach another port.
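The install commands for that topology look roughly like this (the IP and token are placeholders; the first command starts the single server with embedded etcd, the last joins each agent):

```shell
# On the OCI server node: start k3s with the embedded etcd datastore.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Read the join token off the server; the agents need it below.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each OCI/RPi4 agent node: join the cluster as a worker.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<OCI_SERVER_PUBLIC_IP>:6443 \
  K3S_TOKEN=<NODE_TOKEN> \
  sh -s - agent
```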
-
Docker + portainer vs k8. EILI5
It is worth noting that Kubernetes is not an operating system. If you are installing a Kubernetes cluster on your own, you will need to install it on nodes running a Linux distro (for example Ubuntu Server). Then you can set up the Kubernetes cluster on your nodes using a tool like kubeadm. For homelab use and edge deployments, installing K3s is a popular choice (it is a lightweight Kubernetes distribution). There are videos on YouTube you can watch on installing a K3s cluster on Pis or installing a K3s cluster on Proxmox VE (inside VMs).
-
How much can you get out of a $4 VPS?
And those daemons constantly use 25-30% CPU: https://github.com/k3s-io/k3s/issues/294
-
Multi-Arch Docker Containers
Then along came Scaleway with their very cheap ARM clouds and K3S with a lightweight Kubernetes that was perfect for Raspberry Pis. Now you can have very cost-effective and (with enough nodes) high-performing clusters running on machines lying around your office. These are perfect for development and staging clusters to test out your applications.
-
Ask HN: What is the best source to learn Docker in 2023?
I'd say that going from Docker Compose to Docker Swarm is the first logical step, because it's included in a Docker install and also uses the same Compose format (with more parameters, such as deployment constraints, like which node hostname or tag you want a certain container to be scheduled on): https://docs.docker.com/compose/compose-file/compose-file-v3... That said, you won't see lots of Docker Swarm professionally anymore - it's just the way the job market is, despite it being completely sufficient for many smaller projects out there; I'm running it in prod successfully so far and it's great.
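A minimal sketch of those extra parameters, using made-up service and label names; the `deploy` section below only takes full effect under Swarm:

```shell
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.labels.tier == frontend
EOF

docker stack deploy -c docker-compose.yml demo
```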
Another reasonably lightweight alternative would be Hashicorp Nomad, because it's free, simple to deploy and their HCL format isn't too bad either, as long as you keep things simple, in addition to them supporting more than just container workloads: https://www.hashicorp.com/products/nomad That said, if you don't buy into HashiStack too much, then there won't be too much benefit from learning HCL and translating the contents of various example docker-compose.yml files that you see in a variety of repos out there, although their other tools are nice - for example, Consul (a service mesh). This is a nice but also a bit niche option.
Lastly, there is Kubernetes. It's complicated, even more so when you get into solutions like Istio, typically eats up lots of resources, can be difficult to manage and debug, but does pretty much anything that you might need, as long as you have either enough people to administer it, or a wallet that's thick enough for you to pay one of the cloud vendors to do it for you. Personally, I'd look into the lightweight clusters at first, like k0s, MicroK8s, or perhaps the K3s project in particular: https://k3s.io/
I'd also suggest that if you get this far, don't be afraid to look into options for dashboards and web based UIs to make exploring things easier:
- for Docker Swarm and Kubernetes there is Portainer: https://www.portainer.io/
-
K3s to Skill Up?
-
Local Kubernetes Playground Made Easy
Here is the best part! K3d is a wrapper around k3s and can set up your entire cluster using Docker in no time. It should be noted that this is not intended for production, but it is intended for tinkerers, learning, and exam prep. Here's an example command and then we will break it down:
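The excerpt cuts off before the actual command, but a representative k3d invocation (not necessarily the article's exact one) looks like this:

```shell
# One server, two agents, and the cluster's ingress load balancer
# published on localhost:8080.
k3d cluster create demo --servers 1 --agents 2 --port "8080:80@loadbalancer"

# Verify the nodes came up.
kubectl get nodes
```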
-
Installing A Local Kubernetes
k3s
What are some alternatives?
k0s - The Zero Friction Kubernetes
Nomad - Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
kubespray - Deploy a Production Ready Kubernetes Cluster
microk8s - MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.
Docker Compose - Define and run multi-container applications with Docker
kops - Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
k3d - Little helper to run CNCF's k3s in Docker
Portainer - Making Docker and Kubernetes management easy.
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
nerdctl - contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
kubevirt - Kubernetes Virtualization API and runtime in order to define and manage virtual machines.