truetool vs k3s

Compare truetool vs k3s and see what their differences are.


A TrueCharts automatic and bulk update utility (by truecharts)


Lightweight Kubernetes (by k3s-io)
                 truetool                          k3s
Mentions         11                                250
Stars            148                               22,546
Growth           -                                 1.9%
Activity         9.0                               9.1
Latest commit    4 days ago                        6 days ago
Language         Shell                             Go
License          BSD 3-clause "New" or "Revised"   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.


Posts with mentions or reviews of truetool. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-23.


Posts with mentions or reviews of k3s. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-22.
  • Can any Hetzner user, please explain their workflow on Hetzner?
    19 projects | 22 Mar 2023
    I use Hetzner, Contabo, Time4VPS and other platforms in pretty much the same way (as IaaS VPS providers on top of which I run software, as opposed to SaaS/PaaS), but here's a quick glance at how I do things.

    > deploy from source repo? Terraform?

    Personally, I use Gitea for my repos and Drone CI for CI/CD.


    Drone CI runs my pipelines; some might prefer Woodpecker due to licensing, but honestly most solutions out there are okay, even Jenkins.
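    As a sketch, a minimal Drone pipeline file looks like this (the image and step are placeholders, not the poster's actual setup):

```shell
# Write a minimal .drone.yml (placeholder image and step).
cat > .drone.yml <<'EOF'
kind: pipeline
type: docker
name: build

steps:
  - name: test
    image: golang:1.20
    commands:
      - go test ./...
EOF
```

    Drone picks this file up from the repo root on every push.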

    Then I have some sort of a container cluster on the servers, so I can easily deploy things: I still like Docker Swarm (projects like CapRover might be nice to look at as well), though many might enjoy the likes of K3s or K0s more (lightweight Kubernetes clusters).

    Docker Swarm uses the Compose spec for manifests.
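    As a sketch, a minimal Compose-spec manifest that Swarm can deploy (service name and image are placeholders):

```shell
# Write a minimal stack file (placeholder service/image).
cat > stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 2
EOF
# Deploy to an initialized swarm ('docker swarm init' first):
# docker stack deploy -c stack.yml myapp
```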


    K0s is another option, though MicroK8s and others are also okay.

    I also like having something like Portainer as a GUI for managing the clusters; for Kubernetes, Rancher might offer more features, but it will have a higher footprint.

    It even supports webhooks, so I can do a POST request at the end of a CI run and the cluster will automatically pull and launch the latest tagged version of my apps.
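    As a sketch of that trigger (the URL and token are entirely made up; Portainer-style service webhooks accept a bare POST):

```shell
# Placeholder webhook endpoint; a real one is generated by the cluster GUI.
WEBHOOK_URL="https://portainer.example.com/api/webhooks/3c1e-example-token"

# The last step of the CI pipeline would be something like:
# curl -fsS -X POST "$WEBHOOK_URL"
```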

    > keep software up to date? ex: Postgres, OS

    I build my own base container images and rebuild them (with recent package versions) on a regular basis, which is automatically scheduled.
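    As a sketch of such a base image (the registry and tag names are placeholders):

```shell
# Write a rebuildable base image definition; rebuilding it on a schedule
# pulls in current package versions.
cat > Dockerfile.base <<'EOF'
FROM debian:bookworm-slim
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
EOF

# A scheduled CI job would then run roughly:
# docker build -t registry.example.com/base/debian:bookworm -f Dockerfile.base .
# docker push registry.example.com/base/debian:bookworm
```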

    Drone CI makes it easy for this to happen in the background, as long as I don't update across major versions and Maven doesn't decide to release a new version and remove the old .tar.gz archives from its downloads site for some reason, breaking my builds and forcing me to update the URL.

    Some images, like databases, I just proxy through my Nexus instance; version upgrades are relatively painless most of the time, at least as long as I've set up the persistent data directories correctly.

    > do load balancing? built-in load balancer?

    This is a bit more tricky. I use Apache2 with mod_md to get Let's Encrypt certificates and Docker Swarm networking for directing the incoming traffic across the services.
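    As a sketch of that Apache setup (the domain and backend port are placeholders; mod_md fetches and renews the Let's Encrypt certificate):

```shell
# Write a vhost that terminates TLS via mod_md and proxies into the cluster.
cat > reverse-proxy.conf <<'EOF'
MDomain example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    # Forward traffic to the port Docker Swarm publishes on this host.
    ProxyPass "/" "http://127.0.0.1:8080/"
    ProxyPassReverse "/" "http://127.0.0.1:8080/"
</VirtualHost>
EOF
```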

    Some might prefer Caddy, another great web server with automatic HTTPS, but the Apache modules do pretty much everything I need and performance has never actually been too bad for my needs. Up until now the applications themselves have always been the bottleneck; I'm actually working on a blog post comparing some web servers in real-world circumstances.

    However, making things a bit more failure-resilient might involve just paying Hetzner (in this case) to give you a load balancer, which will make everything less painful once you need to scale.

    Why? Because doing round-robin DNS with the ACME certificate directory accessible and synchronized across multiple servers is a nuisance, although servers like Caddy attempt to make this work. You could also get DNS-01 challenges working, but that needs even more work and integration for setting up TXT records. And even if you have multiple servers for resiliency, not all clients will try all of the IP addresses if one of the servers is down, although browsers should.

    So if you care about HTTPS certificates and want to do it yourself with multiple servers sharing the same hostname, you'll either need to get DNS-01 working, do some messing around with shared directories (which may or may not actually work), or just get a regular commercial cert that you manually propagate to all of the web servers.

    From there on out it should be a regular reverse proxy setup, in my case Docker Swarm takes care of the service discovery (hostnames that I can access).

    > handle scaling? Terraform?

    None, I manually provision how many nodes I need, mostly because I'm too broke to hand over my wallet to automation.

    They have an API that you or someone else could probably hook up.

    > automate backups? ex: databases, storage. Do you use provided backups and snapshots?

    I use bind mounts for all of my containers for persistent storage, so the data is accessible on the host directly.

    Then I use something like BackupPC to connect to those servers (SSH/rsync) and pull data to my own backup node, which then compresses and deduplicates the data.
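    As a sketch of the pull side (hosts and paths are placeholders; BackupPC then compresses and deduplicates what lands in the pool):

```shell
# Write a script the backup node runs to pull bind-mount data over SSH.
cat > pull-backup.sh <<'EOF'
#!/bin/sh
# -a: preserve attributes, -H: preserve hardlinks, --delete: mirror removals.
rsync -aH --delete backup@app01.example.com:/srv/containers/ /backups/app01/
EOF
chmod +x pull-backup.sh
```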

    It was a pain to set up, but it works really well and has saved my hide dozens of times. Some might enjoy Bacula more.

    > maintain security? built-in firewall and DDoS protection?

    I personally use Apache2 with ModSecurity and the OWASP ruleset, to act as a lightweight WAF.
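    As a sketch of enabling that in Apache (the rule paths follow Debian's packaging and may differ on your distro):

```shell
# Write a config enabling ModSecurity with the OWASP Core Rule Set.
cat > security.conf <<'EOF'
<IfModule security2_module>
    SecRuleEngine On
    # OWASP CRS setup and rules; paths are distro-dependent.
    IncludeOptional /usr/share/modsecurity-crs/crs-setup.conf
    IncludeOptional /usr/share/modsecurity-crs/rules/*.conf
</IfModule>
EOF
```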

    You might want to just cave in and go with Cloudflare for the most part, though.

  • How’s everyone running k8s on their homelab’s
    2 projects | 9 Mar 2023
  • K8s cluster with OCI free-tier and Raspberry Pi4 (part 3)
    2 projects | 15 Feb 2023
    K3s can work in multiple ways (here), but for our tutorial we picked the High Availability with Embedded DB architecture. This one runs etcd instead of the default sqlite3, so it's important to have an odd number of server nodes (from the official documentation: "An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1.").

    Initially this cluster was planned with 3 server nodes, 2 from OCI and 1 from the RPi4, but after reading issues 1 and 2 on GitHub, there are problems with etcd when server nodes sit on different networks. So this cluster will have 1 server node (this is how k3s names its master nodes) from OCI and 7 agent nodes (this is how k3s names its worker nodes): 3 from OCI and 4 from the RPi4s.

    First we need to open some ports so the OCI cluster can communicate with the RPi cluster. Go to VCN > Security List and click Add Ingress Rule. While I could have opened only the ports needed for k3s networking (listed here), I decided to open all OCI ports toward my public IP only, as there is no risk involved here; so, under IP Protocol, select All Protocols. Now you can test whether it worked by ssh-ing to any RPi4 and trying to ping any OCI machine, ssh to it, or reach another port.
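    The quorum arithmetic quoted above, plus the bootstrap it implies, can be sketched in shell (the server IP and token are placeholders, not values from this tutorial):

```shell
# Quorum for an n-member etcd cluster: floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # 2: one node may fail
quorum 4   # 3: still only one node may fail, hence the odd-number advice
quorum 5   # 3: two nodes may fail

# Bootstrapping an HA embedded-etcd cluster (placeholder IP/token):
# curl -sfL https://get.k3s.io | sh -s - server --cluster-init
# curl -sfL https://get.k3s.io | K3S_URL=https://203.0.113.10:6443 K3S_TOKEN=mytoken sh -
```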
  • Docker + portainer vs k8. EILI5
    3 projects | 7 Feb 2023
    It is worth noting that Kubernetes is not an operating system. If you are installing a Kubernetes cluster on your own, you will need to install it on nodes running a Linux distro (for example, Ubuntu Server). Then you can set up the Kubernetes cluster on your nodes using a tool like kubeadm. For homelab use and edge deployments, installing K3s (a lightweight Kubernetes distribution) is a popular choice. There are videos on YouTube you can watch on installing a K3s cluster on Pis or installing a K3s cluster on Proxmox VE (inside VMs).
  • How much can you get out of a $4 VPS?
    12 projects | 6 Feb 2023
    And those daemons constantly use 25-30% of the CPU.
  • Multi-Arch Docker Containers
    3 projects | 29 Jan 2023
    Then along came Scaleway with their very cheap ARM clouds and K3s with a lightweight Kubernetes that was perfect for Raspberry Pis. Now you can have very cost-effective and (with enough nodes) high-performing clusters running on machines lying around your office. These are perfect for development and staging clusters for testing out your applications.
  • Ask HN: What is the best source to learn Docker in 2023?
    8 projects | 29 Jan 2023
    I'd say that going from Docker Compose to Docker Swarm is the first logical step, because it's included in a Docker install and also uses the same Compose format (with more parameters, such as deployment constraints, like which node hostname or tag you want a certain container to be scheduled on). That said, you won't see much Docker Swarm professionally anymore - it's just the way the job market is, despite it being completely sufficient for many smaller projects out there; I'm running it in prod successfully so far and it's great.
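    A minimal sketch of those Swarm-only additions to the Compose format; the hostname and label are made-up examples:

```shell
# Write a Compose file with Swarm deployment constraints (placeholder names).
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  db:
    image: postgres:15
    deploy:
      placement:
        constraints:
          - node.hostname == storage-node-1   # pin to a specific node
          - node.labels.disk == ssd           # or to any node with a label
EOF
# 'docker stack deploy -c docker-compose.yml mystack' honors the deploy: section;
# plain 'docker compose up' ignores it.
```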

    Another reasonably lightweight alternative would be HashiCorp Nomad, because it's free, simple to deploy, and their HCL format isn't too bad either as long as you keep things simple, in addition to them supporting more than just container workloads. That said, if you don't buy into the HashiStack too much, there won't be much benefit from learning HCL and translating the contents of the various example docker-compose.yml files that you see in repos out there, although their other tools are nice - for example, Consul (a service mesh). This is a nice but also a bit niche option.

    Lastly, there is Kubernetes. It's complicated, even more so when you get into solutions like Istio; it typically eats up lots of resources and can be difficult to manage and debug, but it does pretty much anything that you might need, as long as you have either enough people to administer it or a wallet thick enough to pay one of the cloud vendors to do it for you. Personally, I'd look into the lightweight clusters first, like k0s, MicroK8s, or perhaps the K3s project in particular.

    I'd also suggest that if you get this far, don't be afraid to look into options for dashboards and web-based UIs to make exploring things easier:

      - for Docker Swarm and Kubernetes there is Portainer.
  • K3s to Skill Up?
    2 projects | 24 Jan 2023
  • Local Kubernetes Playground Made Easy
    3 projects | 20 Jan 2023
    Here is the best part! K3d is a wrapper around k3s and can set up your entire cluster using Docker in no time. It should be noted that this is not intended for production, but it is intended for tinkerers, learning, and exam prep. Here's an example command and then we will break it down:
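    As a sketch of such a command, with a made-up cluster name and k3d's documented flags:

```shell
# Create a local cluster named "playground" (the name is arbitrary):
# 1 server, 2 agents, and host port 8080 mapped to the built-in load balancer.
k3d cluster create playground --servers 1 --agents 2 -p "8080:80@loadbalancer"

# Then verify the nodes came up:
# kubectl get nodes
```

    This requires a running Docker daemon, since every node is a container.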
  • Installing A Local Kubernetes
    3 projects | 15 Jan 2023

What are some alternatives?

When comparing truetool and k3s you can also consider the following projects:

k0s - k0s - The Zero Friction Kubernetes

Nomad - Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.

kubespray - Deploy a Production Ready Kubernetes Cluster

microk8s - MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.

Docker Compose - Define and run multi-container applications with Docker

kops - Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management

k3d - Little helper to run CNCF's k3s in Docker

Portainer - Making Docker and Kubernetes management easy.

k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!

nerdctl - contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...

kind - Kubernetes IN Docker - local clusters for testing Kubernetes

kubevirt - Kubernetes Virtualization API and runtime in order to define and manage virtual machines.