docker-swarm-autoscaler vs k3s

| | docker-swarm-autoscaler | k3s |
|---|---|---|
| Mentions | 3 | 292 |
| Stars | 70 | 26,483 |
| Stars growth (monthly) | - | 1.2% |
| Activity | 10.0 | 9.6 |
| Latest commit | over 4 years ago | 7 days ago |
| Language | Ruby | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
docker-swarm-autoscaler
-
Running auto-scaling Docker services
If you want some sort of auto-scaling, you will need some degree of monitoring, as that will be the signal for scaling up/down. I noticed that https://github.com/jcwimer/docker-swarm-autoscaler already includes the relevant Prometheus configs required for scaling by CPU alone.
-
Acorn: A lightweight PaaS for Kubernetes, from Rancher founders
Nomad, Docker Swarm and other solutions support most of these out of the box; Kubernetes is just the most popular and flexible (with which comes a lot of complexity) solution, it seems.
For example, even something as basic as Docker Swarm will get you a lot of the way there.
> How do you implement healthcheck?
Supported by Docker: https://docs.docker.com/engine/reference/builder/#healthchec...
> Does the loadbalancer know how the healthcheck is implemented?
When the health checks pass in accordance with the above config, the container state will change from "starting" to "healthy" and traffic can then be routed to it. Until then, you can have a web server show a different page, implement circuit breaking, or the like.
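As a sketch, a HEALTHCHECK instruction in a Dockerfile might look like the following (the endpoint and intervals are illustrative, not prescriptive):

```dockerfile
FROM nginx:alpine
# Poll a local endpoint; after 3 consecutive failures the container is
# marked "unhealthy" and Swarm stops routing traffic to it.
HEALTHCHECK --interval=10s --timeout=3s --start-period=15s --retries=3 \
  CMD wget -q --spider http://localhost/ || exit 1
```

The `--start-period` gives slow-starting services a grace window before failed probes count against the retry limit.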
> How do you determine it's time to scale?
Docker Swarm doesn't have an abstraction for autoscaling, though there are a few community projects. One can feasibly even write something like that themselves in an evening: https://github.com/jcwimer/docker-swarm-autoscaler
That said, I mostly ignore this concern because I've yet to see a workload that needed to scale dynamically in any of the private or government projects that I've worked on. Most of the time people want predictable infrastructure and the ability to deal with backpressure (e.g. via a message queue), though that's different with startups.
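The core of such an evening project is just a threshold decision. A minimal sketch, assuming the average CPU utilisation comes from a Prometheus query (the function name and thresholds are hypothetical):

```python
def desired_replicas(current: int, avg_cpu: float,
                     low: float = 0.25, high: float = 0.85,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Return the replica count a naive CPU-based autoscaler would pick.

    avg_cpu is the service's mean CPU utilisation (0.0-1.0), e.g. taken
    from a Prometheus query; the thresholds are illustrative defaults.
    """
    if avg_cpu > high:
        return min(current + 1, max_replicas)  # scale up, one step at a time
    if avg_cpu < low:
        return max(current - 1, min_replicas)  # scale down, never below the floor
    return current                             # within band: leave it alone

# A real loop would apply the result on each polling interval with
# something like: docker service scale web=<n>
```

Stepping one replica at a time and clamping to a min/max keeps the loop from flapping wildly on noisy metrics.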
> How do you implement always-on-process? service unit, initd, cron?
The service abstraction comes out of the box: https://docs.docker.com/engine/swarm/how-swarm-mode-works/se...
You might also want to decide how to best schedule it: wherever available, on a particular node (hostname/tag/...) or on all nodes, which is actually what Portainer agent does! Example: https://docs.portainer.io/start/install/server/swarm/linux
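The scheduling choices above map to a few lines in a Swarm stack file; a sketch (image names are placeholders):

```yaml
# compose/stack fragment (illustrative)
services:
  agent:
    image: portainer/agent:latest
    deploy:
      mode: global            # one task on every node, like the Portainer agent
  worker:
    image: myorg/worker:latest  # hypothetical image
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker   # or pin by hostname/label instead
```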
> How do you export the logs?
Docker supports multiple logging drivers: https://docs.docker.com/config/containers/logging/configure/
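Picking a driver is per-service configuration; a sketch with the default json-file driver plus rotation (other drivers such as syslog, journald or fluentd are configured the same way):

```yaml
# compose fragment (illustrative)
services:
  web:
    image: nginx:alpine
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate after 10 MB
        max-file: "3"     # keep at most 3 rotated files
```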
> How do you inject configs? /etc/environment, profile.d, systemd config, /etc/bestestapp/config?
Docker and Compose/Swarm support environment variables: https://docs.docker.com/compose/compose-file/#environment
If you need config files, you can also use bind mounts: https://docs.docker.com/storage/bind-mounts/
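Both approaches together in one Compose fragment (the image, variable and paths are illustrative):

```yaml
# compose fragment (illustrative)
services:
  app:
    image: myorg/app:latest   # hypothetical image
    environment:
      - LOG_LEVEL=info
    volumes:
      # read-only bind mount for a config file on the host
      - ./config/app.toml:/etc/app/app.toml:ro
```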
> What about secrets?
Docker supports secrets out of the box: https://docs.docker.com/engine/swarm/secrets/
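A sketch of wiring a pre-created secret into a service (the names are illustrative); the secret is surfaced as a file under /run/secrets/ rather than an environment variable:

```yaml
# compose/Swarm fragment (illustrative)
services:
  app:
    image: myorg/app:latest   # hypothetical image
    secrets:
      - db_password           # readable at /run/secrets/db_password
secrets:
  db_password:
    external: true            # created beforehand via `docker secret create`
```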
> Service discovery? Is unbound/bind9?
Docker Swarm has built-in DNS-based service discovery and even allows for multiple separate networks: https://docs.docker.com/engine/swarm/networking/
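A sketch of that network separation (service and image names are illustrative); services on the same overlay network resolve each other by service name via Swarm's built-in DNS:

```yaml
# compose fragment (illustrative)
services:
  web:
    image: nginx:alpine
    networks: [frontend]
  api:
    image: myorg/api:latest   # hypothetical image
    networks: [frontend, backend]
  db:
    image: postgres:16
    networks: [backend]       # "db" resolves from api, but not from web
networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay
```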
> These items are best done in a standard way.
Agreed! Though I'd say that framing the only two options as "running everything on *nix directly" and "running everything in Kubernetes" is a false dichotomy! The former can work but can also lead to non-standard and error-prone environments with a horrible waste of human resources, whereas the latter can work but can also lead to overcomplicated and hard-to-debug environments with a horrible waste of human resources.
The best path for many folks probably lies somewhere in the middle, with Nomad/Swarm/Compose/Docker, regardless of what others might claim. The best path for folks interested in a DevOps career is probably running on cloud-managed Kubernetes clusters and just using their APIs to great effect, not caring about how expensive that is or how easy it would be to self-host on-prem.
k3s
-
Ask HN: Are there any open source forks of Nomad and Consul?
Opinionated meaning it picks, installs, and patches your CNI/ingress/load balancer/DNS server/metrics server/monitoring setup.
k3s is probably the most well known, as it ships with a bunch of preinstalled software: https://github.com/k3s-io/k3s so you can just start throwing YAML files at the cluster and handling workloads. It's what I use for my homelab.
Paid things I've heard of include OpenStack and SideroLabs. I haven't used them personally, but SRE coworkers say good things about them.
-
Linux fu: getting started with systemd
For self-hosting I've found https://k3s.io to be really good from the SUSE people. Works on basically any Linux distro and makes self-hosting k8s not miserable.
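For reference, the quick-install flow documented on the k3s site is a single command (run with root privileges; a sketch of the happy path on a systemd-based distro):

```shell
# official quick-install script from the k3s docs
curl -sfL https://get.k3s.io | sh -

# the kubeconfig lands at /etc/rancher/k3s/k3s.yaml;
# verify the single-node cluster came up:
sudo k3s kubectl get nodes
```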
-
Nix is a better Docker image builder than Docker's image builder
Yes it’s going to depend on which k8s distribution you’re using. We have work in-progress for k3s to natively support nix-snapshotter: https://github.com/k3s-io/k3s/pull/9319
For other distributions, nix-snapshotter works with official containerd releases so it’s just a matter of toml configuration and a systemd unit for nix-snapshotter.
We run Kubernetes outside of NixOS, but yes the NixOS modules provided by the nix-snapshotter certainly make it simple.
-
15 Options To Build A Kubernetes Playground (with Pros and Cons)
K3s is a lightweight distribution of Kubernetes designed for resource-constrained environments. It is an excellent option for running Kubernetes on a virtual machine or cloud server.
- FLaNK 25 December 2023
-
K3s Traefik Ingress - configured for your homelab!
I recently purchased a used Lenovo M900 ThinkCentre (i7 with 32GB RAM) from eBay to expand my mini-homelab, which was just a single Synology DS218+ plugged into my ISP's router (yuck!). Since I've been spending a big chunk of time at work playing around with Kubernetes, I figured that I'd put my skills to the test and run a k3s node on the new server. While I was familiar with k3s before starting this project, I'd never actually run it before, opting for tools like kind (and minikube before that) to run small test clusters for my local development work.
- Best way to deploy K8s to single VPS for dev environment
-
Single docker compose stack on multiple hosts. But how?
Kubernetes - k3s distribution
-
Building a no-code Helm UI with Windmill - Part 1
I’ve created a local cluster with k3s, and installing Windmill could not be simpler, with just one chart to configure, which already has sane defaults to get started. For this demo we will also configure workers to pass environment variables through to our scripts so that they have access to the Kubernetes API server later on.
-
Highly scalable Minecraft cluster
You should be familiar with Kubernetes and have set up a Kubernetes cluster. I recommend k3s.
What are some alternatives?
etcd - Distributed reliable key-value store for the most critical data of a distributed system
k0s - The Zero Friction Kubernetes
porter - Kubernetes powered PaaS that runs in your own cloud.
kubespray - Deploy a Production Ready Kubernetes Cluster
nf-faas-docker-stack - Experimental: Getting modern OpenFaaS CE to run on Swarm
Nomad - Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
kompose - Convert Compose to Kubernetes
Docker Compose - Define and run multi-container applications with Docker
OpenFaaS - Serverless Functions Made Simple
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!