| | docker-swarm-autoscaler | swarmsible-stacks |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 70 | 16 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Last Commit | over 4 years ago | about 1 year ago |
| Language | Ruby | Shell |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
docker-swarm-autoscaler
-
Running auto-scaling Docker services
If you want some sort of auto scaling, you will need monitoring to some extent, since metrics are the signal for scaling up/down. I noticed that https://github.com/jcwimer/docker-swarm-autoscaler already includes the relevant Prometheus configs required for scaling by CPU alone.
-
Acorn: A lightweight PaaS for Kubernetes, from Rancher founders
Nomad, Docker Swarm and other solutions support most of these out of the box; Kubernetes is just the most popular and most flexible (and, with that, the most complex) option, it seems.
For example, even something as basic as Docker Swarm will get you a lot of the way there.
> How do you implement healthcheck?
Supported by Docker: https://docs.docker.com/engine/reference/builder/#healthchec...
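For illustration, a minimal `HEALTHCHECK` instruction might look like this (the `/health` endpoint, port, and intervals are assumptions for the example, not from the linked docs):

```dockerfile
# Probe the app every 30s; mark the container unhealthy after 3 failed checks.
# /health is a hypothetical endpoint your app would need to expose.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```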
> Does the loadbalancer know how the healthcheck is implemented?
When the health checks pass, the container state changes from "starting" to "healthy" and traffic can be routed to it. Until then, you can have a web server serve a fallback page or implement circuit breaking.
> How do you determine it's time to scale?
Docker Swarm doesn't have an abstraction for autoscaling, though there are a few community projects. One can feasibly even write something like that themselves in an evening: https://github.com/jcwimer/docker-swarm-autoscaler
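As a sketch of how small such a thing can be: the scaling decision itself is just a threshold check. The function below is illustrative (the name, thresholds, and wiring are all made up, not taken from the linked project); it computes a new replica count from an average CPU percentage, and hooking it up to `docker service scale` is left as a comment.

```shell
#!/bin/sh
# Decide a new replica count from average CPU usage.
# Args: cpu_pct current_replicas min_replicas max_replicas
decide_replicas() {
  cpu=$1; current=$2; min=$3; max=$4
  target=$current
  if [ "$cpu" -ge 80 ] && [ "$current" -lt "$max" ]; then
    target=$((current + 1))        # scale up above 80% CPU
  elif [ "$cpu" -le 20 ] && [ "$current" -gt "$min" ]; then
    target=$((current - 1))        # scale down below 20% CPU
  fi
  echo "$target"
}

# In a real loop you would read CPU from Prometheus or `docker stats`
# every minute or so and apply the result, e.g.:
#   docker service scale myapp=$(decide_replicas "$cpu" "$current" 1 10)
```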
That said, I mostly ignore this concern because I have yet to see a workload that needs to scale dynamically in any of the private or government projects I've worked on. Most of the time people want predictable infrastructure and the ability to absorb backpressure (e.g. with a message queue), though that's different at startups.
> How do you implement always-on-process? service unit, initd, cron?
The service abstraction comes out of the box: https://docs.docker.com/engine/swarm/how-swarm-mode-works/se...
You might also want to decide how to best schedule it: wherever available, on a particular node (hostname/tag/...) or on all nodes, which is actually what Portainer agent does! Example: https://docs.portainer.io/start/install/server/swarm/linux
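In stack-file terms, those scheduling choices might look like this (the service names, image names, and node label are invented for the example):

```yaml
services:
  worker:
    image: myorg/worker                       # hypothetical image
    deploy:
      replicas: 3                             # wherever capacity is available
      placement:
        constraints:
          - node.labels.tier == backend       # only on nodes with this label
  agent:
    image: portainer/agent
    deploy:
      mode: global                            # one task on every node
```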
> How do you export the logs?
Docker supports multiple logging drivers: https://docs.docker.com/config/containers/logging/configure/
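For example, a per-service logging driver and rotation policy can be set in a stack file (the image name and limits here are illustrative):

```yaml
services:
  app:
    image: myorg/app             # hypothetical image
    logging:
      driver: json-file
      options:
        max-size: "10m"          # rotate after 10 MB
        max-file: "3"            # keep 3 rotated files
```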
> How do you inject configs? /etc/environment, profile.d, systemd config, /etc/bestestapp/config?
Docker and Compose/Swarm support environment variables: https://docs.docker.com/compose/compose-file/#environment
If you need config files, you can also use bind mounts: https://docs.docker.com/storage/bind-mounts/
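Both approaches side by side, as a hedged sketch (image, variable, and paths are invented for the example):

```yaml
services:
  app:
    image: myorg/app                           # hypothetical image
    environment:
      - APP_ENV=production                     # single environment variable
    env_file:
      - ./app.env                              # or load a whole file of them
    volumes:
      - ./config/app.yml:/etc/app/app.yml:ro   # bind-mounted config file
```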
> What about secrets?
Docker supports secrets out of the box: https://docs.docker.com/engine/swarm/secrets/
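A minimal sketch of wiring a secret into a service (the secret name and image are assumptions for the example):

```yaml
# First, on a manager node: echo -n "s3cr3t" | docker secret create db_password -
services:
  app:
    image: myorg/app              # hypothetical image
    secrets:
      - db_password               # mounted at /run/secrets/db_password
secrets:
  db_password:
    external: true                # created out-of-band, as above
```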
> Service discovery? Is unbound/bind9?
Docker Swarm has built-in DNS-based service discovery and even allows for multiple separate networks: https://docs.docker.com/engine/swarm/networking/
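For instance, two services attached to the same overlay network can reach each other by service name, with no extra DNS setup (service and image names are illustrative):

```yaml
services:
  app:
    image: myorg/app              # can reach the database at db:5432
    networks: [backend]
  db:
    image: postgres:15
    networks: [backend]
networks:
  backend:
    driver: overlay               # Swarm's built-in DNS resolves "db" here
```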
> These items are best done in a standard way.
Agreed! Though I'd say that framing the only two options as "running everything on *nix directly" and "running everything in Kubernetes" is a false dichotomy! The former can work but can also lead to non-standard, error-prone environments and a horrible waste of human resources, whereas the latter can work but can also lead to overcomplicated, hard-to-debug environments with the same waste.
The best path for many folks probably lies somewhere in the middle, with Nomad/Swarm/Compose/Docker, regardless of what others might claim. The best path for folks interested in a DevOps career is probably running on cloud-managed Kubernetes clusters and just using their APIs to great effect, without caring about how expensive that is or how easy it would be to self-host on-prem.
swarmsible-stacks
-
Running auto-scaling Docker services
There are many ways to achieve this. The simplest is to stick with what you have and sprinkle some Prometheus on top. If you want to know how to get a Prometheus + Grafana stack up and running, check out https://github.com/neuroforgede/swarmsible-stacks/tree/main/02_monitoring where we maintain a modernized swarmprom that includes Prometheus.
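The skeleton of such a stack is small; this is a hedged sketch (not the linked repo's actual files, and you would still need to supply a `prometheus.yml` scrape config):

```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro  # your scrape config
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"              # Grafana UI
```

Deployed to a Swarm with `docker stack deploy -c <file> monitoring`.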
What are some alternatives?
etcd - Distributed reliable key-value store for the most critical data of a distributed system
OpenFaaS - OpenFaaS - Serverless Functions Made Simple
k3s - Lightweight Kubernetes
nf-faas-docker-stack - Experimental: Getting modern OpenFaaS CE to run on Swarm
porter - Kubernetes powered PaaS that runs in your own cloud.
faasd - A lightweight & portable faas engine
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
kompose - Convert Compose to Kubernetes
runtime - A simple application deployment framework built on Kubernetes