docker-swarm-autoscaler vs ketch

| | docker-swarm-autoscaler | ketch |
|---|---|---|
| Mentions | 3 | 27 |
| Stars | 70 | 655 |
| Growth | - | 0.3% |
| Activity | 10.0 | 6.8 |
| Latest commit | over 4 years ago | 3 months ago |
| Language | Ruby | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
docker-swarm-autoscaler
-
Running auto-scaling Docker services
If you want some sort of auto-scaling, you will need some monitoring, since that provides the signal for scaling up or down. I noticed that https://github.com/jcwimer/docker-swarm-autoscaler already includes the relevant Prometheus configs required for scaling by CPU alone.
-
Acorn: A lightweight PaaS for Kubernetes, from Rancher founders
Nomad, Docker Swarm and other solutions support most of these out of the box, Kubernetes is just the most popular and flexible (with which comes a lot of complexity) solution, it seems.
For example, even something as basic as Docker Swarm will get you a lot of the way there.
> How do you implement healthcheck?
Supported by Docker: https://docs.docker.com/engine/reference/builder/#healthchec...
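A minimal sketch of that Dockerfile instruction, assuming a hypothetical web container serving on port 80 (the interval/timeout/retry values are illustrative, not defaults):

```dockerfile
FROM nginx:alpine
# Mark the container unhealthy if the endpoint stops responding.
# busybox wget ships with the alpine base image.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -q --spider http://localhost:80/ || exit 1
```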
> Does the loadbalancer know how the healthcheck is implemented?
When the health checks pass per the above config, the container state changes from "starting" to "healthy" and traffic can be routed to it. Until then, you can have a web server show a maintenance page or implement circuit breaking.
> How do you determine it's time to scale?
Docker Swarm doesn't have an abstraction for autoscaling, though there are a few community projects. One can feasibly even write something like that themselves in an evening: https://github.com/jcwimer/docker-swarm-autoscaler
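To illustrate how small such an evening project could be, here is a sketch of just the scaling decision, as a pure shell function. The thresholds and the replica cap are assumptions; in a real autoscaler you would feed the CPU figure from `docker stats` or Prometheus and apply the result with `docker service scale <name>=<replicas>`.

```shell
#!/bin/sh
# decide_replicas CPU_PERCENT CURRENT_REPLICAS
# Prints the new replica count based on assumed thresholds.
decide_replicas() {
  cpu="$1"       # average CPU percent across the service's tasks (integer)
  current="$2"   # current replica count
  if [ "$cpu" -gt 80 ] && [ "$current" -lt 10 ]; then
    echo $((current + 1))   # scale up under load, capped at 10 replicas
  elif [ "$cpu" -lt 20 ] && [ "$current" -gt 1 ]; then
    echo $((current - 1))   # scale down when idle, floor of 1 replica
  else
    echo "$current"         # hold steady in the middle band
  fi
}

decide_replicas 90 3   # prints 4
decide_replicas 10 3   # prints 2
decide_replicas 50 3   # prints 3
```

A real loop would also want a cooldown between scaling actions so a brief spike doesn't cause thrashing.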
That said, I mostly ignore this concern because I'm yet to see a workload that needs to dynamically scale in any number of private or government projects that I've worked on. Most of the time people want predictable infrastructure and being able to deal with backpressure (e.g. message queue), though that's different with startups.
> How do you implement always-on-process? service unit, initd, cron?
The service abstraction comes out of the box: https://docs.docker.com/engine/swarm/how-swarm-mode-works/se...
You might also want to decide how to best schedule it: wherever available, on a particular node (hostname/tag/...) or on all nodes, which is actually what Portainer agent does! Example: https://docs.portainer.io/start/install/server/swarm/linux
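A stack file fragment sketching both scheduling styles; the service names and the worker-node constraint are hypothetical examples, not anything from the thread:

```yaml
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker   # only schedule on worker nodes
  agent:
    image: portainer/agent:latest
    deploy:
      mode: global               # one task on every node, as the Portainer agent does
```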
> How do you export the logs?
Docker supports multiple logging drivers: https://docs.docker.com/config/containers/logging/configure/
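For example, a Compose fragment keeping the default `json-file` driver but bounding its disk usage (the rotation values are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    logging:
      driver: json-file     # the default; swap for journald, gelf, syslog, etc.
      options:
        max-size: "10m"     # rotate log files at 10 MB
        max-file: "3"       # keep at most three rotated files
```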
> How do you inject configs? /etc/environment, profile.d, systemd config, /etc/bestestapp/config?
Docker and Compose/Swarm support environment variables: https://docs.docker.com/compose/compose-file/#environment
If you need config files, you can also use bind mounts: https://docs.docker.com/storage/bind-mounts/
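Both approaches in one Compose fragment; the variable name and config path are hypothetical:

```yaml
services:
  web:
    image: nginx:alpine
    environment:
      - APP_ENV=production                       # hypothetical variable
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro    # bind-mount a config file read-only
```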
> What about secrets?
Docker supports secrets out of the box: https://docs.docker.com/engine/swarm/secrets/
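A sketch of wiring a pre-created secret into a service; the secret name is a hypothetical example:

```yaml
services:
  web:
    image: nginx:alpine
    secrets:
      - db_password        # mounted inside the container at /run/secrets/db_password
secrets:
  db_password:
    external: true         # created beforehand, e.g. with `docker secret create`
```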
> Service discovery? Is unbound/bind9?
Docker Swarm has built-in DNS-based service discovery and even allows for multiple separate networks: https://docs.docker.com/engine/swarm/networking/
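A sketch of network segmentation with DNS discovery; service and network names are hypothetical:

```yaml
services:
  web:
    image: nginx:alpine
    networks: [frontend, backend]
  api:
    image: my-api:latest    # hypothetical image
    networks: [backend]     # reachable from `web` by the DNS name `api`
networks:
  frontend:
  backend:                  # `api` is isolated from anything on `frontend` only
```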
> These items are best done in a standard way.
Agreed! Though I'd say that framing the only two options as "running everything on *nix directly" and "running everything in Kubernetes" is a false dichotomy! The former can work but can also lead to non-standard, error-prone environments and a horrible waste of human resources, whereas the latter can work but can also lead to overcomplicated, hard-to-debug environments and a horrible waste of human resources.
The best path for many folks probably lies somewhere in the middle, with Nomad/Swarm/Compose/Docker, regardless of what others might claim. The best path for folks interested in a DevOps career is probably running on cloud-managed Kubernetes clusters and using their APIs to great effect, without caring about how expensive that is or how easy it would be to self-host on-prem.
ketch
-
Acorn: A lightweight PaaS for Kubernetes, from Rancher founders
Here at SUSE we looked at https://github.com/theketchio/ketch, and the founder of Acorn did some diligence there. Is it a copy?
-
Helm is both "package manager" and "templating engine" - probably the best package manager but horrible template engine
One idea may be to look at something like Ketch, and potentially combine it with Pulumi, Terraform, or others. Here is an example
-
A simple application deployment framework for Kubernetes!!
You have some more “established” tools, such as Ketch, but from what I’ve seen, many people are building this in house using tools such as Helm, Crossplane, or others
-
Application deployment framework.
Pretty much what Ketch has been doing for a while already, and Ketch is part of a larger app platform
-
Acorn - the new cool kid for app deployment to Kubernetes
Pretty much what Ketch has been doing for a while now
-
Automatic generation of Manifest files.
Another option you have is to use open source projects like Ketch that can make this process more "developer friendly". Here is an example
-
Deploying Python apps on Kubernetes without complexities
Because of that, we have created an open-source project called Ketch to make life easier when deploying apps on K8s.
-
Nodejs App From Code To Kubernetes Cluster
The team is excited about enabling developers to focus on their application code instead of infrastructure. We would love it if you could show your support by starring the project on GitHub and sharing this article with your teammates.
-
Stronger abstraction for deployments
It might be worth having a look at the open source project Ketch
-
Deploying applications on Kubernetes using TypeScript
Instead, by combining the application-focused approach from Ketch with the IaC model from Pulumi, developers can have an application-focused layer they can leverage to quickly deploy their applications without getting into the underlying infrastructure details exposed by Kubernetes.
What are some alternatives?
etcd - Distributed reliable key-value store for the most critical data of a distributed system
kubevela - The Modern Application Platform.
k3s - Lightweight Kubernetes
helm - The Kubernetes Package Manager
porter - Kubernetes powered PaaS that runs in your own cloud.
porter - Porter enables you to package your application artifact, client tools, configuration and deployment logic together as an installer that you can distribute, and install with a single command.
nf-faas-docker-stack - Experimental: Getting modern OpenFaaS CE to run on Swarm
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
cdk8s - Define Kubernetes native apps and abstractions using object-oriented programming
kompose - Convert Compose to Kubernetes
kustomize - Customization of kubernetes YAML configurations