faasd vs docker-swarm-autoscaler

| | faasd | docker-swarm-autoscaler |
|---|---|---|
| Mentions | 20 | 3 |
| Stars | 2,857 | 70 |
| Growth | 0.9% | - |
| Activity | 6.8 | 10.0 |
| Latest commit | 14 days ago | over 4 years ago |
| Language | Go | Ruby |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
faasd
-
Running auto-scaling docker services
If you don't want to start with Kubernetes, faasd (https://github.com/openfaas/faasd) might be worth a look.
-
How to set up a containerized Python environment? Function as a Service or an alternative solution for a Python execution environment.
I found OpenFaaS and its little sibling faasd. To my understanding, they expose single functions through a REST API for easy interfacing. It sounds nice, but OpenFaaS is overkill and I had trouble setting up faasd.
-
A Deep Dive into Golang for OpenFaaS Functions
Hope you find this interesting / useful - whether you're using AWS Lambda, OpenFaaS, or just plain old Go binaries. If you're wondering whether OpenFaaS requires K8s, you also have faasd as an option.
-
Any easy-to-use self-hosted cloud function service?
Have you seen faasd from OpenFaaS? It's easier to set up and doesn't require a full-blown Kubernetes cluster: https://github.com/openfaas/faasd
-
Getting Started with Faasd
I'm the original creator of faasd, and just found this post on the Internet. What I liked was how the author discovered faasd and found some unique value in it.
I wanted to share it with a broader audience here. His bootstrap is a little convoluted; if you check out GitHub, we have a simple installer script and an eBook as a reference manual.
https://github.com/openfaas/faasd
-
[INFRA PART 1] Serverless Highscore Go API with Faasd and CockroachDB
First of all, you need faas-cli on your local client machine. Get the binary and add it to your PATH. On a Linux or Mac machine, moving the binary into "/usr/local/bin" will work. On Windows, set the environment variable via Control Panel > System and Security > System > Advanced System Settings (single-binary-faas-cli).
-
Self-hosted Vercel alternative?
Have you considered OpenFaaS - https://github.com/openfaas/faasd
-
Azure function alternative
I am in the same boat; after some digging around I'm considering OpenFaaS, check out - https://github.com/openfaas/faasd
-
Show HN: faasd (0.13.0) upgraded for containerd v1.5.4
-
Looking for opinions on a solid open source FaaS that supports Go.
If you use https://github.com/openfaas/faasd, you can skip the whole Kubernetes setup as well.
docker-swarm-autoscaler
-
Running auto-scaling docker services
If you want to have some sort of auto-scaling, you will need to monitor to some extent, though, as this will be the signal for scaling up/down. I noticed that https://github.com/jcwimer/docker-swarm-autoscaler already includes the relevant Prometheus configs required just for scaling by CPU.
-
Acorn: A lightweight PaaS for Kubernetes, from Rancher founders
Nomad, Docker Swarm and other solutions support most of these out of the box; Kubernetes is just the most popular and most flexible (with which comes a lot of complexity) solution, it seems.
For example, even something as basic as Docker Swarm will get you a lot of the way there.
> How do you implement healthcheck?
Supported by Docker: https://docs.docker.com/engine/reference/builder/#healthchec...
> Does the loadbalancer know how the healthcheck is implemented?
When the health checks pass in accordance with the above config, the container state will change from "starting" to "healthy" and traffic can be routed to it. Until then, you can have a web server show a different page, or implement circuit breaking.
> How do you determine it's time to scale?
Docker Swarm doesn't have an abstraction for autoscaling, though there are a few community projects. One can feasibly even write something like that themselves in an evening: https://github.com/jcwimer/docker-swarm-autoscaler
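To make the "write it in an evening" claim concrete, here is a hedged Go sketch of the core decision such a tool makes; the thresholds and names are invented, and a real loop would query Prometheus for the CPU average and then call `docker service scale` (or the Docker API) with the result.

```go
package main

// Back-of-the-envelope version of what a Swarm autoscaler does each
// tick: read average CPU for a service, pick a new replica count.
// decideReplicas is the whole "algorithm"; the 85%/25% thresholds are
// assumptions for illustration, not values from any real project.

import "fmt"

// decideReplicas returns the new replica count for a service given its
// average CPU utilisation (0.0-1.0), clamped to [min, max].
func decideReplicas(avgCPU float64, current, min, max int) int {
	next := current
	switch {
	case avgCPU > 0.85: // overloaded: add a replica
		next = current + 1
	case avgCPU < 0.25: // mostly idle: remove one
		next = current - 1
	}
	if next < min {
		next = min
	}
	if next > max {
		next = max
	}
	return next
}

func main() {
	fmt.Println(decideReplicas(0.95, 3, 1, 10)) // 4: scale up
	fmt.Println(decideReplicas(0.10, 3, 1, 10)) // 2: scale down
	fmt.Println(decideReplicas(0.50, 3, 1, 10)) // 3: hold steady
}
```

Everything else in such a tool is plumbing: a timer loop, a metrics query, and the scale call.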
That said, I mostly ignore this concern because I've yet to see a workload that needs to dynamically scale in any of the private or government projects that I've worked on. Most of the time people want predictable infrastructure and the ability to deal with backpressure (e.g. a message queue), though that's different with startups.
> How do you implement always-on-process? service unit, initd, cron?
The service abstraction comes out of the box: https://docs.docker.com/engine/swarm/how-swarm-mode-works/se...
You might also want to decide how to best schedule it: wherever available, on a particular node (hostname/tag/...) or on all nodes, which is actually what Portainer agent does! Example: https://docs.portainer.io/start/install/server/swarm/linux
> How do you export the logs?
Docker supports multiple logging drivers: https://docs.docker.com/config/containers/logging/configure/
> How do you inject configs? /etc/environment, profile.d, systemd config, /etc/bestestapp/config?
Docker and Compose/Swarm support environment variables: https://docs.docker.com/compose/compose-file/#environment
If you need config files, you can also use bind mounts: https://docs.docker.com/storage/bind-mounts/
> What about secrets?
Docker supports secrets out of the box: https://docs.docker.com/engine/swarm/secrets/
> Service discovery? Is unbound/bind9?
Docker Swarm has built-in DNS-based service discovery, and even allows for multiple separate networks: https://docs.docker.com/engine/swarm/networking/
> These items are best done in a standard way.
Agreed! Though I'd say that presenting the only two options as "running everything on *nix directly" and "running everything in Kubernetes" is a false dichotomy! The former can work but can also lead to non-standard and error-prone environments with a horrible waste of human resources, whereas the latter can work but can also lead to overcomplicated and hard-to-debug environments with a horrible waste of human resources.
The best path for many folks probably lies somewhere in the middle, with Nomad/Swarm/Compose/Docker, regardless of what others might claim. The best path for folks interested in a DevOps career is probably running on cloud-managed Kubernetes clusters and just using their APIs to great effect, not caring about how expensive that is or how easy it would be to self-host on-prem.
What are some alternatives?
OpenFaaS - OpenFaaS - Serverless Functions Made Simple
etcd - Distributed reliable key-value store for the most critical data of a distributed system
nuclio - High-Performance Serverless event and data processing platform
k3s - Lightweight Kubernetes
fission - Fast and Simple Serverless Functions for Kubernetes
porter - Kubernetes powered PaaS that runs in your own cloud.
telegraf-webhooks-plex - An external Telegraf Plugin for listening to Plex Webhooks.
nf-faas-docker-stack - Experimental: Getting modern OpenFaaS CE to run on Swarm
hetzner-terraform-faasd - Getting started easily with faasd on top of debian OS for Hetzner Cloud
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
up - Deploy infinitely scalable serverless apps, apis, and sites in seconds to AWS.
kompose - Convert Compose to Kubernetes