fn vs OpenFaaS

| | fn | OpenFaaS |
|---|---|---|
| Mentions | 11 | 56 |
| Stars | 5,650 | 24,515 |
| Growth | 0.8% | 0.8% |
| Activity | 2.6 | 6.8 |
| Latest commit | 8 months ago | 13 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fn
- I asked 100 devs why they aren't shipping faster. Here's what I learned
Not always. Check out how Oracle Cloud does it. It's a hosted version of an open source stack called fn, which you can run fully locally via a simple CLI tool.
https://fnproject.io
- XFaaS: Hyperscale and Low Cost Serverless Functions at Meta
- GraalOS: Containerless instant-on cloud functions for Java
There's not much info out there but I'll describe what I got from reading the blog posts and searching for it.
"Serverless" stuff like the (proprietary) Lambda or (open source, https://fnproject.io/) OCI Cloud Functions are based on a few ideas:
1. Use Linux syscalls + x86 as the target ABI/ISA. Programs are thus Docker containers, and because the kernel is a bit too much C to trust, maybe also custom virtual machines for sandboxing.
2. Because downloading a full-blown Linux userspace and starting it up inside a new virtual machine can be slow, a variety of hacks get layered on top to make a start/stop model look like an always-on service: for example, always-on instances (which means serverless now has servers again), Docker layer caching, and other tricks.
GraalOS asks the following question: what if we toss Linux and x86 as the API? Is there a way to do server-side computing better?
This question just leads to more questions:
• What do we replace it with?
• What are the benefits?
GraalOS starts by saying, let's replace Linux/native code with the Java specifications instead. This gives you a relatively large and consistent yet open source surface area for doing all the server-side basics you need like IO, threading, memory management and so on. You can then layer Truffle (from the same team) on top to get other languages like JavaScript, Python, Ruby, WASM, Rust or C++ (via LLVM bitcode) and so on. All of these running on top of the JVM rather than Linux.
In this model the JVM isn't an operating system, exactly, but it might as well be because you don't have access to the underlying kernel at all. There's no way to make system calls in this model that aren't mediated by the standard libraries of your language. And this is enforced via two very different sandboxing technologies:
1. The server controls the compiler.
2. Intel's Memory Protection Keys (MPK) and the AMD equivalent.
Controlling the compiler is how you implement software-level sandboxing. Because all code running on the CPU is produced by your own compiler, which the developer cannot choose (as in a browser), you can impose whatever policies you like. The most obvious are no syscalls, no unsafe memory accesses and so on. But you may want more than that: for example, how do you stop Spectre attacks from extracting secrets from the address space?
The answer can be the CPU's "Memory Protection Keys". This is a very fast, lightweight way to do a kind of in-process context switch. You associate page ranges with a "key" and then write that key into a special register to control which memory ranges are currently accessible. It's an additional set of permissions on top of what the kernel has set up. Because you control the compiler, you can ensure that only system code can alter the current memory protection key, which lets you compile and execute untrusted code without worrying about speculation attacks.
So that's the theory; what are the benefits?
The first benefit is that you don't need containers anymore. GraalVM has the "native image" tool that produces standalone native Linux executables from JVM apps, like Go does. And those JVM apps can be interpreters or JIT compilers for other Truffle languages as well. So you no longer need to drag around half an Ubuntu install for each app you run. Programs can be moved between servers much faster because there's less to copy, and Oracle Cloud reportedly has excellent networking, so new instances can be spun up much faster than before. Native images also start near-instantly because they are fully native code and carry a persisted heap snapshot computed as part of the build, so they effectively start already initialized. Finally, in some cases they can snapshot post-startup too, for fast suspend/resume, and the compiler knows how to emit the memory-protection-key code.
So with all this done, you can produce a server side infrastructure in which programs are just shared libraries loaded and unloaded into pre-warmed HTTP servers, yet still isolated and protected from each other, and because things are way faster you can actually just start and stop these servers genuinely on demand on a per-request basis. There's no need to charge users for idle minutes as you try to avoid a shutdown/startup cycle. In turn that means a lot of complexity just boils away.
BTW, I just checked and it turns out that Oracle's "free cloud" deal applies to functions as well. You get like 2 million free activations a month or something, and 400k "gigabyte memory-seconds". So if that pricing is sustained with this then it means a lot of types of JVM servers will just be completely free to host, because native image also reduces memory consumption a lot.
At least that's my guess as to what's going on. But it's not launched yet, just announced. I guess we'll have to see what it's like for real when it's available.
- Oracle Cloud is having a major outage
- My very first Hackathon and my first Dev.to post
Functions: Scalable, multi-tenant serverless functions based on Fn
- Any self-hosted equivalent to AWS Lambdas?
OpenFaaS or the Fn Project are options
- Self-hosted AWS Lambda / FaaS alternative
- Java Serverless on Steroids with fn+GraalVM Hands-On
Install fn (refer to https://fnproject.io/ for latest instructions)
- Don't start with microservices – monoliths are your friend
I disagree, microservices are an architectural concept related to the software, not to the infrastructure.
Whether you are using containers or VPS or serverless or bare metal for your infrastructure, that's completely unrelated to the concept of microservices: you can deploy either a monolith or microservices in any of the above.
As an example you can deploy a monolith on Lambda[1] or you can deploy microservices on bare metal using one of the several self managed serverless engines available[2].
[1] see e.g. https://claudiajs.com/tutorials/serverless-express.html or https://blog.logrocket.com/zappa-and-aws-lambda-for-serverle...
[2] see e.g. https://fnproject.io/ and https://knative.dev/
- Serverless functions with FN project
Still, for today I would like to talk to you about the FN project, an open-source alternative.
OpenFaaS
- Serverless Functions, Made Simple
- The 2024 Web Hosting Report
Serverless functions are now offered by many cloud providers, alongside open-source options like OpenFaaS, Knative, and Apache OpenWhisk that run in environments ranging from a single server all the way up to globally replicated private clusters.
- ⚡⚡ Level Up Your Cloud Experience with These 7 Open Source Projects 🌩️
OpenFaaS
- Spinning up docker containers from http requests
Did you consider running knative or openfaas? https://github.com/openfaas/faas
- .NET 8 Standalone 50% Smaller On Linux
Does anyone know of other alternatives to Azure Functions for DIY hosting? (e.g. OpenFaaS - https://www.openfaas.com/ )
- A question about pod creation with requests
- What exists on the spectrum between a cron job and airflow?
Maybe OpenFaaS with grafana and slack notifications for non-200 responses?
- I need a custom resource somewhere between a job and cron job -- does it exist?
OpenFaaS - https://www.openfaas.com
- Hosting strategy suggestions
By the way, if your organization is leveraging EKS as a platform and your DevOps team is willing to enable this operator, there's an exciting tool called OpenFaaS. Essentially, it enables you to host Lambda-style functions on your own infrastructure instead of relying on the public cloud provider.
- Questions for Heroku-like Project
This is where I see K8S coming in – teachers can provide dev deployments that are setup for students to learn. Teachers can also provide containers that run automated tests against the student containers for assessment! Plus, we can smooth over some of the git workflow stuff for the ripest of beginners; we can integrate with github to sync their work on our platform to repositories on their github account, so that they can really take ownership of the work they do on the platform. Last, students can graduate their work from development into production very easily, since we can take the base images + student diffs, build a new "prod" image for the student. We can run students' prod work on "serverless" K8S frameworks like fission or OpenFaas to be able to host many low-traffic "production" apps at the same time.
What are some alternatives?
OpenWhisk - Apache OpenWhisk is an open source serverless cloud platform
IronFunctions - The serverless microservices platform by Iron.io
fission - Fast and Simple Serverless Functions for Kubernetes
LocalStack - 💻 A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
dapr - Dapr is a portable, event-driven runtime for building distributed applications across cloud and edge.
nuclio - High-Performance Serverless event and data processing platform
vinyl-json - Automatic json instances for Data.Vinyl
Appwrite - Build like a team of hundreds
faas-netes - Serverless Functions For Kubernetes