| | fn | docs |
|---|---|---|
| Mentions | 11 | 29 |
| Stars | 5,668 | 4,361 |
| Growth | 0.6% | 1.2% |
| Activity | 2.6 | 9.3 |
| Latest commit | 9 months ago | 3 days ago |
| Language | Go | HTML |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fn
- I asked 100 devs why they aren't shipping faster. Here's what I learned
Not always. Check out how Oracle Cloud does it. It's a hosted version of an open source stack called fn, which you can run fully locally via a simple CLI tool.
https://fnproject.io
- XFaaS: Hyperscale and Low Cost Serverless Functions at Meta
- GraalOS: Containerless instant-on cloud functions for Java
There's not much info out there, but I'll describe what I gathered from reading the blog posts and searching around.
"Serverless" stuff like the (proprietary) Lambda or (open source, https://fnproject.io/) OCI Cloud Functions are based on a few ideas:
1. Use Linux syscalls + x86 as the target ABI/ISA. Programs are therefore Docker containers, and because the kernel is a bit too much C to trust, perhaps also custom virtual machines for sandboxing.
2. Because downloading a full-blown Linux userspace and starting it up inside a new virtual machine can be slow, a variety of hacks get layered on top to make a start/stop model look like an always-on service: always-on instances (which means serverless now has servers again), Docker layer caching and other tricks.
GraalOS asks the following question: what if we toss Linux and x86 as the API? Is there a way to do server-side computing better?
This question just leads to more questions:
• What do we replace it with?
• What are the benefits?
GraalOS starts by saying: let's replace Linux/native code with the Java specifications instead. This gives you a relatively large, consistent, yet open source surface area for all the server-side basics you need, like IO, threading and memory management. You can then layer Truffle (from the same team) on top to get other languages such as JavaScript, Python, Ruby, WASM, or Rust and C++ (via LLVM bitcode). All of these run on top of the JVM rather than Linux.
In this model the JVM isn't an operating system, exactly, but it might as well be because you don't have access to the underlying kernel at all. There's no way to make system calls in this model that aren't mediated by the standard libraries of your language. And this is enforced via two very different sandboxing technologies:
1. The server controls the compiler.
2. Hardware memory protection keys (Intel's MPK; AMD has an equivalent).
Controlling the compiler is how you implement software-level sandboxing. Because all code running on the CPU is produced by your own compiler, which the developer cannot choose (as in a browser), you can impose whatever policies you like. The most obvious are no syscalls, no unsafe memory accesses and so on.
But you may want more than that. For example, how do you stop Spectre attacks extracting secrets from the address space? The answer can be the CPU's "Memory Protection Keys". This is a very fast, lightweight way to do a kind of in-process context switch: you associate page ranges with a "key", then put that key into a special register to control which memory ranges are currently accessible. It's an additional set of permissions on top of what the kernel has set up. Because you control the compiler, you can ensure that only system code can alter the current memory protection key, and that lets you compile and execute untrusted code without worrying about speculation attacks.
So that's the theory; what are the benefits?
The first benefit is that you don't need containers anymore. GraalVM has the "native image" tool that produces standalone native Linux executables from JVM apps, like Go does. And those JVM apps can be interpreters or JIT compilers for other Truffle languages as well. So you no longer need to drag around half an Ubuntu install for each app you run. Programs can be moved between servers much faster because there's less to copy, and Oracle Cloud reportedly has excellent networking, so new instances can be spun up much faster than before. Native images also start ~instantly because they are fully native code and carry a persisted heap snapshot computed at build time, so they effectively start already initialized. Finally, in some cases they can do snapshotting post-startup too, for fast suspend/resume, and the compiler knows how to use memory protection keys.
So with all this done, you can produce a server-side infrastructure in which programs are just shared libraries loaded and unloaded into pre-warmed HTTP servers, yet still isolated and protected from each other. And because everything is so much faster, you can genuinely start and stop these servers on demand, per request. There's no need to charge users for idle minutes just to avoid a shutdown/startup cycle, and in turn a lot of complexity just boils away.
BTW, I just checked and it turns out that Oracle's "free cloud" deal applies to functions as well. You get something like 2 million free activations a month, and 400k "gigabyte memory-seconds". If that pricing holds, a lot of types of JVM servers will be completely free to host: a month is ~2.6M seconds, so even a single 128 MB instance running continuously uses only ~325k GB-seconds, and native image also reduces memory consumption a lot.
At least that's my guess as to what's going on. But it's not launched yet, just announced. I guess we'll have to see what it's like for real when it's available.
- Oracle Cloud is having a major outage
- My very first Hackathon and my first Dev.to post
Functions: Scalable, multi-tenant serverless functions based on Fn
- Any self-hosted equivalent to AWS Lambdas?
OpenFaaS or the Fn Project are options
- Self-hosted AWS Lambda / FaaS alternative
- Java Serverless on Steroids with fn+GraalVM Hands-On
Install fn (refer to https://fnproject.io/ for latest instructions)
- Don't start with microservices – monoliths are your friend
I disagree, microservices are an architectural concept related to the software, not to the infrastructure.
Whether you are using containers or VPS or serverless or bare metal for your infrastructure, that's completely unrelated to the concept of microservices: you can deploy either a monolith or microservices in any of the above.
As an example you can deploy a monolith on Lambda[1] or you can deploy microservices on bare metal using one of the several self managed serverless engines available[2].
[1] see e.g. https://claudiajs.com/tutorials/serverless-express.html or https://blog.logrocket.com/zappa-and-aws-lambda-for-serverle...
[2] see e.g. https://fnproject.io/ and https://knative.dev/
- Serverless functions with FN project
Still, for today I would like to talk to you about the Fn Project, an open-source alternative.
docs
- Knative Serverless in 2024
I could provide a big overview of how Knative works, but in this little tutorial I want to show you the basic installation and configuration and how to deploy your first Knative service.
- The 2024 Web Hosting Report
Serverless functions are now offered by many cloud providers, and the open source community adds options like OpenFaaS, Knative, Apache OpenWhisk and more, which run in environments ranging from a single server all the way up to globally replicated private clusters.
- XFaaS: Hyperscale and Low Cost Serverless Functions at Meta
- Serverless Framework alternatives for data engineering with AWS Lambda?
- Best serverless framework for migrating microservices on Kubernetes in an on-premises open-source environment?
- I am Kailash Nadh, hobbyist developer, CTO at Zerodha. AMA.
1 - https://knative.dev/docs/
- Serverless Self-Hosted Kubernetes (Small Team)
The usual product I see for serverless on Kubernetes is Knative: https://knative.dev/docs/
- Vendor Independent Serverless for Open Source
Check out Knative.
- I need a custom resource somewhere between a job and cron job -- does it exist?
AWS Lambda, Google Cloud Functions, https://knative.dev/docs/
- Any ideas for how to complete 100 vCPU-seconds worth of tasks in less than 3 seconds?
What are some alternatives?
OpenFaaS - Serverless Functions Made Simple
CSharpFunctionalExtensions - Functional extensions for C#
OpenWhisk - Apache OpenWhisk is an open source serverless cloud platform
protondb_faq - FAQ for Protondb.com
fission - Fast and Simple Serverless Functions for Kubernetes
dxvk-async
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
nuclio - High-Performance Serverless event and data processing platform
xplorer - Xplorer, a customizable, modern file manager
vinyl-json - Automatic json instances for Data.Vinyl
faasm - High-performance stateful serverless runtime based on WebAssembly