| | gvisor | firecracker |
|---|---|---|
| Mentions | 79 | 81 |
| Stars | 16,702 | 28,630 |
| Growth | 1.2% | 1.5% |
| Activity | 9.8 | 9.9 |
| Latest commit | 7 days ago | 1 day ago |
| Language | Go | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gvisor
-
Building untrusted container images safely at scale
I recommend gvisor: https://gvisor.dev/
If you want to learn more about this subject the keyword you’re looking for is “multitenancy”
Docker’s container runtime is not really a safe way to run untrusted code. I don’t recommend relying on it.
Also, why would an isolated VM prevent fetch? You can give your users NAT'd addresses so they can make outbound network calls. I am putting the finishing touches on a remote IDE that does exactly that.
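For the NAT part: the usual recipe is a tap device on the host plus a masquerade rule, roughly like this (run as root; interface names and addresses are illustrative, with eth0 standing in for the host's uplink):
$ ip tuntap add tap0 mode tap
$ ip addr add 172.16.0.1/24 dev tap0
$ ip link set tap0 up
$ sysctl -w net.ipv4.ip_forward=1
$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT
$ iptables -A FORWARD -i eth0 -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# the guest uses 172.16.0.1 as its gateway, so outbound calls (fetch included) work
# without exposing the guest directly on the network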
-
Building an Online Code Compiler: A Complete Guide
gVisor - Application kernel for containers
-
Kubernetes Without Docker: Why Container Runtimes Are Changing the Game in 2025
gVisor: Sandboxed Container Runtime by Google. For when your security team actually audits things.
-
Reverse Engineering OpenAI Code Execution to make it run C and JavaScript
> why would they be running such an old Linux?
They didn't.
OP misunderstood what gVisor is, and thought gVisor's uname() return [1] was from the actual kernel. It's not. That's the whole point of gVisor. You don't get to talk to the real kernel.
[1] https://github.com/google/gvisor/blob/c68fb3199281d6f8fe02c7...
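To make that concrete, a quick sketch, assuming runsc is already registered as a Docker runtime (the exact version string depends on the gVisor release):
$ docker run --rm --runtime=runsc alpine uname -r
4.4.0    # a fixed version reported by gVisor's Sentry (the value in [1]), not the host kernel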
-
WASM Will Replace Containers
You can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into a sort of VM via KVM, but it does not supply a kernel; it talks to a fake one instead. This means the security boundary is almost as strong as a VM's, but mostly everything will work like in a normal container.
E.g. here I can read the host filesystem even though uname says weird things about the kernel the container is running in:
$ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm /bin/bash
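For Docker instead of podman, this is a one-time daemon config change; a minimal sketch along the lines of gVisor's Docker quick-start (the runsc path is wherever you installed it):
$ cat /etc/docker/daemon.json
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
$ sudo systemctl restart docker
$ docker run --rm --runtime=runsc alpine dmesg | head    # gVisor prints its own boot log, not the host's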
-
Lies we tell ourselves to keep using Golang
To be pedantic for a moment...
> You can't use Go to write a kernel ...
Not a production kernel, but MIT did use Go to "study the performance trade-offs of using a high-level language with garbage collection to implement a kernel" [1]
There is also gVisor [2], which implements, as best as I can describe it, a kernel in user space. Its intent is to intercept syscalls made in containers and redirect their execution into a sandbox.
> ... program a microcontroller ...
I'm not sure if one would classify this as a microcontroller, but the USB armory project did write, IIRC, a Go-compliant runtime for bare-metal ARM and RISC-V [3]
[1] https://github.com/mit-pdos/biscuit
[2] https://gvisor.dev/
[3] https://github.com/usbarmory/tamago
-
Comparing 3 Docker container runtimes - Runc, gVisor and Kata Containers
Although the documentation also mentions "youki", it is described as a "drop-in replacement" for the default runtime that basically does the same thing, so let's stick with runc. The second runtime will be the Kata runtime from Kata Containers, since it runs small virtual machines, which is good for showing how differently it uses CPU and memory; this also adds a higher level of isolation, with some downsides as well. The third runtime will be runsc from gVisor, a perfect third option for seeing how we can run containers and still have somewhat more secure isolation. I will show how we can recognize the differences by running commands from the isolated environments and from the host.
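As a taste of the kind of difference such commands reveal, a hedged host-side sketch (runtime and process names vary by installation; Kata may show up as cloud-hypervisor instead of QEMU):
$ docker run -d --rm alpine sleep 1000                           # runc
$ docker run -d --rm --runtime=runsc alpine sleep 1000           # gVisor
$ docker run -d --rm --runtime=kata-runtime alpine sleep 1000    # Kata
$ ps -eo comm | grep -E 'sleep|runsc|qemu'
sleep               # runc: the workload is an ordinary host process
runsc-sandbox       # gVisor: the host mostly sees the Sentry, not the workload itself
qemu-system-x86     # Kata: the workload lives inside a small VM
# (output trimmed; runsc also spawns a gofer process, Kata a shim)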
-
GVisor: Linux-Compatible Sandbox
I find the README of the repo much better to quickly understand what this software is and isn't.
https://github.com/google/gvisor
-
Unfashionably secure: why we use isolated VMs
If you think about it, virtualization is just a narrowing of the application-kernel interface. In a standard setting the application has a wide kernel interface available to it, anywhere from dozens of syscalls (e.g. under a seccomp filter) to hundreds. A vulnerability in any one of them could result in complete system compromise.
With virtualization the attack surface is narrowed to pretty much just the virtualization interface.
The problem with current virtualization (or more specifically, with the VMMs) is that it can be cumbersome; memory management, for example, is a serious annoyance. The kernel is built to hog memory for cache and the like, but you don't want the guest doing that, since you want to overcommit memory: guests will rarely use 100% of what is given to them (especially when the guest is just a jailed singular application). Workarounds such as free page reporting and drop_caches hacks exist.
I would expect to eventually see high-performance custom kernels for application jails. For example, gVisor [1] acts as a syscall interceptor (and can use KVM too!) paired with a custom kernel; or picture a modified Linux kernel with the guest's pain points patched.
[1] <https://gvisor.dev/>
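One of the drop_caches hacks mentioned above, sketched: inside the guest, periodically give back clean page cache so ballooning or free page reporting has pages to hand back to the host (run as root in the guest; the value 3 drops page cache plus dentries and inodes):
$ sync                                      # flush dirty pages first
$ echo 3 > /proc/sys/vm/drop_caches         # release clean page cache, dentries and inodes
$ grep -E 'MemFree|^Cached' /proc/meminfo   # freed memory is now available for the balloon / free page reporting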
- Syd: perhaps the most sophisticated sandbox for Linux
firecracker
- Entropy for Clones
- Firecracker Entropy for VM Clones
-
Show HN: Ephemeral VMs in 1 Microsecond
Well, Firecracker has a jailer process: https://github.com/firecracker-microvm/firecracker/blob/main...
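Roughly how it is invoked (values are illustrative; everything after -- is handed to the firecracker binary, which the jailer re-executes inside a chroot with dropped privileges):
$ jailer --id demo-vm \
         --exec-file /usr/bin/firecracker \
         --uid 1000 --gid 1000 \
         --chroot-base-dir /srv/jailer \
         -- --api-sock /run/firecracker.socket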
- Show HN: Prisma Postgres. Runs on bare metal and unikernels
-
Show HN: Desktop Sandbox for Secure Cloud Computer User
Hello, I'm the CEO of the company that built this - E2B [0]. We're building infrastructure for AI code interpreting. Companies like Perplexity are using us.
We're using Firecracker [1] to power our sandboxes. Funnily enough, we had this repo sitting on our GitHub for about 6 months. We originally made this for one of our customers because they were running evals in a desktop-like environment with a GUI for their model.
You can use PyAutoGUI [2] to control the whole environment programmatically.
The desktop-like environment is based on Linux and Xfce [3] at the moment. We chose Xfce because it's a fast and lightweight environment that's also popular and actively supported. However, this Sandbox template is fully customizable and you can create your own desktop environment.
Let me know if you have any questions!
[0] https://e2b.dev
[1] https://github.com/firecracker-microvm/firecracker
[2] https://pyautogui.readthedocs.io/
[3] https://www.xfce.org/
-
I'm Funding Ladybird Because I Can't Fund Firefox
What he said is true: AWS uses Rust heavily in some of its core systems https://aws.amazon.com/blogs/devops/why-aws-is-the-best-plac....
Some of the open source projects you can find are AWS Firecracker https://github.com/firecracker-microvm/firecracker and Cloudflare Pingora https://github.com/cloudflare/pingora
-
Lambda Internals: Why AWS Lambda Will Not Help With Machine Learning
This architecture leverages microVMs for rapid scaling and high-density workloads. But does it work for GPUs? The answer is no. You can look at the old 2019 GitHub issue and its comments to get the bigger picture of why that is.
-
Show HN: Add AI code interpreter to any LLM via SDK
Hi, I'm the CEO of the company that built this SDK.
We're a company called E2B [0]. We're building, and open-sourcing [1], secure environments for running untrusted AI-generated code and AI agents. We call these environments sandboxes, and they are built on top of a microVM technology called Firecracker [2].
You can think of us as giving small cloud computers to LLMs.
We recently created a dedicated SDK for building custom code interpreters in Python or JS/TS. We saw this need after a lot of our users had been adding code execution capabilities to their AI apps with our core SDK [3]. These use cases were often centered around AI data analysis, so code interpreter-like behavior made sense.
The way our code interpreter SDK works is by spawning an E2B sandbox with a Jupyter server. We then communicate with this Jupyter server through the Jupyter kernel messaging protocol [4] (a rough sketch follows after the links below).
We don't do any wrapping around the LLM, any prompting, or any agent-like framework. We leave all of that to users. We're really just a boring code execution layer that sits at the bottom, built specifically for future software that will be building other software. We work with any LLM. Here's how we added a code interpreter to Claude [5].
Our long-term plan is to build an automated AWS for AI apps and agents.
Happy to answer any questions and hear feedback!
[0] https://e2b.dev/
[1] https://github.com/e2b-dev
[2] https://github.com/firecracker-microvm/firecracker
[3] https://e2b.dev/docs
[4] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
[5] https://github.com/e2b-dev/e2b-cookbook/blob/main/examples/c...
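For a sense of what [4] looks like in practice, a rough sketch, assuming a Jupyter server is reachable inside the sandbox on port 8888 with a known token: a kernel is created over the REST API, and the actual code execution then flows over that kernel's websocket channel.
$ curl -s -X POST 'http://localhost:8888/api/kernels' \
       -H "Authorization: token $JUPYTER_TOKEN"
{"id": "<kernel-id>", "name": "python3", "execution_state": "starting", ...}
# code is then sent as execute_request messages over
# ws://localhost:8888/api/kernels/<kernel-id>/channels, and results come back as
# execute_reply and stream messages, per the protocol linked in [4]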
-
Fly.io Has GPUs Now
As far as I know, Fly uses Firecracker for their VMs. I've been following Firecracker for a while now (even using it in a project), and it doesn't support GPUs out of the box (and there are no plans to support them [1]).
I'm curious how Fly figured out their own GPU support with Firecracker. In the past they had some very detailed technical posts on how they achieved certain things, so I'm hoping we'll see one on their GPU support in the future!
[1]: https://github.com/firecracker-microvm/firecracker/issues/11...
-
MotorOS: a Rust-first operating system for x64 VMs
I pass through a GPU and USB hub to a VM running on a machine in the garage. An optical video cable and a network-compatible USB extender bring the interface to a different room, making it my primary “desktop” computer (with an outdated laptop as a backup device). Doesn’t get more silent and cool than this. Another VM on the garage machine gets a bunch of hard drives passed through to it.
That said, hardware passthrough/VFIO is likely out of the current realistic scope for this project. VM boot times can be optimized if you never look for hardware to initialize in the first place. Though they are still likely initializing a network interface of some sort.
“MicroVM” seems to be a term used when as much as possible is stripped from a VM, such as with https://github.com/firecracker-microvm/firecracker
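For a feel of how little is left, booting a Firecracker microVM is just a handful of calls against its Unix-socket API; a sketch following the project's getting-started flow (socket, kernel and rootfs paths are illustrative):
$ firecracker --api-sock /tmp/fc.socket &
$ curl --unix-socket /tmp/fc.socket -X PUT 'http://localhost/boot-source' \
       -H 'Content-Type: application/json' \
       -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'
$ curl --unix-socket /tmp/fc.socket -X PUT 'http://localhost/drives/rootfs' \
       -H 'Content-Type: application/json' \
       -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'
$ curl --unix-socket /tmp/fc.socket -X PUT 'http://localhost/actions' \
       -H 'Content-Type: application/json' \
       -d '{"action_type": "InstanceStart"}'
# no BIOS, no emulated legacy devices: just a kernel, a block device and a serial console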
What are some alternatives?
sysbox - An open-source, next-generation "runc" that empowers rootless containers to run workloads such as Systemd, Docker, Kubernetes, just like VMs.
cloud-hypervisor - A Virtual Machine Monitor for modern Cloud workloads. Features include CPU, memory and device hotplug, support for running Windows and Linux guests, device offload with vhost-user and a minimal compact footprint. Written in Rust with a strong focus on security.
kata-containers - Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. https://katacontainers.io/
bottlerocket - An operating system designed for hosting containers
podman - Podman: A tool for managing OCI containers and pods.
libkrun - A dynamic library providing Virtualization-based process isolation capabilities