| | workerd | js-compute-runtime |
|---|---|---|
| Mentions | 41 | 8 |
| Stars | 6,438 | 200 |
| Growth | 2.2% | -0.5% |
| Activity | 9.9 | 9.5 |
| Last commit | 2 days ago | 3 days ago |
| Language | C++ | JavaScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
workerd
- Wrapping My Mind Around Node.js Runtimes
- Edge Scripting: Build and run applications at the edge
WorkerD isn't anywhere near a "cut-down version of Chromium"; it is an incredible platform with years of engineering put into it, from some of the people behind very similar and successful products (GAE and Protocol Buffers, to name a couple).
WorkerD is open source: https://github.com/cloudflare/workerd
I personally am not a fan of Deno because of how it split the Node.js ecosystem, so that is not a benefit in my eyes. Of course, Workers can run Rust.
Nothing you said here necessitates an API difference.
- Our container platform is in production. It has GPUs. Here's an early look
You can't really run Worker code somewhere else without modifications, as far as I know (unless you're using something like Hono with an adapter). And for most use cases, you're not going to be using Workers without KV, DO, etc.
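As a rough sketch of the adapter-based portability mentioned above (assuming Hono's documented API; the route, port, and adapter import are illustrative):

```js
// app.js — one Hono app built on the standard fetch/Request/Response APIs.
import { Hono } from "hono";

const app = new Hono();
app.get("/", (c) => c.text("Hello from a portable handler"));

// On Cloudflare Workers / workerd, the default export is the handler.
export default app;

// On Node.js, the same app would instead be served through an adapter:
//   import { serve } from "@hono/node-server";
//   serve({ fetch: app.fetch, port: 3000 });
```

Platform bindings such as KV or Durable Objects still have no drop-in equivalent elsewhere, which is the limitation being described.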
I've hit a bunch of issues and limitations with Wrangler over the years, e.g.:
https://github.com/cloudflare/workers-sdk/issues/2964
https://github.com/cloudflare/workerd/issues/1897
- How To Self-Host Cloudflare
Workerd is a JavaScript and WebAssembly runtime that powers Cloudflare Workers and other related technologies. You can think of it like the Node.js runtime used to execute JavaScript files. Workerd has its differences from Node.js; however, you can self-host it on any machine.
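As a rough sketch, the worker itself is an ES module with a fetch handler (the same shape Cloudflare Workers uses); workerd then loads it via a Cap'n Proto configuration file, typically started with something like `workerd serve config.capnp` (file names here are illustrative):

```js
// worker.js — a minimal ES-module worker that workerd can serve.
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    return new Response(`Hello from workerd at ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```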
- Cloudflare acquires PartyKit to allow developers to build real-time multi-user
Standards bodies only standardize things after they've been proven to work. You can't standardize a new idea before offering it to the market. It's hard enough to get just one vendor to experiment with an idea (it literally took me years to convince everyone inside Cloudflare that we should build Durable Objects). Getting N competing vendors to agree on it -- before anything has been proven in the market -- is simply not possible.
But the Durable Objects API is not complicated and there's nothing stopping competing platforms from building a compatible product if they want. Much of the implementation is open source, even. In fact, if you build an app on DO but decide you don't want to host it on Cloudflare, you can self-host it on workerd:
https://github.com/cloudflare/workerd
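To give a sense of how small that API surface is, here is a minimal sketch of a Durable Object and the Worker that routes to it (the COUNTER binding and class name are illustrative, not taken from the comment above):

```js
// A Durable Object: a class with access to per-object transactional storage.
export class Counter {
  constructor(state, env) {
    this.state = state;
  }
  async fetch(request) {
    // Read, increment, and persist a single counter value.
    let value = (await this.state.storage.get("value")) ?? 0;
    value += 1;
    await this.state.storage.put("value", value);
    return new Response(String(value));
  }
}

// The fronting Worker resolves a named instance and forwards the request.
export default {
  async fetch(request, env) {
    const id = env.COUNTER.idFromName("global-counter");
    return env.COUNTER.get(id).fetch(request);
  },
};
```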
- Python Cloudflare Workers
In any case, I welcome this initiative with open arms and look forward to all the cool apps that people will now build with this!
[1] https://pyodide.org/
[2] https://github.com/cloudflare/workerd/blob/main/docs/pyodide...
[3] https://github.com/cloudflare/workerd/pull/1875
- LLRT: A low-latency JavaScript runtime from AWS
For ref:
- https://blog.cloudflare.com/workerd-open-source-workers-runt...
- https://github.com/cloudflare/workerd
- A list of JavaScript engines, runtimes, interpreters
workerd
- WinterJS
I think this is for people who want to run their own Cloudflare Workers (sort of), and since nobody wants to run full Node for that, they want a small runtime that just executes JS/Wasm in an isolated way. But I wonder why they don't tell me how I can be sure that this is safe, or how it's safe. Surely I can't just trust them, and it explicitly mentions that it still has file I/O, so clearly there is still work I need to do to customize the isolation further. But then they don't show any info on that core use case.

That's probably because they don't really want you to run this on your own; they're selling you on running things on their edge platform, "Wasmer Edge". So that's probably why this is so light on information: the motivation isn't to get you to use this yourself, just to get you onto their hosted edge platform. But then I wonder why I wouldn't just use https://github.com/cloudflare/workerd which is also open source. Surely that is fast enough? If not, then it should show some benchmarks.
- Cloudflare workers is adopting Ada URL parser
js-compute-runtime
- What sorts of things would you consider to be “advanced” javascript concepts?
There are multiple JavaScript runtimes. SpiderMonkey is one example that has nothing to do with Node.js; see [js-compute-runtime](https://github.com/fastly/js-compute-runtime).
- [AskJS] Has anybody implemented and compiled ServiceWorker specification to a standalone executable?
- JavaScript support hits 1.0 milestone on Compute@Edge
We listened to the community feedback, filled in feature gaps, and addressed many bugs in the SDK. Not only that, we’ve also overhauled the SDK reference docs, making it easier for you to know what’s supported and how to implement the features. All the Fastly-specific features of the JS SDK now have interactive example applications in the documentation.
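For a sense of what the SDK looks like in practice, a handler follows the service-worker-style fetch listener; a minimal sketch based on the documented js-compute-runtime API:

```js
// A minimal Fastly Compute@Edge JavaScript service: answer every request
// with a plain-text response via the fetch event API.
addEventListener("fetch", (event) => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  const req = event.request;
  return new Response(`You requested ${req.url}`, {
    status: 200,
    headers: { "content-type": "text/plain" },
  });
}
```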
- Workerd: The Open Source Cloudflare Workers Runtime
- Wasmtime 1.0
These are good questions! Here are some answers from the corner of the world I know best, as a Wasmtime contributor at Fastly:
1. Spidermonkey.wasm is the basis of Fastly's JavaScript on Compute@Edge support. We have found it to be faster than QuickJS. The source code is here: https://github.com/fastly/js-compute-runtime.
2. Fastly Compute@Edge is built on wasmtime. You can develop web services for it in Rust, JS, and Go: https://developer.fastly.com/learning/compute/
3. Fastly's multi-tenant platform is closed source, but our single-tenant local development platform, which also uses wasmtime under the hood, is open source: https://github.com/fastly/viceroy. It isn't a big leap to make Viceroy multi-tenant: Wasmtime provides everything you need, and all Viceroy would have to do is dispatch on e.g. the HTTP Host header to the correct tenant. Our multi-tenant platform is closed source because it is very specialized for use on Fastly's edge, not because the multi-tenant aspect is special.
- Fastly Compute Edge JavaScript Runtime
- Debunking Cloudflare’s recent performance tests
By the way, for what it's worth, their JavaScript-to-Wasm toolchain is open source:
- https://github.com/fastly/js-compute-runtime
- https://github.com/tschneidereit/spidermonkey-wasi-embedding
And even though it is slower than Node.js, it is still plenty fast (no matter that it is not as fast as they want). By the way, its startup is faster than Node's (maybe better PGO might help).
What are some alternatives?
lagon - Deploy Serverless Functions at the Edge. Current status: Alpha
quickjs-rs - Rust wrapper for the quickjs Javascript engine.
cloudflare-docs - Cloudflare’s documentation
javy - JS to WebAssembly toolchain
webusb - Connecting hardware to the web.
spidermonkey-wasi-embedding
windmill - Open-source developer platform to power your entire infra and turn scripts into webhooks, workflows and UIs. Fastest workflow engine (13x vs Airflow). Open-source alternative to Retool and Temporal.
landlord - Provides the ability to run multiple JVM based applications on the one JVM
fauna-schema-migrate - The Fauna Schema Migrate tool helps you set up Fauna resources as code and perform schema migrations.
llrt - LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications.
now - Node on Web