llrt vs workerd

| | llrt | workerd |
| --- | --- | --- |
| Mentions | 14 | 41 |
| Stars | 8,167 | 6,327 |
| Growth | 0.9% | 1.6% |
| Activity | 9.7 | 9.9 |
| Last commit | 4 days ago | 1 day ago |
| Language | JavaScript | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llrt
-
Show HN: Self-Host Next.js in Production
Any plans to add support for https://github.com/awslabs/llrt?
It would also be nice to have a V8/Deno/Bun-based edge hosting option that supports the Next.js edge and middleware code splitting. That's the missing piece for most homebrew "edge" setups. Production CDNs like Cloudflare and Supabase all offer this.
-
Everything Suffers from Cold Starts
- Vlad Ionescu: Scaling containers on AWS in 2022
- GitHub: awslabs/llrt
- AWS Documentation: Understanding the Lambda execution environment
- Amazon Science: How AWS's Firecracker virtual machines work
- Lumigo
- GitHub: MiddyJS
-
Porffor: A from-scratch experimental ahead-of-time JS engine
It's refreshing to see all the various JS engines that are out there for various use cases.
I have been working on providing quickjs with a more Node-compatible API through llrt [1] for embedding into applications for plugins.
[1] https://github.com/awslabs/llrt
-
[Lab] AWS Lambda LLRT vs Node.js
AWS has open-sourced LLRT (Low Latency Runtime), an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient serverless applications.
-
Unlocking Next-Gen Serverless Performance: A Deep Dive into AWS LLRT
```dockerfile
FROM --platform=arm64 busybox
WORKDIR /var/task/
COPY app.mjs ./
ADD https://github.com/awslabs/llrt/releases/latest/download/llrt-container-arm64 /usr/bin/llrt
RUN chmod +x /usr/bin/llrt
ENV LAMBDA_HANDLER "app.handler"
CMD [ "llrt" ]
```
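The image copies an app.mjs that the article doesn't reproduce. A minimal sketch of what such a handler could look like (the response shape assumes a Lambda proxy-style integration; the message is illustrative):

```javascript
// app.mjs — hypothetical handler; LLRT resolves LAMBDA_HANDLER
// ("app.handler") to this named export.
export const handler = async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ message: "Hello from LLRT" }),
});
```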
-
Is AWS Lambda Cold Start Still an Issue?
Let’s get the simplest use case out of the way: cases where cold starts are so fast that they’re not an issue for you. That’s usually the case for functions that use runtimes such as C++, Go, Rust, and LLRT. However, you must follow each runtime’s best practices and optimizations to keep the cold start impact low.
-
JavaScript News, Updates, and Tutorials: February 2024 Edition
But compared to other runtimes, LLRT performs poorly when dealing with large data processing, Monte Carlo simulations, or other tasks with a large number of iterations. The AWS team says it is best suited for smaller serverless functions dedicated to tasks such as data transformation, real-time processing, AWS service integrations, authorization, and validation. Visit the project's GitHub repository to learn more.
- FLaNK Stack 26 February 2024
-
People Matter more than Technology when Building Serverless Applications
And lastly, lean into your cloud vendor. Stop trying to build a better mousetrap. Advances in technology are happening all the time. The speed of AWS Lambda has been rapidly improving over the past couple of years with the launch of things like SnapStart and LLRT.
- Hono v4.0.0
workerd
- Wrapping My Mind Around Node.js Runtimes
-
Edge Scripting: Build and run applications at the edge
WorkerD isn't anywhere near a "cutdown version of Chromium"; it is an incredible platform with years of engineering put into it, from some of the people behind very similar and successful products (GAE and Protocol Buffers, to name a couple).
WorkerD is open source: https://github.com/cloudflare/workerd
I personally am not a fan of Deno because of how it split the Node.js ecosystem, so that is not a benefit in my eyes. Of course, Workers can run Rust.
Nothing you said here necessitates an API difference.
-
Our container platform is in production. It has GPUs. Here's an early look
You can't really run the Worker code somewhere else without modifications, afaik (unless you're using something like Hono with an adapter). And for most use cases, you're not going to be using Workers without KV, DO, etc.
I've hit a bunch of issues and limitations with Wrangler over the years.
Eg:
https://github.com/cloudflare/workers-sdk/issues/2964
https://github.com/cloudflare/workerd/issues/1897
-
How To Self-Host Cloudflare
Workerd is a JavaScript and WebAssembly runtime that powers Cloudflare Workers and other related technologies. You can think of it like the Node.js runtime used to execute JavaScript files. Workerd has its differences from Node.js; however, you can self-host it on any machine.
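As a concrete illustration (not from the post above), here is a minimal sketch of the kind of Workers-style module workerd executes; the worker.js file name is an assumption, and workerd itself is pointed at the script through a Cap'n Proto config file passed to `workerd serve`:

```javascript
// worker.js — a minimal Workers-style module (illustrative sketch).
// workerd invokes the default export's fetch() for each HTTP request
// routed to the service this script is bound to.
export default {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(`Hello from workerd: ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```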
-
Cloudflare acquires PartyKit to allow developers to build real-time multi-user
Standards bodies only standardize things after they've been proven to work. You can't standardize a new idea before offering it to the market. It's hard enough to get just one vendor to experiment with an idea (it literally took me years to convince everyone inside Cloudflare that we should build Durable Objects). Getting N competing vendors to agree on it -- before anything has been proven in the market -- is simply not possible.
But the Durable Objects API is not complicated and there's nothing stopping competing platforms from building a compatible product if they want. Much of the implementation is open source, even. In fact, if you build an app on DO but decide you don't want to host it on Cloudflare, you can self-host it on workerd:
https://github.com/cloudflare/workerd
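To give a sense of how small that API surface is, here is a rough sketch of a Durable Object class, assuming the classic `(state, env)` constructor and a binding declared in the Worker's configuration; the `Counter` name is illustrative:

```javascript
// Counter — illustrative Durable Object sketch. All requests for a given
// object ID are routed to the same instance, which gets its own
// transactional storage.
export class Counter {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    // storage.get/put are the Durable Object persistence primitives.
    let value = (await this.state.storage.get("value")) ?? 0;
    value += 1;
    await this.state.storage.put("value", value);
    return new Response(String(value));
  }
}
```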
-
Python Cloudflare Workers
In any case, I welcome this initiative with open arms and look forward to all the cool apps that people will now build with this!
[1] https://pyodide.org/
[2] https://github.com/cloudflare/workerd/blob/main/docs/pyodide...
[3] https://github.com/cloudflare/workerd/pull/1875
-
LLRT: A low-latency JavaScript runtime from AWS
For ref:
- https://blog.cloudflare.com/workerd-open-source-workers-runt...
- https://github.com/cloudflare/workerd
-
A list of JavaScript engines, runtimes, interpreters
workerd
-
WinterJS
I think this is for people who want to run their own Cloudflare Workers (sort of), and since nobody wants to run full Node for that, they want a small runtime that just executes JS/Wasm in an isolated way. But I wonder why they don't tell me how I can be sure that this is safe, or how it's safe. Surely I can't just trust them, and it explicitly mentions that it still has file IO, so clearly there is still work I need to do to customize the isolation further. But then they don't show any info on that core use case.
That's probably because they don't really want you to run this on your own; they are selling you on running things on their edge platform called "Wasmer Edge". That's probably why this is so light on information: the motivation isn't to get you to use this yourself, just to get you onto their hosted edge platform. But then I wonder why I wouldn't just use https://github.com/cloudflare/workerd, which is also open source. Surely that is fast enough? If not, then they should show some benchmarks.
- Cloudflare Workers is adopting Ada URL parser
What are some alternatives?
winterjs - Winter is coming... ❄️
cloudflare-docs - Cloudflare’s documentation
pljs - PLJS - JavaScript Language Plugin for PostgreSQL
js-compute-runtime - JavaScript SDK and runtime for building Fastly Compute applications
mud-pi - A simple MUD server in Python, for teaching purposes, which could be run on a Raspberry Pi
webusb - Connecting hardware to the web.
hermes - A JavaScript engine optimized for running React Native.
fauna-schema-migrate - The Fauna Schema Migrate tool helps you set up Fauna resources as code and perform schema migrations.
lagon - Deploy Serverless Functions at the Edge. Current status: Alpha
sst - Build full-stack apps on your own infrastructure.
windmill - Open-source developer platform to power your entire infra and turn scripts into webhooks, workflows and UIs. Fastest workflow engine (13x vs Airflow). Open-source alternative to Retool and Temporal.