
-
btw, for what it's worth, their JavaScript-to-wasm toolchain is open source:
- https://github.com/fastly/js-compute-runtime
- https://github.com/tschneidereit/spidermonkey-wasi-embedding
And although it is slower than Node.js, it is still plenty fast (even if it is not as fast as they would like). Also, its startup is faster than Node's (maybe better PGO might help).
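The startup claim is easy to sanity-check yourself. A minimal sketch (Python for portability; the command lists are placeholders for whichever runtimes you actually want to compare, e.g. `["node", "-e", ""]` versus a wasmtime invocation of the compiled module):

```python
import subprocess
import sys
import time

def median_startup_ms(cmd: list[str], runs: int = 5) -> float:
    """Launch `cmd` to completion `runs` times; return the median wall time in ms."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

if __name__ == "__main__":
    # Placeholder: times a bare Python interpreter launch; swap in the
    # runtime commands you care about.
    print(f"{median_startup_ms([sys.executable, '-c', '']):.1f} ms")
```

Taking the median over several runs smooths out filesystem-cache and scheduler noise, which dominates at these millisecond scales.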
-
I like your ideas, but they seem difficult to enforce. They assume good faith on all sides. One of the biggest complaints about AI/ML research results is that they are frequently hard or impossible to replicate.
One idea: the edge competitors could create a public (SourceHut?) project that runs various daily tests against themselves, similar to the JSON library benchmarks. [1] Then each competitor could continuously tweak their settings to accomplish the task in the shortest amount of time.
Also: it would be nice to see a cost analysis. For years, IBM's DB2 was insanely fast if you could afford the outrageous hardware, software license, and consulting costs. I'm not in the edge business, but I'd guess there are operators where you can just pay a lot more and get better performance, if you really need it.
[1] https://github.com/miloyip/nativejson-benchmark
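A sketch of what such a daily test runner could look like. The endpoint URLs and run counts here are hypothetical, and a real harness would also need to control for geography, cold starts, and payload size:

```python
import time
import urllib.request

def median_latency_ms(url: str, runs: int = 10) -> float:
    """Median end-to-end response time in ms for a GET of `url`."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

if __name__ == "__main__":
    # Hypothetical endpoints; each vendor would tune their own deployment.
    for name, url in [("vendor-a", "https://a.example.com/bench"),
                      ("vendor-b", "https://b.example.com/bench")]:
        print(f"{name}: {median_latency_ms(url):.1f} ms")
```

Publishing both the harness and the raw samples is what would make the results replicable, which is the complaint being addressed.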
-
> For such a small program ... the performance difference can be reduced significantly.
I think you're wrong. Small programs don't necessarily minimize the differences between languages; they often exacerbate them, frequently for weird, idiosyncratic reasons. Especially if you're measuring time-to-first-byte and not letting the VM warm up.
For example, in this case I wouldn't be surprised if most of the CPU time was going into node + V8's startup, rather than into executing the user's JavaScript at all.
Look at the plaintext TechEmpower benchmarks[1]. These benchmarks test tiny amounts of code - the programs only have to respond with "hello world" HTTP responses. The best performers are hitting hardware limits. If your theory that a small program means a small relative cost of using JavaScript were true, Node.js's performance should be similar to Rust / C++. It is not - Node achieves only 13% of the performance of the top entries. Weirdly, "justjs" (another V8-based runtime) is ~5x faster than Node.js on this test. The reason probably has nothing to do with JavaScript at all: justjs has less overhead and talks to the OS more efficiently. (It probably uses io_uring.)
Maybe this is evidence you're technically correct - the performance differences can be reduced. But we have every reason to assume Node.js will have way more overhead than Rust when executing the same code. So I agree with the other posters: a benchmark showing Rust on Fastly is faster than JavaScript on Cloudflare tells us nothing about the underlying hosting providers.
https://www.techempower.com/benchmarks/#section=data-r20&hw=...
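For reference, the plaintext test is essentially this much user code. A sketch (Python rather than JS, and the port number is arbitrary), which makes the point that nearly all the per-request cost lives in the runtime and its I/O path rather than in the handler itself:

```python
import http.server
import socketserver

class Plaintext(http.server.BaseHTTPRequestHandler):
    """Respond to every GET with a fixed plaintext body, TechEmpower-style."""

    def do_GET(self):
        body = b"Hello, World!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep benchmark runs quiet

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 8080), Plaintext) as srv:
        srv.serve_forever()
```

With a handler this trivial, differences between runtimes come down to startup cost, syscall strategy, and connection handling - exactly the overheads the comment above describes.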
-