proposal-async-iterator-helpers vs falcon

| | proposal-async-iterator-helpers | falcon |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 66 | 925 |
| Growth | - | 0.6% |
| Activity | 3.4 | 7.8 |
| Last commit | about 1 month ago | 15 days ago |
| Language | HTML | Jupyter Notebook |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
proposal-async-iterator-helpers
- Goodbye, Node.js Buffer
- Observable API Proposal
JS does not need an Observable API; it already has Async Iterables, which will soon be enhanced with these helpers (https://github.com/tc39/proposal-async-iterator-helpers) that are similar to this proposal.
falcon
- Goodbye, Node.js Buffer
- Launch HN: Drifting in Space (YC W22) – A server process for every user
Good questions!
> Why do you need one process per user? / Wouldn't this "event loop" actually be more efficient than one process per user, as there would be less context-switching cost from the OS?
We're particularly interested in apps that are often CPU-bound, so a traditional event-loop would be blocked for long periods of time. A typical solution is to put the work into a thread, so there would still be a context switch, albeit a smaller one.
The process-per-user approach makes the most sense when a significant amount of the data used by each user does not overlap with other users. VS Code (in client/server mode) is a good example of this -- the overhead of siloing each process is relatively low compared to the benefits it gives. We think more data-heavy apps will make the same trade-offs.
> Can I just keep a map of (connection, thread_id) on my server, and spawn one thread per user on my own server?
If you don't have to scale beyond one server, this approach works fine, but it makes scaling horizontally complicated because you suddenly can't just use a plain old load balancer. It's not just about routing requests to the right server; deciding which server to run each user's threads on gets tricky, because you ideally want to place them based on each server's current load. We started going down this path, realized we'd end up re-inventing Kubernetes, and decided to embrace it instead.
> Could I just load up my server with many cores, and give each user a SQLite database which runs each query in its own thread? This way a multi-GB database would not be loaded into RAM; the query would filter it down to a result set.
If, for a particular use case, it's economical to keep the data ready in a database that supports the query pattern users will make, it's probably not a good fit for a session-lived backend. In database terms, where our architecture makes sense is when you need to create an index on a dataset (or subset of a dataset) during the runtime of an application. For example, if you have thousands of large parquet files in blob storage and you want a user to be able to load one and run [Falcon](https://github.com/vega/falcon)-type analysis on it.
What are some alternatives?
nodejs-polars - nodejs front-end of polars
stateroom - A lightweight framework for building WebSocket-based application backends.
streams - Streams Standard
soundpubsub
starfx - A modern approach to side-effect and state management for web apps.
proposal-zero-copy-arraybuffer-list - A proposal for zero-copy ArrayBuffer lists
proposal-arraybuffer-base64 - TC39 proposal for Uint8Array<->base64/hex
observable - Observable API proposal
spawner - Session backend orchestrator for ambitious browser-based apps. [Moved to: https://github.com/drifting-in-space/plane]