falcon vs WebGL-Fluid-Simulation

| | falcon | WebGL-Fluid-Simulation |
|---|---|---|
| Mentions | 2 | 109 |
| Stars | 925 | 14,269 |
| Growth | 0.6% | - |
| Activity | 7.8 | 0.0 |
| Latest commit | 20 days ago | 6 months ago |
| Language | Jupyter Notebook | JavaScript |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
falcon
- Goodbye, Node.js Buffer
-
Launch HN: Drifting in Space (YC W22) – A server process for every user
Good questions!
> Why do you need one process per user? / Wouldn't this "event loop" actually be more efficient than one process per user, as there would be less context-switching cost from the OS?
We're particularly interested in apps that are often CPU-bound, so a traditional event-loop would be blocked for long periods of time. A typical solution is to put the work into a thread, so there would still be a context switch, albeit a smaller one.
The process-per-user approach makes the most sense when a significant amount of the data used by each user does not overlap with other users. VS Code (in client/server mode) is a good example of this -- the overhead of siloing each process is relatively low compared to the benefits it gives. We think more data-heavy apps will make the same trade-offs.
> Can I just keep a map of (connection, thread_id) on my server, and spawn one thread per user on my own server?
If you don't have to scale beyond one server, this approach works fine, but it makes scaling horizontally complicated because you suddenly can't just use a plain old load balancer. It's not just about routing requests to the right server; deciding which server to run the threads on becomes complicated because you ideally want to decide based on the server load of each. We started going down this path, realized we'd end up re-inventing Kubernetes, so decided to embrace it instead.
> Could I just load up my server with many cores, and give each user a SQLite database which runs each query in its own thread? This way a multi GB database would not be loaded into RAM, the query would filter it down to a result set.
If, for a particular use case, it's economical to keep the data ready in a database that supports the query pattern users will make, it's probably not a good fit for a session-lived backend. In database terms, where our architecture makes sense is when you need to create an index on a dataset (or subset of a dataset) during the runtime of an application. For example, if you have thousands of large parquet files in blob storage and you want a user to be able to load one and run [Falcon](https://github.com/vega/falcon)-type analysis on it.
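The parquet-and-index scenario above can be sketched in plain JavaScript: a session-lived process loads one user's slice of a dataset and builds an in-memory index once, when the session starts, so interactive queries never rescan the raw file. The names and data here are hypothetical illustrations, not Drifting in Space's actual API.

```javascript
// Hypothetical per-session data: in practice this slice would be loaded
// from one of the parquet files in blob storage at session start.
const rows = [
  { city: "Berlin", temp: 18 },
  { city: "Berlin", temp: 21 },
  { city: "Lisbon", temp: 27 },
];

// Build the index once, at session start. This is the "create an index on
// a (subset of a) dataset during the runtime of an application" step.
const byCity = new Map();
for (const row of rows) {
  if (!byCity.has(row.city)) byCity.set(row.city, []);
  byCity.get(row.city).push(row);
}

// Interactive queries during the session hit the index, not the raw data.
function tempsFor(city) {
  return (byCity.get(city) ?? []).map((r) => r.temp);
}
```

Because the index lives only as long as the user's session, none of it needs to be kept warm in a shared database between sessions — which is the trade-off the comment describes.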
WebGL-Fluid-Simulation
-
That is some extremely impressive water physics, especially for a place you only visit once. How did they do it? (MAJOR SPOILERS FOR 4.2 WORLD QUEST)
It kinda reminds me of this fluid simulation website. The site demonstrates a computer graphics technique that simulates the motion and appearance of fluids such as water, smoke, or fire. You can move your mouse around the screen to drive the simulation, and you can change how it behaves by adjusting the sliders in the control panel next to it.
-
Goodbye, Node.js Buffer
Typed arrays are essential for web apps that use WebGL and WebGPU. Being able to send this typed data to the GPU to run computations in parallel can give you a 1000x speed-up.
You can see it in action on this WebGL fluid simulator[0] by PavelDoGreat.
[0] https://github.com/PavelDoGreat/WebGL-Fluid-Simulation
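As a small illustration of the typed-array point (not taken from Pavel's code): vertex data lives in one contiguous `ArrayBuffer`, and views over that buffer are zero-copy reinterpretations, which is what makes handing the data to the GPU cheap. The triangle coordinates below are made up for the example.

```javascript
// Six 32-bit floats (three 2D vertices) in one contiguous buffer.
const positions = new Float32Array([
  -1, -1,   // bottom-left
   1, -1,   // bottom-right
   0,  1,   // top
]);

// A byte-level view over the SAME buffer: no copy, just reinterpretation.
const bytes = new Uint8Array(positions.buffer);

// In a browser, the typed array goes straight to WebGL (shown as a
// comment because WebGL needs a browser context):
// gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
```

The simulator linked above streams exactly this kind of `Float32Array` data into GPU buffers every frame.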
- WebGL Fluid Simulation
-
Water – Oimo.io
this one is much smoother: https://paveldogreat.github.io/WebGL-Fluid-Simulation/
previously: https://news.ycombinator.com/item?id=34422948
-
This thing is wild!
Not a video. WebGL Fluid is the website.
- The website has been found!
-
A lost website
Sure, here it is
-
Saw this ad that made me have major feelings from childhood
Off the top of my head I'm thinking of WebGL Fluid Simulation
- WebGL Fluid Simulation
-
Here you go: Golden Beryl on a magnetic stirrer ... in slow mo!
Here is a poor man's stirrer: https://paveldogreat.github.io/WebGL-Fluid-Simulation/
What are some alternatives?
stateroom - A lightweight framework for building WebSocket-based application backends.
react-fluid-animation - Fluid media animation for React powered by WebGL.
nodejs-polars - nodejs front-end of polars
Phaser - Phaser is a fun, free and fast 2D game framework for making HTML5 games for desktop and mobile web browsers, supporting Canvas and WebGL rendering. [Moved to: https://github.com/phaserjs/phaser]
streams - Streams Standard
lively - Free and open-source software that allows users to set animated desktop wallpapers and screensavers powered by WinUI 3.
proposal-zero-copy-arraybuffer-list - A proposal for zero-copy ArrayBuffer lists
portfolio-site
proposal-arraybuffer-base64 - TC39 proposal for Uint8Array<->base64/hex
react-native-gcanvas - React Native canvas backed by GPU OpenGL/GLSL via GCanvas, a lightweight cross-platform graphics rendering engine.
spawner - Session backend orchestrator for ambitious browser-based apps. [Moved to: https://github.com/drifting-in-space/plane]
BestBuy-GPU-Bot - An add-to-cart and auto-checkout bot for BestBuy. It repeatedly checks an item page for one keyword and, once the item is in stock, adds it to the cart and checks out quickly. It runs on Firefox, so it works on all operating systems, and it can watch multiple items simultaneously.