Using SQLite compiled to Wasm to push computation closer to the user is a powerful idea. I'm partial to the method of serving the SQLite files directly and building applications around the sql.js library [1], which includes math extensions and the ability to embed JavaScript UDFs. I wrote a data visualization using SQLite as the data store [2] and can attest that it's refreshing to use SQL inside a static website.
[1] https://github.com/sql-js/sql.js
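A rough sketch of what embedding a JavaScript UDF looks like with sql.js (the table, column names, and the `distance` function are illustrative, not from the post above; assumes `npm install sql.js`):

```javascript
// The UDF is a plain JavaScript function; sql.js exposes it to SQL via
// db.create_function. Everything below is illustrative.
function distance(x, y) {
  return Math.hypot(x, y); // runs in JS, invoked from inside a SQL query
}

// Wiring it up with sql.js (requires `npm install sql.js`):
async function demo() {
  const initSqlJs = require("sql.js");
  const SQL = await initSqlJs();
  const db = new SQL.Database(); // in-memory SQLite database
  db.run(
    "CREATE TABLE points (x REAL, y REAL);" +
    "INSERT INTO points VALUES (3, 4), (6, 8);"
  );
  db.create_function("distance", distance); // register the JS UDF
  return db.exec("SELECT distance(x, y) AS d FROM points")[0].values;
}
```

Once registered with `db.create_function`, the query can call `distance` as if it were a built-in SQL function.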
edge-sql [1] allows arbitrary SQLite queries to be executed over an immutable external data set. The demo uses a Forex data set stored in Workers KV.
Client-issued arbitrary queries are one of the use cases for GraphQL, and publishing immutable data sets on the web is the main use case for Simon Willison's Datasette [2].
In-memory SQLite compiled to WASM works in the browser and Node.js too. In the future, we can expect proper ACID operations on any WASM runtime that supports fsync in WASI [3], a POSIX-like API.
[1] https://github.com/lspgn/edge-sql
[2] https://datasette.io/
[3] https://wasi.dev/
Hm, interesting. I made this, if you want to take a look: https://github.com/TomasHubelbauer/sqlite-javascript. You can test it using the Demo link in the readme and then use the prefilled value in the Load from URL prompt. Based on this, I find your idea very doable. I didn't do it, but while working on this I had async fetching of pages using Range requests on my mind. It's a shame I didn't get to it while I was working on this project; it could have been easy to put together a demo. Similarly, I made https://github.com/TomasHubelbauer/fatcow-icons, which parses a big ZIP file piece-wise on demand. As you scroll, new parts of the archive are fetched and icons extracted, so you don't need to download the whole thing.
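A minimal sketch of the Range-request idea (the helper names are mine, not from the repo above): a SQLite file is a sequence of fixed-size pages, with the page size stored as a 2-byte big-endian field at offset 16 of the file header, so individual pages can be fetched from any server that supports HTTP Range requests.

```javascript
// Read the page size from the first 100 bytes of a SQLite file header.
// It's stored big-endian at offset 16; the stored value 1 means 65536.
function pageSizeFromHeader(headerBytes) {
  const raw = (headerBytes[16] << 8) | headerBytes[17];
  return raw === 1 ? 65536 : raw;
}

// Compute the inclusive HTTP Range header for one page (pages are
// 1-indexed in the SQLite file format).
function rangeForPage(pageNumber, pageSize) {
  const start = (pageNumber - 1) * pageSize;
  return `bytes=${start}-${start + pageSize - 1}`;
}

// Fetch a single page on demand (assumes the server answers Range
// requests with 206 Partial Content).
async function fetchPage(url, pageNumber, pageSize) {
  const res = await fetch(url, {
    headers: { Range: rangeForPage(pageNumber, pageSize) },
  });
  if (res.status !== 206) throw new Error(`expected 206, got ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}
```

The same fetch-on-demand shape would work for any page-addressed file, which is presumably why the ZIP trick above translates so directly.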
I've also tried full-text search in a worker by pre-indexing the content. It works very fast even with a JS engine: less than 5 ms to run a search over 5 MB of text.
It runs out of CPU time at 6 MB of text, though.
Someone made a WASM version of the same thing too; it's definitely faster and can handle a bit more text.
https://github.com/wilsonzlin/edgesearch
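The pre-indexing approach can be sketched as an inverted index built ahead of time and queried inside the worker. A toy version (all names here are illustrative, not from edgesearch) maps each token to the set of document ids containing it:

```javascript
// Build an inverted index: token -> Set of document ids. In practice the
// index would be built offline and shipped to the worker, not rebuilt there.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, id) => {
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(token)) index.set(token, new Set());
      index.get(token).add(id);
    }
  });
  return index;
}

// Answer a query by intersecting the posting sets of all query tokens.
function search(index, query) {
  const tokens = query.toLowerCase().split(/\W+/).filter(Boolean);
  let result = null;
  for (const token of tokens) {
    const postings = index.get(token) ?? new Set();
    result = result === null
      ? new Set(postings)
      : new Set([...result].filter((id) => postings.has(id)));
  }
  return [...(result ?? new Set())].sort((a, b) => a - b);
}
```

Because all the tokenization cost is paid at index-build time, each lookup is just a few hash-map hits and set intersections, which is why millisecond-scale queries are realistic even in plain JS.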
Someone has built it already for bittorrent, maybe you can reuse some parts: https://github.com/lmatteis/torrent-net
Even with an artificially slow connection it seems to be responsive enough. Fetching pages on demand can only be better for bandwidth; the real issue is going to be latency.
Dqlite might be interesting to look at. They used the SQLite VFS layer to front their Raft backend. https://github.com/canonical/dqlite