I'm not sure if it would help in your case, but could you process all categories at once with a larger SQL query?
If so, DuckDB can process bulk analytical queries roughly 20x faster than SQLite per CPU core because it is vectorized and column-oriented. With multiple cores you can easily reach 100x SQLite's speed. DuckDB has Node.js bindings and, like SQLite, runs in-process.
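The batching idea can be sketched with Python's stdlib `sqlite3` (the `events` table and its columns are hypothetical, just to illustrate the pattern); the single GROUP BY form is the shape of query that DuckDB can then vectorize across whole columns at once:

```python
import sqlite3

# Hypothetical schema: rows with a category and a numeric value.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (category TEXT, value REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 1.0), ("a", 2.0), ("b", 3.0), ("c", 4.0)],
)

# One round-trip per category: N small queries.
per_category = {
    cat: con.execute(
        "SELECT SUM(value) FROM events WHERE category = ?", (cat,)
    ).fetchone()[0]
    for (cat,) in con.execute("SELECT DISTINCT category FROM events")
}

# All categories at once: a single larger GROUP BY query.
batched = dict(
    con.execute("SELECT category, SUM(value) FROM events GROUP BY category")
)

assert per_category == batched  # {'a': 3.0, 'b': 3.0, 'c': 4.0}
```

The results are identical; the point is that the batched form gives the engine one big scan to optimize instead of many tiny queries dominated by per-query overhead.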
If reading from disk is your bottleneck, I would recommend storing your data in compressed parquet files and reading them with DuckDB's parquet reader.
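DuckDB can scan Parquet files in place with its `read_parquet` table function, so there is no separate import step; a sketch (the file path and column names here are hypothetical):

```sql
-- Query compressed Parquet directly; globs cover multiple files.
SELECT category, SUM(value) AS total
FROM read_parquet('data/*.parquet')
GROUP BY category;
```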
One drawback is that DuckDB does not yet persist indexes to disk, but full table scans are much faster than in SQLite because the storage is columnar.
-
I have recently started using https://github.com/WebReflection/sqlite-worker which works pretty well