| | duckdb-wasm | duckdb |
|---|---|---|
| Mentions | 11 | 52 |
| Stars | 924 | 16,749 |
| Stars growth | 5.2% | 4.5% |
| Activity | 9.5 | 10.0 |
| Latest commit | 4 days ago | 4 days ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
duckdb-wasm
- Parquet-WASM: Rust-based WebAssembly bindings to read and write Parquet data
I think duckdb-wasm is closer to 6 MB over the wire, but ~36 MB once decompressed (see the network panel when loading https://shell.duckdb.org/).
The decompressed size should be okay, since it's not the same as parsing and JITing 36 MB of JS.
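The over-the-wire vs decompressed distinction comes from HTTP content-encoding: the server ships a gzip/brotli-compressed `.wasm`, and the browser inflates it before instantiation. A stdlib-only illustration with stand-in bytes (not the real duckdb-wasm binary):

```python
import gzip

# Stand-in for a .wasm binary: highly repetitive, so it compresses well,
# much like real wasm code sections do.
payload = bytes(1000) + b"wasm" * 1000

wire = gzip.compress(payload)      # what actually crosses the network
restored = gzip.decompress(wire)   # what the browser instantiates

assert restored == payload
assert len(wire) < len(payload)    # wire size is much smaller than decompressed
```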
- 42.parquet – A Zip Bomb for the Big Data Age
- Show HN: Open-source, browser-local data exploration using DuckDB-WASM and PRQL
Hey HN! We’ve built Pretzel, an open-source data exploration and visualization tool that runs fully in the browser and can handle large files (a 200 MB CSV on my 8 GB MacBook Air is snappy). It’s also reactive - so if, for example, you change a filter, all the data transform blocks after it re-evaluate automatically. You can try it here: https://pretzelai.github.io/ (static hosted webpage) or see a demo video here: https://www.youtube.com/watch?v=73wNEun_L7w
You can play with the demo CSV that’s pre-loaded (GitHub data of text-editor adjacent projects) or upload your own CSV/XLSX file. The tool runs fully in-browser—you can disconnect from the internet once the website loads—so feel free to use sensitive data if you like.
Here’s how it works: you upload a CSV file and then explore your data as a series of successive data transforms and plots. For example, you might: (1) Remove some columns; (2) Apply some filters (remove nulls, remove outliers, restrict time range, etc.); (3) Do a pivot (i.e., a group-by but fancier); (4) Plot a chart; (5) Download the chart and the transformed data. See screenshot: https://imgur.com/a/qO4yURI
In the UI, each transform step appears as a “Block”. You can always see the result of the full transform in a table on the right. The transform blocks are editable - for instance in the example above, you can go to step 2, change some filters and the reactivity will take care of re-computing all the cells that follow, including the charts.
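The reactivity described above can be sketched as a chain of transform blocks where editing block i re-runs blocks i..n. A minimal stdlib-only sketch with hypothetical names (Pretzel's actual implementation sits on DuckDB-Wasm):

```python
# Sketch of a reactive chain of data-transform "blocks".
# Each block is a pure function from rows to rows; editing block i
# invalidates and re-runs blocks i, i+1, ..., n.

class Pipeline:
    def __init__(self, source_rows):
        self.source = source_rows
        self.blocks = []   # list of transform functions
        self.cache = []    # cached output after each block

    def add_block(self, fn):
        self.blocks.append(fn)
        self._recompute(from_index=len(self.blocks) - 1)

    def edit_block(self, i, fn):
        self.blocks[i] = fn
        self._recompute(from_index=i)  # downstream blocks re-run automatically

    def _recompute(self, from_index):
        rows = self.source if from_index == 0 else self.cache[from_index - 1]
        del self.cache[from_index:]    # drop stale downstream results
        for fn in self.blocks[from_index:]:
            rows = fn(rows)
            self.cache.append(rows)

    def result(self):
        return self.cache[-1] if self.cache else self.source

rows = [{"lang": "C++", "stars": 16749}, {"lang": "Rust", "stars": 0}]
p = Pipeline(rows)
p.add_block(lambda rs: [r for r in rs if r["stars"] > 0])  # filter block
p.add_block(lambda rs: [{"lang": r["lang"]} for r in rs])  # column-select block
# Editing the filter re-runs the select block too:
p.edit_block(0, lambda rs: list(rs))
```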
We wanted Pretzel to run locally in the browser and be extremely performant on large files. So, we parse CSVs with the fastest CSV parser (uDSV: https://github.com/leeoniya/uDSV) and use DuckDB-Wasm (https://github.com/duckdb/duckdb-wasm) to do all the heavy lifting of processing the data. We also wanted to allow for chained data transformations where each new block operates on the result of the previous block. For this, we’re using PRQL (https://prql-lang.org/) since it maps 1-1 with chained data transform blocks - each block maps to a chunk of PRQL which when combined, describes the full data transform chain. (PRQL doesn’t support DuckDB’s Pivot statement though so we had to make some CTE based hacks).
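The "each block maps to a chunk of PRQL" idea can be sketched as plain concatenation, since PRQL pipelines compose line by line rather than nesting like SQL subqueries. A hedged sketch (the PRQL steps and table name are made up, and Pretzel's real compiler surely does more):

```python
# Sketch: each UI block contributes one PRQL pipeline step; joining the
# steps in order yields the query for the whole transform chain.

def compile_blocks(table, steps):
    return "\n".join([f"from {table}"] + steps)

query = compile_blocks("commits", [
    "filter stars != null",                     # block 1: drop nulls
    "group lang (aggregate {n = count this})",  # block 2: group-by
    "sort {-n}",                                # block 3: order results
])
```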
There’s also an AI block: This is the only (optional) feature that requires an internet connection but we’re working on adding local model support via Ollama. For now, you can use your own OpenAI API key or use an AI server we provide (GPT4 proxy; it’s loaded with a few credits), specify a transform in plain english and get back the SQL for the transform which you can edit.
Our roadmap includes allowing API calls to create new columns; support for a SQL block with nice autocomplete features; and a Python block (using Pyodide to run Python in the browser) that runs on the results of the data transforms, much like a Jupyter notebook.
There are two of us and we’ve only spent about a week coding this and fixing major bugs, so there are still some bugs to iron out. We’d love for you to try this and to get your feedback!
- DuckDB-WASM: WebAssembly Version of DuckDB
- Show HN: DuckDB-WASM, execute queries in a browser, and share them as links
Amazing, I was eagerly waiting for this one. Loading extensions in previous DuckDB-WASM releases didn't work seamlessly. Looks like it does now :D
ref: https://github.com/duckdb/duckdb-wasm/issues/1542#issuecomme...
Thanks!!
- DuckDB 0.9.0
Btw, it's already happening:
Go to https://shell.duckdb.org, and type
- Does anyone else hate Pandas?
I like Pandas, but you will love duckdb, which is solving this exact problem: https://duckdb.org/; https://shell.duckdb.org/
- [Question] Using DuckDB to connect to (external/cloud) Postgres DB
There's also https://shell.duckdb.org/ for playing around.
- Ask HN: What tech is under the radar with all attention on ChatGPT etc.
- My first Rust project: Xlsx-wasm-parser. A WebAssembly wrapper around the Calamine crate to bring Blazingly Fast Excel deserialization to the browser and Node.js.
I know xls != csv, but would be cool to compare against https://github.com/duckdb/duckdb-wasm as well
duckdb
- 🪄 DuckDB SQL hack: get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau[1] and DuckDB [2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
[1] https://github.com/bacalhau-project/bacalhau
[2] https://github.com/duckdb/duckdb
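The "single query sharded amongst all your data" idea works because many SQL aggregates are decomposable: each node computes a small partial over its local shard, and only the partials travel. A stdlib-only sketch (the real setup runs DuckDB next to each shard):

```python
# Sketch: AVG is decomposable into (sum, count) partials, so each shard
# can be aggregated where it lives and only tiny partials move.

def partial_avg(shard):
    vals = [row["latency_ms"] for row in shard]
    return (sum(vals), len(vals))   # computed next to the data

def merge_avg(partials):
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count            # combined at the query coordinator

shards = [
    [{"latency_ms": 10}, {"latency_ms": 20}],  # node A's local data
    [{"latency_ms": 30}],                      # node B's local data
]
result = merge_avg([partial_avg(s) for s in shards])  # 20.0
```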
- DuckDB 0.9.0
- Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
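For context on the PR above: in a pull (Volcano) model each operator asks its child for the next rows, while in a push model the source drives rows into downstream operators, which makes parallelism and scheduling easier to manage centrally. A toy sketch of both styles (not DuckDB's actual operator API):

```python
# Pull model: the consumer drives; each operator pulls from its child.
def pull_pipeline(rows):
    scan = iter(rows)                           # leaf operator
    filtered = (r for r in scan if r % 2 == 0)  # filter pulls from scan
    return [r * 10 for r in filtered]           # projection pulls from filter

# Push model: the source drives, pushing each row through operator callbacks.
def push_pipeline(rows):
    out = []
    def project(r):
        out.append(r * 10)                      # sink collects results
    def filt(r):
        if r % 2 == 0:
            project(r)                          # filter pushes downstream
    for r in rows:                              # scan pushes into the chain
        filt(r)
    return out

assert pull_pipeline([1, 2, 3, 4]) == push_pipeline([1, 2, 3, 4]) == [20, 40]
```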
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
it depends on your query obviously.
In general, I did very deep benchmarking of pg, clickhouse and duckdb, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable, maybe 2-3x slower than clickhouse and duckdb on some queries, if pg is configured correctly and run on compressed storage
- clickhouse and duckdb start falling apart very fast because they are specialized for a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs DuckDB at the version provided as input.
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
- CSV or Parquet File Format
The Parquet-Go library is very complex; I haven't yet managed to use it successfully. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
What are some alternatives?
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
mutable - A Database System for Research and Fast Prototyping
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
chdb - chDB is an embedded OLAP SQL Engine 🚀 powered by ClickHouse
datasette - An open source multi-tool for exploring and publishing data
ch32v003fun - An open source software development stack for the CH32V003 10¢ 48 MHz RISC-V Microcontroller - as well as many other chips within the ch32v/x line.
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
bacalhau - Compute over Data framework for public, transparent, and optionally verifiable computation
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
xlsx-wasm-parser - A WebAssembly wrapper over the Rust Calamine crate, bringing Blazingly Fast (🔥) XLSX deserialization to the browser and Node.
datafusion - Apache DataFusion SQL Query Engine