go-duckdb
duckdb
| | go-duckdb | duckdb |
|---|---|---|
| Mentions | 4 | 52 |
| Stars | 488 | 16,576 |
| Growth | - | 10.7% |
| Activity | 8.2 | 10.0 |
| Latest commit | 19 days ago | 3 days ago |
| Language | Go | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
go-duckdb
- Embeddable Database for Go which has Date/Time types
DuckDB also has date functions and Go bindings
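For reference, DuckDB's date/time support looks like this in SQL (a sketch; the file-free queries below use functions from DuckDB's date documentation, and the literal values are illustrative):

```sql
-- Date arithmetic, differences, and formatting in DuckDB
SELECT DATE '2024-03-01' + INTERVAL 7 DAY;                      -- 2024-03-08
SELECT date_diff('day', DATE '2024-01-01', DATE '2024-03-01');  -- 60 (2024 is a leap year)
SELECT strftime(TIMESTAMP '2024-03-01 12:30:00', '%Y-%m-%d');   -- '2024-03-01'
```

Through the go-duckdb bindings these values surface as ordinary `database/sql` results on the Go side.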
- Range Joins in DuckDB
I've been beating my head trying to get duckdb to statically link into a Go program (I'm neither an expert with cgo nor with ld). If anyone else has been able to do this, I'd love to see your build steps.
https://github.com/marcboeker/go-duckdb produces a non-static binary by default.
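One commonly suggested recipe for static cgo builds (a sketch, not verified against go-duckdb's current build setup) is to force cgo on and pass `-static` through to the external linker. Fully static glibc binaries can produce linker warnings, so musl-based images (e.g. Alpine) are often recommended instead:

```shell
# Sketch: force an external, static link for a cgo-using Go program.
# Shown via echo so the recipe is visible; remove `echo` to actually build.
echo 'CGO_ENABLED=1 go build -ldflags="-linkmode external -extldflags -static" .'
```

Verify the result with `file ./yourbinary` or `ldd ./yourbinary`, which should report a statically linked executable.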
- Friendlier SQL with DuckDB
Here is a solved GitHub issue related to cgo for the Go bindings! If you run into another issue, feel free to post it on their GitHub page!
https://github.com/marcboeker/go-duckdb/issues/4
- Dsq: Commandline tool for running SQL queries against JSON, CSV, Parquet, etc.
Yeah, frankly, the q benchmark isn't the best, even though dsq compares favorably in it. It isn't well documented, exercises a very limited amount of functionality, and isn't very rigorous from what I can see. That said, the caching q does is likely very solid (and not something dsq does).
The biggest risk I think with octosql (and cube2222 is here somewhere to disagree with me if I'm wrong) is that they have their own entire SQL engine whereas textql, q and dsq use SQLite. But q is also in Python whereas textql, octosql, and dsq are in Go.
In the next few weeks I'll be posting some benchmarks that I hope are a little fairer (or at least well-documented and reproducible). Though of course it would be appropriate to have independent benchmarks too since I now have a dog in the fight.
On a tangent, once the go-duckdb binding [0] matures I'd love to offer duckdb as an alternative engine flag within dsq (and DataStation). Would be neat to see.
[0] https://github.com/marcboeker/go-duckdb
duckdb
- 🪄 DuckDB SQL hack: get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau[1] and DuckDB [2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
[1] https://github.com/bacalhau-project/bacalhau
[2] https://github.com/duckdb/duckdb
- DuckDB 0.9.0
- Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, ClickHouse, and DuckDB, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable, maybe 2-3x slower than ClickHouse and DuckDB on some queries, if pg is configured correctly and run on compressed storage
- ClickHouse and DuckDB start falling apart very fast because they are specialized for a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs DuckDB at the version provided as input.
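Usage in a workflow would look something like this (a sketch: `example/setup-duckdb-action` is a hypothetical placeholder for the actual action name, and `version` is the input mentioned above):

```yaml
# Hypothetical workflow step; the action name is a placeholder.
steps:
  - uses: example/setup-duckdb-action@v1
    with:
      version: 0.9.0   # DuckDB version passed as the action's input
```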
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
- CSV or Parquet File Format
The Parquet-Go library is very complex, and I haven't yet succeeded in using it. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
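For context, DuckDB can already read and write Parquet directly from SQL, which is roughly what the issue asks to expose through an API (a sketch; the file names are illustrative):

```sql
-- Query a Parquet file in place, and convert a CSV to Parquet
SELECT count(*) FROM read_parquet('data.parquet');
COPY (SELECT * FROM read_csv_auto('data.csv'))
  TO 'data.parquet' (FORMAT PARQUET);
```

From Go, the same statements can be issued through the go-duckdb bindings, sidestepping a separate Parquet library entirely.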
What are some alternatives?
dsq - Commandline tool for running SQL queries against JSON, CSV, Excel, Parquet, and more.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
textql - Execute SQL against structured text like CSV or TSV
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
roapi - Create full-fledged APIs for slowly moving datasets without writing a single line of code.
datasette - An open source multi-tool for exploring and publishing data
better-sqlite3 - The fastest and simplest library for SQLite3 in Node.js. [Moved to: https://github.com/WiseLibs/better-sqlite3]
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
postgres_scanner
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
q - Run SQL directly on delimited files and multi-file sqlite databases
arrow-datafusion - Apache DataFusion SQL Query Engine