nba-monte-carlo vs duckdb
 | nba-monte-carlo | duckdb |
---|---|---|
Mentions | 3 | 52 |
Stars | 345 | 16,749 |
Growth | - | 4.5% |
Activity | 9.4 | 10.0 |
Last Commit | 10 days ago | 4 days ago |
Language | Python | C++ |
License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nba-monte-carlo
- Monte Carlo simulation of the NBA season (meltano, dbt, DuckDB, evidence)
- Evidence – Business Intelligence as Code
We have support for DuckDB (and for CSVs and Parquet through DuckDB). We don't support Python directly, but some people have told us they have used Evidence as the front end for a Python project: they used Python to do the data transformation and calculations, dumped the results into a DuckDB file inside an Evidence project, and built the visuals and narrative in Evidence.
"Containerized" approaches with Evidence are also quite interesting - they let you combine several tools and use Evidence as the last mile. Here's a great example: https://github.com/matsonj/nba-monte-carlo
- DuckDB: Querying JSON files as if they were tables
duckdb
- 🪄 DuckDB sql hack : get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau[1] and DuckDB [2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
[1] https://github.com/bacalhau-project/bacalhau
[2] https://github.com/duckdb/duckdb
- DuckDB 0.9.0
- Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, ClickHouse and DuckDB, and I made sure not to make mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable - maybe 2-3x slower than ClickHouse and DuckDB on some queries - if pg is configured correctly and run on compressed storage
- ClickHouse and DuckDB start falling apart very fast because they are specialized for a narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs DuckDB with the version provided as input.
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
- CSV or Parquet File Format
The Parquet-Go library is very complex, and I haven't yet managed to use it successfully, so I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
What are some alternatives?
jupysql - Better SQL in Jupyter. 📊
ClickHouse - ClickHouse® is a free analytics DBMS for big data
ducker
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
Blazer - Business intelligence made simple
datasette - An open source multi-tool for exploring and publishing data
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
hanakotoba - Exploring 花言葉 in Japanese and other literary corpora
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
datafusion - Apache DataFusion SQL Query Engine
LevelDB - LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.