bacalhau vs duckdb

| | bacalhau | duckdb |
|---|---|---|
| Mentions | 12 | 52 |
| Stars | 620 | 17,221 |
| Growth | 3.9% | 7.1% |
| Activity | 9.8 | 10.0 |
| Latest commit | 5 days ago | 4 days ago |
| Language | Go | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bacalhau
- Deno Cron
This is really interesting - we’ve tried really hard to solve some of these with Bacalhau[1] - a much simpler distributed compute platform. Would love your feedback!
[1] https://github.com/bacalhau-project/bacalhau
Disclosure: I co-founded Bacalhau
- Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau[1] and DuckDB[2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
[1] https://github.com/bacalhau-project/bacalhau
[2] https://github.com/duckdb/duckdb
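The "shard the query, merge the partials" idea described above can be sketched in plain Python. This is a toy illustration of the pattern, not Bacalhau's or DuckDB's actual API; the shard names and sample data are made up:

```python
# Sketch: compute a global average by pushing partial aggregation to each
# data location ("shard") and merging only the tiny partial results.
shards = {
    "us-east": [10.0, 20.0, 30.0],
    "eu-west": [40.0, 50.0],
    "ap-south": [60.0],
}

def partial_aggregate(rows):
    # Runs next to the data; returns (sum, count) instead of raw rows.
    return sum(rows), len(rows)

# Only the small partials travel over the network, never the data itself.
partials = [partial_aggregate(rows) for rows in shards.values()]
total, count = map(sum, zip(*partials))
print(total / count)  # global AVG over all shards: 35.0
```

The key property is that AVG (like SUM, COUNT, MIN, MAX) decomposes into partials that merge exactly, so sharding the query changes where the work happens, not the answer.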
- Bacalhau: Compute over Data framework for public, transparent, verifiable work
- Ask HN: What tech is under the radar with all attention on ChatGPT etc.
Very selfishly - distributed compute. Not decentralized, distributed.
Compute and data are being created and run everywhere; we need platforms that understand how to use them and get insights without (or before) moving them.
Our contribution: https://github.com/bacalhau-project/bacalhau (think Kubernetes but built for the distributed world).
Disclosure: I co-founded the Bacalhau Project
- Waterlily.ai Launches to Make AI Art More Accessible and Equitable
- Building a Distributed World of WebAssembly with Bacalhau
Thank you so much for the feedback. Yeah, we REALLY do want to figure out a better naming/reference scheme. Is there anything you've seen that you really like?
Disclosure: I work on Bacalhau[1]
https://github.com/bacalhau-project/bacalhau
- What Is Bacalhau?
- GitHub
- The Bacalhau Vision – A Distributed Compute over Data Platform
duckdb
- 🪄 DuckDB sql hack : get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
- DuckDB 0.9.0
- Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
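The push-vs-pull distinction behind that PR can be illustrated with a toy pipeline. This is a minimal sketch of the two control-flow styles, not DuckDB's actual operator interface:

```python
# Toy contrast of the two execution models over the same pipeline:
# scan -> filter(even) -> collect.
data = [1, 2, 3, 4, 5, 6]

# Pull model (classic Volcano style): each operator is an iterator that
# asks its child for the next tuple; the sink drives execution.
def pull_pipeline(rows):
    scan = iter(rows)
    filtered = (r for r in scan if r % 2 == 0)
    return list(filtered)

# Push model: the scan drives execution, pushing each tuple through a
# chain of callbacks; the sink only accumulates what reaches it.
def push_pipeline(rows):
    out = []
    def sink(r):
        out.append(r)
    def filter_even(r):
        if r % 2 == 0:
            sink(r)
    for r in rows:  # the source is in control
        filter_even(r)
    return out

print(pull_pipeline(data))  # [2, 4, 6]
print(push_pipeline(data))  # [2, 4, 6]
```

Both produce the same result; the difference is who holds the control flow, which affects things like vectorization, operator parallelism, and suspending/resuming a query.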
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, ClickHouse and DuckDB, and I certainly didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable - maybe 2-3x slower than ClickHouse and DuckDB on some queries - if pg is configured correctly and run on compressed storage
- ClickHouse and DuckDB start falling apart very fast because they are specialized for a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs duckdb at the version provided as input.
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
- CSV or Parquet File Format
The Parquet-Go library is very complex; I haven't yet succeeded in using it. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
What are some alternatives?
duckdb-wasm - WebAssembly version of DuckDB
ClickHouse - ClickHouse® is a free analytics DBMS for big data
ch32v003fun - An open source software development stack for the CH32V003 10¢ 48 MHz RISC-V Microcontroller - as well as many other chips within the ch32v/x line.
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
Waterlily - A project bringing ethics back to AI
datasette - An open source multi-tool for exploring and publishing data
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
JsCron - Javascript cron parser, schedule date generator
datafusion - Apache DataFusion SQL Query Engine