| | sqltorrent | duckdb |
|---|---|---|
| Mentions | 5 | 52 |
| Stars | 269 | 16,902 |
| Growth | 1.1% | 4.5% |
| Activity | 0.0 | 10.0 |
| Latest commit | about 8 years ago | about 7 hours ago |
| Language | C | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqltorrent
- BTFS (BitTorrent Filesystem)
Or even better, store the data as an SQLite file with a full-text-search index. Then you can full-text search the torrent on demand: https://github.com/bittorrent/sqltorrent
- SQLite BitTorrent Vfs
- How to circumvent Sci-Hub ISP block
"There was that project some guy posted a while back that used a combination of sqlite and partial downloads to enable searches on a database before it was downloaded all the way."
https://github.com/bittorrent/sqltorrent
- Hosting SQLite databases on GitHub Pages (or any static file hoster)
- Distributed search engines using BitTorrent and SQLite
Interesting question. I looked at the source code to understand how it works.
SQLite knows where to look when you open a database and run a query, right? It just asks the underlying filesystem for N bytes starting at a given offset via a C function, repeats the same operation on different portions of the file, does its computation, and everybody is happy.
The software relies on sqltorrent, a custom VFS for SQLite. That means the SQLite function that reads data from a file on the filesystem is replaced by a custom one. The custom code works out which torrent block(s) should get the highest priority by dividing the offset (and the number of bytes SQLite wants to read) by the torrent block size, as sketched below. It is just a division.
See: https://github.com/bittorrent/sqltorrent/blob/master/sqltorr...
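As a rough illustration of that division (a minimal sketch in Python, not sqltorrent's actual API; the block size is an assumption):

```python
PIECE_SIZE = 256 * 1024  # assumed torrent block ("piece") size in bytes

def pieces_for_read(offset: int, nbytes: int) -> range:
    """Map a SQLite read request (offset, nbytes) to the torrent
    pieces that contain those bytes -- just integer division."""
    first = offset // PIECE_SIZE
    last = (offset + nbytes - 1) // PIECE_SIZE
    return range(first, last + 1)

# e.g. a 4 KiB database page read at byte offset 1,000,000 lands in piece 3
print(list(pieces_for_read(1_000_000, 4096)))  # [3]
```

The VFS can then bump those pieces to the front of the download queue, so a query can run before the rest of the torrent has arrived.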
duckdb
- 🪄 DuckDB SQL hack: get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere; there's no choice about that. So with that in mind, how do you query it? Today the assumption is that you HAVE to put it into a central location. With tools like Bacalhau [1] and DuckDB [2], you no longer have to: a single query can be sharded across all your data, effectively giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do even a few of these things WITHOUT moving the data, you will see really significant cost and time savings (see the sketch after the links below).
[1] https://github.com/bacalhau-project/bacalhau
[2] https://github.com/duckdb/duckdb
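As a small illustration of the "query the data where it lives" part (a sketch using DuckDB's Python API and its httpfs extension; the bucket and column names are made up):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")  # enables reading over S3/HTTP

# Query Parquet files in place -- no load into a central warehouse first.
# 'my-bucket' and the column names are placeholders, not a real dataset.
rows = con.execute("""
    SELECT region, count(*) AS orders
    FROM read_parquet('s3://my-bucket/orders/*.parquet')
    GROUP BY region
""").fetchall()
print(rows)
```

Bacalhau's role in that setup is to ship a query like this to the nodes holding each shard of the data, rather than shipping the data to the query.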
- DuckDB 0.9.0
- Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, clickhouse, and duckdb, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped; I chose pg because:
- pg performance is acceptable, maybe 2-3x slower than clickhouse and duckdb on some queries, if pg is configured correctly and runs on compressed storage
- clickhouse and duckdb start falling apart very fast because they are specialized for a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs DuckDB at the version provided as input.
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
DuckDB: https://github.com/duckdb/duckdb - seems pretty popular; I've been keeping an eye on this for close to a year now.
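For instance, mixing SQL into a Python pipeline with DuckDB can look like this (a minimal sketch; the DataFrame contents are made up):

```python
import duckdb
import pandas as pd

# A DataFrame produced earlier in the pipeline (made-up data).
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "amount": [9.5, 12.0, 3.25, 40.0],
})

# DuckDB can query the DataFrame in place by name -- no loading step.
top_spenders = duckdb.sql("""
    SELECT user_id, sum(amount) AS total
    FROM events
    GROUP BY user_id
    ORDER BY total DESC
""").df()
print(top_spenders)
```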
- CSV or Parquet File Format
The Parquet-Go library is very complex; I haven't managed to use it successfully yet. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
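For what it's worth, DuckDB can already convert between the two formats from plain SQL (a sketch using the Python API; the file names are placeholders):

```python
import duckdb

# Convert a CSV file to Parquet in one statement; file names are placeholders.
duckdb.sql("""
    COPY (SELECT * FROM read_csv_auto('input.csv'))
    TO 'output.parquet' (FORMAT PARQUET)
""")

# Read it back to verify the row count survived the conversion.
print(duckdb.sql("SELECT count(*) FROM read_parquet('output.parquet')"))
```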
What are some alternatives?
sql.js-httpvfs - Hosting read-only SQLite databases on static file hosters like Github Pages
ClickHouse - ClickHouse® is a free analytics DBMS for big data
torrent-net - Distributed search engines using BitTorrent and SQLite
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
ipfs - Peer-to-peer hypermedia protocol
datasette - An open source multi-tool for exploring and publishing data
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
IPSQL - InterPlanetary SQL
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
apsw - Another Python SQLite wrapper
datafusion - Apache DataFusion SQL Query Engine