sqlite_scanner
duckdb
| | sqlite_scanner | duckdb |
|---|---|---|
| Mentions | 4 | 51 |
| Stars | 184 | 15,710 |
| Growth | 3.3% | 10.4% |
| Activity | 8.3 | 10.0 |
| Latest commit | about 1 month ago | 4 days ago |
| Language | C | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqlite_scanner
-
DuckDB 0.7.0
It's not a dumb question at all. I'm pretty knowledgeable with DBs and still find it very difficult to understand how many of these front-end/pass-through engines work.
Check out Postgres Foreign Data Wrappers. That might be the most well-known approach for accessing one database through another. The Supabase team wrote an interesting piece about this recently.
https://supabase.com/blog/postgres-foreign-data-wrappers-rus...
You might also want to try out DuckDB's approach to reading other DBs (or DB files). They talk about how they can "import" a SQLite DB in the above 0.7.0 announcement, and there are further examples in the duckdblabs GitHub org. Check out their "...-scanner" repos:
-
Notes on the SQLite DuckDB Paper
DuckDB can actually read SQLite or Postgres directly! In the SQLite case, something like Litestream plus DuckDB could work really well!
Also, with PyArrow's help, DuckDB can already do this with Delta tables!
-
Friendlier SQL with DuckDB
Excellent question! I'll jump in - I am a part of the DuckDB team though, so if other users have thoughts it would be great to get other perspectives as well.
First things first - we really like quite a lot about the SQLite approach. DuckDB is similarly easy to install and is built without dependencies, just like SQLite. It also runs in the same process as your application, just like SQLite does. SQLite is excellent as a transactional database - lots of very specific inserts, updates, and deletes (so-called OLTP workloads). DuckDB can also read directly out of SQLite files, so you can mix and match them! (https://github.com/duckdblabs/sqlitescanner)
DuckDB is much faster than SQLite for analytical queries (OLAP), like calculating summaries or trends over time, or joining large tables together. It can use all of your CPU cores, sometimes for a ~100x speedup over SQLite.
DuckDB also has some enhancements around moving data in and out of it. It can natively read Pandas, R, and Julia dataframes, and can also read Parquet files directly (meaning without inserting first!).
Does that help? Happy to add more details!
duckdb
-
DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
Just had a look (https://github.com/duckdb/duckdb/issues/9399). Yeah, it's worrying that such a trivial query returned incorrect results - but credit to the devs for getting it fixed quickly.
To my knowledge the only databases that can be described as "military-grade" in terms of testing are SQLite and Postgres.
-
Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau[1] and DuckDB [2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
-
DuckDB 0.9.0
-
Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
-
Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, ClickHouse, and DuckDB, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable - maybe 2-3x slower than ClickHouse and DuckDB on some queries - if pg is configured correctly and run on compressed storage
- ClickHouse and DuckDB start falling apart very fast because they're specialized for a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
-
🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs DuckDB at the version provided as input.
-
Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
-
CSV or Parquet File Format
The Parquet-Go library is very complex, and I haven't yet managed to use it successfully. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
-
DuckDB 0.8.0
Another cool new feature that's not mentioned in the blog post is function chaining:
https://github.com/duckdb/duckdb/pull/6725
I've been using DuckDB for filtering and post-processing data, especially strings, and this will make writing complex queries easier. By combining nested functions[0] and text functions[1], sometimes I don't even need to go into a Python notebook.
What are some alternatives?
ClickHouse - ClickHouse® is a free analytics DBMS for big data
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
datasette - An open source multi-tool for exploring and publishing data
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
arrow-datafusion - Apache Arrow DataFusion SQL Query Engine
LevelDB - LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.
duckdb-rs - Ergonomic bindings to duckdb for Rust
toydb - Distributed SQL database in Rust, written as a learning project
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
hydra - Hydra: Column-oriented Postgres. Add scalable analytics to your project in minutes.
Rx.NET - The Reactive Extensions for .NET