DuckDB Alternatives
Similar projects and alternatives to DuckDB
- InfluxDB
  A time series database built for high-performance time series workloads. InfluxDB 3 transforms, enriches, and acts on time series data directly in the database, automating tasks that would otherwise require moving data externally.
- react-admin
  A frontend framework for single-page applications on top of REST/GraphQL APIs, using TypeScript, React, and Material Design.
- octosql
  OctoSQL is a query tool that lets you join, analyse, and transform data from multiple databases and file formats using SQL.
- tidy-viewer
  📺 (tv) Tidy Viewer is a cross-platform CLI CSV pretty printer that uses column styling to maximize viewer enjoyment.
- TimescaleDB
  A time-series database for high-performance real-time analytics, packaged as a Postgres extension.
DuckDB reviews and mentions
- ClickHouse raises $350M Series C
Thanks for creating this issue; it is worth investigating!
I see you also created similar issues in Polars: https://github.com/pola-rs/polars/issues/17932 and DuckDB: https://github.com/duckdb/duckdb/issues/17066
ClickHouse has a built-in memory tracker, so even if there is not enough memory, it will stop the query and send an exception to the client, instead of crashing. It also allows fair sharing of memory between different workloads.
You need to provide more info on the issue for reproduction, e.g., how to fill the tables. 16 GB of memory should be enough even for a CROSS JOIN between a 10 billion-row and a 100-row table, because it is processed in a streaming fashion without accumulating a large amount of data in memory. The same should be true for a merge join.
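For context, a minimal sketch of the memory cap and streaming CROSS JOIN described above, in ClickHouse SQL (scaled down from the 10 billion rows mentioned; `numbers()` is ClickHouse's synthetic row generator):

```sql
-- Cap per-query memory: an over-budget query fails with an exception
-- instead of crashing the server.
SET max_memory_usage = 10000000000;  -- ~10 GB

-- A CROSS JOIN against a tiny table streams the big side row by row,
-- so the running aggregate needs no large in-memory buffer.
SELECT count(*)
FROM numbers(1000000000) AS big      -- 1 billion synthetic rows
CROSS JOIN (SELECT number FROM numbers(100)) AS small;
```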
However, there are places where a large buffer might be needed. For example, inserting data into a table backed by S3 storage requires a buffer that can be on the order of 500 MB.
There is a possibility that your machine has 16 GB of memory, but most of it is consumed by Chrome, Slack, or Safari, and not much is left for ClickHouse server.
- ClickHouse gets lazier (and faster): Introducing lazy materialization
It does, but apparently the performance isn't great: https://github.com/duckdb/duckdb/discussions/10161
- DuckDB 1.2.2 Released
- The DuckDB Local UI
I agree that the blog post hints in places that this functionality is fully baked in; we've adjusted the blog post to be more explicit that this is an extension.
We have collaborated with MotherDuck on streamlining the experience of launching the UI through auto-installation, but the DuckDB Foundation remains in full control of DuckDB and the extension ecosystem. This has no impact on that.
For further clarification:
* The auto-installation mechanism is identical to that of other trusted extensions: the auto-installation is triggered when a specific function is called that does not exist in the catalog, in this case the `start_ui` function. See [1]. The query I mentioned just calls that function. The only special feature here is the addition of the CLI flag (and what that flag executes is user-configurable). A minimal example follows the references below.
* The HTTP server is necessary for the extension to function, as the extension needs to communicate with the browser. The server is open source as part of the extension code [2]. The server (1) fetches web resources (JavaScript/CSS) from ui.duckdb.org, and (2) communicates with localhost to coordinate the UI with DuckDB. Outside of these, the server doesn't interface with any other external web services.
[1] https://github.com/duckdb/duckdb/blob/main/src/include/duckd...
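For illustration, a minimal sketch of the explicit SQL path that the auto-install shortcut wraps; `INSTALL`/`LOAD` and `CALL start_ui()` follow the documented trusted-extension flow described above:

```sql
-- Calling the function on a stock CLI triggers auto-installation of the
-- trusted ui extension, then starts the local HTTP server and opens the UI:
CALL start_ui();

-- Equivalent explicit form, without relying on auto-installation:
INSTALL ui;
LOAD ui;
CALL start_ui();
```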
- Should You Ditch Spark for DuckDB or Polars?
- Gah – CLI to install software from GitHub Releases
1) https://github.com/duckdb/duckdb/releases/download/v1.1.3/duckdb_cli-linux-amd64.zip
- Show HN: Trilogy – A Reusable, Composable SQL Experiment
Any particular examples you have in mind? The demo is just referencing https://github.com/duckdb/duckdb/tree/main/extension/tpcds/d... which I wouldn't regard as a standard of good SQL (implicit joins, yikes! a contrast is sketched below), but it is a useful capability reference (as is TPC-DS in general).
As I tried to convey, I like SQL a lot - my frustration is more around the lifecycle and maintainability.
Happy to add more ergonomic references in other places if you have some good examples to reference against.
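For readers unfamiliar with the style in question, a hedged sketch with hypothetical table names contrasting the implicit (comma) joins found in the TPC-DS reference queries with explicit ANSI joins:

```sql
-- Implicit (comma) join: the join condition hides in the WHERE clause.
SELECT s.amount
FROM sales s, customers c
WHERE s.customer_id = c.id
  AND c.region = 'EU';

-- Explicit ANSI join: the same query with the relationship stated up front.
SELECT s.amount
FROM sales s
JOIN customers c ON s.customer_id = c.id
WHERE c.region = 'EU';
```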
- SQL-92 in TPC Benchmarks: Are They Still Relevant?
I was reading "pg_duckdb beta release: Even faster analytics in Postgres", which demonstrates that TPC-DS Query 01 executes 1500 times faster on DuckDB than on PostgreSQL. Naturally, I was curious to see how this query performs in YugabyteDB. However, when I examined the SQL query that was used, which repeatedly accesses the same table and conducts analytics without utilizing analytic functions, I wondered: should we be spending time, in 2024, examining queries from analytics benchmarks written against SQL-92 while ignoring the window functions introduced in SQL:2003?
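To make that concrete, a hedged sketch with hypothetical names, loosely modeled on the shape of TPC-DS Query 01 (rows exceeding 1.2x their group average), first in SQL-92 style and then with a SQL:2003 window function:

```sql
-- SQL-92 style: the same table is scanned again via a correlated subquery.
SELECT r1.customer_id
FROM returns r1
WHERE r1.return_amt > (SELECT 1.2 * avg(r2.return_amt)
                       FROM returns r2
                       WHERE r2.store_id = r1.store_id);

-- SQL:2003 style: a window function computes the per-store average
-- in a single pass over the table.
SELECT customer_id
FROM (SELECT customer_id, return_amt,
             avg(return_amt) OVER (PARTITION BY store_id) AS store_avg
      FROM returns) AS t
WHERE return_amt > 1.2 * store_avg;
```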
- DuckDB v1.1.2
- DuckDB 1.1.0 Released
The last I read, the Spark API was to become the focal point.
https://duckdb.org/docs/api/python/spark_api
Not sure what the current status is.
ref: <https://github.com/duckdb/duckdb/issues/2000#issuecomment-18...>
Stats
duckdb/duckdb is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of DuckDB is C++.