duckdb
ClickHouse
| | duckdb | ClickHouse |
|---|---|---|
| Mentions | 51 | 207 |
| Stars | 15,710 | 33,712 |
| Growth | 10.4% | 2.4% |
| Activity | 10.0 | 10.0 |
| Last commit | 5 days ago | about 14 hours ago |
| Language | C++ | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
duckdb
-
DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
Just had a look (https://github.com/duckdb/duckdb/issues/9399). Yeah, it's worrying that such a trivial query returned incorrect results - but credit to the devs for getting it fixed quickly.
To my knowledge the only databases that can be described as "military-grade" in terms of testing are SQLite and Postgres.
-
Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau [1] and DuckDB [2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
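A minimal sketch of that query-in-place idea using DuckDB's Python API (the bucket path and column names are hypothetical, and the httpfs extension is assumed to be available with S3 credentials already configured):

```python
import duckdb  # pip install duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # extension for reading s3:// / https:// URLs
con.execute("LOAD httpfs")

# Query the Parquet files where they already live instead of copying them
# into a central store; each node in a Bacalhau-style setup could run the
# same query over its own local shard.
result = con.sql("""
    SELECT status, count(*) AS n
    FROM read_parquet('s3://my-bucket/logs/*.parquet')  -- hypothetical path
    GROUP BY status
    ORDER BY n DESC
""").fetchall()
print(result)
```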
- DuckDB 0.9.0
-
Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
-
Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, clickhouse and duckdb, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable, maybe 2-3x slower than clickhouse and duckdb on some queries, if pg is configured correctly and runs on compressed storage
- clickhouse and duckdb start falling apart very fast because they are specialized for a very narrow type of queries: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
-
🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs DuckDB at the version provided as input.
-
Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
-
CSV or Parquet File Format
The Parquet-Go library is very complex, and I haven't managed to use it successfully yet. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
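For what it's worth, DuckDB can already serve as that API: it reads CSV and writes Parquet directly, so you never have to touch a Parquet library yourself. A minimal sketch (file names are hypothetical):

```python
import duckdb  # pip install duckdb

# Convert CSV to Parquet in one statement; read_csv_auto infers the
# column names and types from the file itself.
duckdb.sql("""
    COPY (SELECT * FROM read_csv_auto('input.csv'))
    TO 'output.parquet' (FORMAT PARQUET)
""")
```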
-
DuckDB 0.8.0
Another cool new feature that's not mentioned in the blog post is function chaining:
https://github.com/duckdb/duckdb/pull/6725
I've been using DuckDB for filtering and post-processing data, especially strings, and this will make writing complex queries easier. By combining nested functions[0] and text functions[1], sometimes I don't even need to go into a Python notebook.
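A quick sketch of what the chaining syntax buys you, via the Python API (assumes DuckDB >= 0.8.0; the sample string is made up):

```python
import duckdb  # pip install duckdb

# x.f(y) is rewritten to f(x, y), so text functions chain left-to-right
# instead of nesting: split(lower(trim(s)), ' ') becomes
# (s).trim().lower().split(' ').
rows = duckdb.sql("""
    SELECT ('  Hello DuckDB  ').trim().lower().split(' ') AS words
""").fetchall()
print(rows)  # expected: [(['hello', 'duckdb'],)]
```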
ClickHouse
-
Build time is a collective responsibility
In our repository, I've set up a few hard limits: each translation unit cannot spend more than a certain amount of memory or CPU time during compilation, and the compiled binary must be no larger than a certain size.
When these limits are reached, the CI stops working, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121
Although these limits are too generous as of today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.
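A sketch of how such a per-process cap can be enforced in a CI wrapper on Linux (the wrapper itself and the compiler invocation are hypothetical; the numbers are the limits quoted above):

```python
import resource
import subprocess
import sys

CPU_SECONDS = 1000            # max CPU time per translation unit
MEMORY_BYTES = 5 * 1024 ** 3  # max address space: 5 GB

def set_limits():
    # Runs in the child between fork() and exec(); the kernel kills the
    # compiler if it exceeds either limit.
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))

# Usage: python wrap_compile.py file.cpp file.o
proc = subprocess.run(
    ["clang++", "-c", sys.argv[1], "-o", sys.argv[2]],
    preexec_fn=set_limits,
)
sys.exit(proc.returncode)
```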
-
Fair Benchmarking Considered Difficult (2018) [pdf]
I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench
It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.
I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398
-
How to choose the right type of database
ClickHouse: A fast open-source column-oriented database management system. ClickHouse is designed for real-time analytics on large datasets and excels in high-speed data insertion and querying, making it ideal for real-time monitoring and reporting.
-
Writing UDF for Clickhouse using Golang
Today we're going to create a UDF (user-defined function) in Golang that can be run inside a ClickHouse query. This function will parse a UUID v1 and return its timestamp, since ClickHouse doesn't have this function for now. Inspired by the Python version with the TabSeparated delimiter (since it's the easiest to parse), a UDF in ClickHouse reads line by line (each row is a line, and each tab-separated text is a column/cell value):
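To illustrate the protocol itself (not the Golang code from the article), here is a minimal Python sketch of such a TabSeparated executable UDF: one UUID per input line on stdin, one timestamp per output line on stdout. The constant is the offset between the UUID epoch (1582-10-15) and the Unix epoch, in 100-nanosecond ticks:

```python
import sys
import uuid
from datetime import datetime, timezone

# UUID v1 timestamps count 100-ns ticks since 1582-10-15; subtracting
# this offset converts them to ticks since the Unix epoch.
UUID_EPOCH_OFFSET = 0x01B21DD213814000

for line in sys.stdin:
    u = uuid.UUID(line.strip())  # one row per line, a single UUID column here
    seconds = (u.time - UUID_EPOCH_OFFSET) / 1e7
    print(datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat())
    sys.stdout.flush()  # don't buffer: the server waits for each block's results
```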
-
The 2024 Web Hosting Report
For the third, examples here might be analytics plugins in specialized databases like Clickhouse, data-transformations in places like your ETL pipeline using Airflow or Fivetran, or special integrations in your authentication workflow with Auth0 hooks and rules.
-
Choosing Between a Streaming Database and a Stream Processing Framework in Python
Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine at user-initiated analytical queries; for example, you might write a query that efficiently analyzes historical data to find the most-clicked products over the past month. In contrast with streaming databases, though, they may not be optimized for incremental computation, which makes it hard to keep results fresh. A query in a streaming database focuses on recent data, making it suitable for continuous monitoring: you can run queries like finding the top 10 sold products, where the “top 10 product list” might change in real time.
-
Proton, a fast and lightweight alternative to Apache Flink
Proton is a lightweight streaming processing "add-on" for ClickHouse, and we are making these delta parts as standalone as possible. Meanwhile contributing back to the ClickHouse community can also help a lot.
Please check this PR from the proton team: https://github.com/ClickHouse/ClickHouse/pull/54870
-
We Executed a Critical Supply Chain Attack on PyTorch
But I continue to find garbage in some of our CI scripts.
Here is an example: https://github.com/ClickHouse/ClickHouse/pull/58794/files
The right way is to:
- always pin versions of all packages;
Recently, there were two similar attempts at supply chain attacks on the ClickHouse repository, but:
- they didn't do anything, because CI does not run without approval;
- the user's account magically disappeared from GitHub, with all its pull requests, within a day.
Also worth reading a similar example: https://blog.cloudflare.com/cloudflares-handling-of-an-rce-v...
Also, let me recommend our bug bounty program: https://github.com/ClickHouse/ClickHouse/issues/38986 It sounds easy - pick your favorite fuzzer, find a segfault (it should be easy because C++ isn't a memory-safe language), and get your paycheck.
-
Why does musl make my Rust code so slow? (2020)
That is the case when you use the default malloc, default memcpy, or default string functions from libc.
In ClickHouse, we use jemalloc as the memory allocator and a custom memcpy: https://github.com/ClickHouse/ClickHouse/blob/master/base/gl...
So the musl build does not imply performance degradation. But the usage of musl is not related to Docker, because ClickHouse is a single self-contained binary anyway, and it is easy to use without Docker.
What are some alternatives?
loki - Like Prometheus, but for logs.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
arrow-datafusion - Apache Arrow DataFusion SQL Query Engine
RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.
materialize - The data warehouse for operational workloads.
PostgreSQL - Mirror of the official PostgreSQL GIT repository. Note that this is just a *mirror* - we don't work with pull requests on github. To contribute, please see https://wiki.postgresql.org/wiki/Submitting_a_Patch
TileDB - The Universal Storage Engine
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
Adminer - Database management in a single PHP file
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.