open-data vs ClickBench

| | open-data | ClickBench |
|---|---|---|
| Mentions | 1 | 72 |
| Stars | 78 | 587 |
| Growth | - | 2.9% |
| Activity | 6.4 | 9.1 |
| Latest commit | 4 months ago | 7 days ago |
| Language | HTML | - |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open-data
-
Loading a trillion rows of weather data into TimescaleDB
Creator of Open-Meteo here. There is a small tutorial on setting up ERA5 locally: https://github.com/open-meteo/open-data/tree/main/tutorial_d...
Under the hood, Open-Meteo uses a custom file format with time-series chunking and specialised compression for low-frequency weather data. General-purpose time-series databases do not even come close to this setup.
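As a rough sketch of that idea (illustrative only, not Open-Meteo's actual format; the chunk length, quantization step, and codec are all assumptions), chunking a per-location hourly series and delta-encoding each chunk before compression looks roughly like this:

```python
import numpy as np
import zlib

CHUNK_HOURS = 24 * 14  # assumed chunk length: two weeks of hourly values

def compress_series(values: np.ndarray) -> list[bytes]:
    """Split one location's hourly series into fixed-size chunks and
    delta-encode each chunk before general-purpose compression.
    Slowly varying weather data yields small deltas that compress well."""
    # Quantize to 0.05 units (an assumed precision) so deltas become small integers.
    quantized = np.round(values / 0.05).astype(np.int32)
    chunks = []
    for start in range(0, len(quantized), CHUNK_HOURS):
        chunk = quantized[start:start + CHUNK_HOURS]
        deltas = np.diff(chunk, prepend=np.int32(0))  # delta[0] carries the base value
        chunks.append(zlib.compress(deltas.tobytes()))
    return chunks

# Reading one station for a two-week window now touches a single small chunk
# instead of scanning the whole multi-year series.
temps = 15 + 10 * np.sin(np.linspace(0, 400, 24 * 365))  # fake one year of hourly data
print(sum(len(c) for c in compress_series(temps)), "bytes compressed")
```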
ClickBench
-
Pg_lakehouse: Query Any Data Lake from Postgres
You can see performance comparison to Hydra on ClickBench: https://benchmark.clickhouse.com/ by selecting ParadeDB and Hydra. Tl;dr: It is much faster.
From a feature-set perspective, in addition to querying local disk, we can query remote object stores (S3, GCS, etc.) and table format providers (Delta Lake, with Iceberg coming soon).
From a code perspective, we're written in Rust on top of open-source standards like OpenDAL and DataFusion, while Hydra is its own C codebase built from a fork of Citus columnar.
Hydra is a cool project. Hope this helps! :)
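To give a feel for the DataFusion layer mentioned above, here is a minimal sketch using DataFusion's Python bindings (the file path, table name, and columns are hypothetical; pg_lakehouse itself drives this machinery from inside Postgres, not from Python):

```python
from datafusion import SessionContext

ctx = SessionContext()

# Register a Parquet dataset as a table; with the appropriate object-store
# configuration, a path like this could equally point at s3:// or gs://.
ctx.register_parquet("trips", "data/trips.parquet")  # hypothetical local file

# DataFusion plans and executes the SQL directly over the columnar files.
df = ctx.sql("""
    SELECT vendor_id, COUNT(*) AS n, AVG(fare) AS avg_fare
    FROM trips
    GROUP BY vendor_id
    ORDER BY n DESC
""")
print(df.to_pandas())
```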
-
Umbra: A Disk-Based System with In-Memory Performance [pdf]
Benchmarks: https://benchmark.clickhouse.com
So compared against PostgreSQL and MariaDB, it is definitely significantly faster.
On par with lower-end Snowflake.
-
Loading a trillion rows of weather data into TimescaleDB
TimescaleDB primarily serves operational use cases: developers building products on top of live data, where you are regularly streaming in fresh data and often know a priori what many queries look like, because those queries power your live APIs, dashboards, and product experience.
That's different from a data warehouse or many traditional "OLAP" use cases, where you might dump a big dataset statically and people will occasionally run ad-hoc queries against it. This is the big weather dataset sitting on your desktop that you occasionally query while on holiday.
So it's less about "can you store weather data" and more about what that use case looks like. How are the queries shaped? Are you saving a single dataset for ad-hoc queries across the entire dataset, or continuously streaming in new data and aging out or de-prioritizing old data?
In most of the products we serve, customers are interested either in recent data in a very granular format ("shallow and wide") or in longer historical queries along a well-defined axis ("deep and narrow").
For example, this is where the benefits of TimescaleDB's segmented columnar compression emerge. It optimizes for the queries that are very common in your application, e.g., an IoT application that groups or filters by deviceID, crypto/fintech analysis based on the ticker symbol, product analytics based on tenantID, etc.
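As a concrete sketch of what that looks like (the table and column names here are hypothetical, and a running Postgres with the timescaledb extension is assumed; timescaledb.compress_segmentby and time_bucket are real TimescaleDB features):

```python
import psycopg2  # assumes a running Postgres with the timescaledb extension

conn = psycopg2.connect("dbname=metrics")  # hypothetical database
cur = conn.cursor()

# Segment compressed chunks by device_id so queries that filter or group
# on device_id only decompress the segments they touch.
# (Assumes `readings` is already a hypertable.)
cur.execute("""
    ALTER TABLE readings SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'device_id',
        timescaledb.compress_orderby = 'ts DESC'
    )
""")
conn.commit()

# A typical "deep and narrow" operational query along the segmented axis:
cur.execute("""
    SELECT time_bucket('1 hour', ts) AS hour, AVG(value)
    FROM readings
    WHERE device_id = %s AND ts > now() - interval '7 days'
    GROUP BY hour ORDER BY hour
""", ("dev-42",))
print(cur.fetchall())
```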
If you look at ClickBench, what most of the queries do is scan ALL the data in your database and GROUP BY one of the 100 columns in the web-analytics logs.
- https://github.com/ClickHouse/ClickBench/blob/main/clickhous...
There are almost no time predicates in the benchmark that ClickHouse created, but perhaps that is not surprising given it was designed for ad-hoc weblog analytics at Yandex.
So yes, Timescale serves many products today that use weather data, but it has made different choices than ClickHouse (or things like DuckDB, pg_analytics, etc.) to serve those more operational use cases.
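For contrast, the general shape of a ClickBench-style query (a representative paraphrase with hypothetical names, not a literal query from the suite) is a whole-table scan with no time predicate:

```python
# Shape of a typical ClickBench-style ad-hoc query: full scan of the table,
# aggregate by a single column, no WHERE clause bounding the time range.
CLICKBENCH_STYLE = """
    SELECT region_id, COUNT(*) AS hits
    FROM weblogs
    GROUP BY region_id
    ORDER BY hits DESC
    LIMIT 10
"""
```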
-
Variant in Apache Doris 2.1.0: a new data type 8 times faster than JSON for semi-structured data analysis
We tested with 43 ClickBench SQL queries. Queries on the Variant columns are about 10% slower than those on pre-defined static columns, and 8 times faster than those on JSON columns. (For I/O reasons, most cold runs on JSONB data failed with OOM.)
-
Fair Benchmarking Considered Difficult (2018) [pdf]
I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench
It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough; otherwise it will not be realistic to include a large number of participants.
I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398
- ClickBench – A Benchmark for Analytical DBMS
- FLaNK Stack 05 Feb 2024
-
Why Postgres RDS didn't work for us
Indeed, the ClickHouse results were run on an older instance type of the same family and size (c5.4xlarge for ClickHouse, c6a.4xlarge for Timescale), so if anything the ClickHouse results are at a slight disadvantage.
This is an open source benchmark - we'd love contributions from Timescale enthusiasts if we missed something: https://github.com/ClickHouse/ClickBench/
-
Show HN: Stanchion – Column-oriented tables in SQLite
Interesting project! Thank you for open sourcing and sharing. Agree that local and embedded analytics are an increasing trend, I see it too.
A couple of questions:
* I'm curious what the difficulties were in the implementation. I suspect it is quite a challenge to implement this support within the current SQLite architecture, and I would be curious to know which parts were tricky and which design trade-offs you faced.
* Aside from ease of use (install the extension, no need for a separate analytical database system), I wonder if there are additional benefits users can expect from a single-system architecture vs running an embedded OLAP store like DuckDB or clickhouse-local / chDB side-by-side with SQLite (see the sketch after this post). Do you anticipate performance or resource-efficiency gains, for instance?
* I am also curious what the main difficulty with bringing in a separate analytical database is, assuming it natively integrates with SQLite. I may be biased, but I doubt anything can approach the performance of native column-oriented systems, so I'm curious what the tipping point might be for using this extension vs an embedded OLAP store in practice.
Btw, would love for you or someone in the community to benchmark Stanchion in ClickBench and submit results! (https://github.com/ClickHouse/ClickBench/)
Disclaimer: I work on ClickHouse.
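On the side-by-side option in the second question above: an embedded OLAP engine can read SQLite files directly. A minimal DuckDB sketch, where the database file and table names are hypothetical (sqlite_scan comes from DuckDB's SQLite extension, the sqlite_scanner project listed under alternatives below):

```python
import duckdb

con = duckdb.connect()  # in-memory analytical engine beside the app's SQLite file
con.execute("INSTALL sqlite; LOAD sqlite;")  # DuckDB's SQLite extension

# Run a columnar aggregation directly over the row-oriented SQLite table.
rows = con.execute("""
    SELECT user_id, COUNT(*) AS n
    FROM sqlite_scan('app.db', 'events')
    GROUP BY user_id
    ORDER BY n DESC
    LIMIT 10
""").fetchall()
print(rows)
```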
- ClickBench: A Benchmark for Analytical Databases
What are some alternatives?
qstudio - qStudio - Free SQL Analysis Tool
starrocks - StarRocks, a Linux Foundation project, is a next-generation sub-second MPP OLAP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries.
hosts - 🔒 Consolidating and extending hosts files from several well-curated sources. Optionally pick extensions for porn, social media, and other categories.
duckdb - DuckDB is an analytical in-process SQL database management system
ClickHouse - ClickHouse® is a real-time analytics DBMS
TablePlus - TablePlus macOS issue tracker
clickhouse-bulk - Collects many small inserts to ClickHouse and sends them as big inserts
datafusion - Apache DataFusion SQL Query Engine
sqlite_scanner - DuckDB extension to read and write to SQLite databases
hydra - Hydra: Column-oriented Postgres. Add scalable analytics to your project in minutes.
Crate - CrateDB is a distributed and scalable SQL database for storing and analyzing massive amounts of data in near real-time, even with complex queries. It is PostgreSQL-compatible, and based on Lucene.
uptrace - Open source APM: OpenTelemetry traces, metrics, and logs