ClickBench VS Apache Arrow

Compare ClickBench vs Apache Arrow and see what their differences are.

Apache Arrow

Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing (by apache)
                 ClickBench                                 Apache Arrow
Mentions         71                                         75
Stars            571                                        13,523
Growth           3.2%                                       1.1%
Activity         9.0                                        10.0
Last commit      2 days ago                                 6 days ago
Language         HTML                                       C++
License          GNU General Public License v3.0 or later   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ClickBench

Posts with mentions or reviews of ClickBench. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
  • Umbra: A Disk-Based System with In-Memory Performance [pdf]
    3 projects | news.ycombinator.com | 2 May 2024
    Benchmarks: https://benchmark.clickhouse.com

    So compared against PostgreSQL and MariaDB it is definitely significantly faster.

    On par with lower-end Snowflake.

  • Loading a trillion rows of weather data into TimescaleDB
    8 projects | news.ycombinator.com | 16 Apr 2024
    TimescaleDB primarily serves operational use cases: Developers building products on top of live data, where you are regularly streaming in fresh data, and you often know what many queries look like a priori, because those are powering your live APIs, dashboards, and product experience.

    That's different from a data warehouse or many traditional "OLAP" use cases, where you might dump a big dataset statically, and then people will occasionally do ad-hoc queries against it. This is the big weather dataset file sitting on your desktop that you occasionally query while on holidays.

    So it's less about "can you store weather data" and more about what that use case looks like: How are the queries shaped? Are you saving a single dataset for ad-hoc queries across the entire dataset, or continuously streaming in new data, and aging out or de-prioritizing old data?

    In most of the products we serve, customers are often interested in recent data in a very granular format ("shallow and wide"), or longer historical queries along a well defined axis ("deep and narrow").

    For example, this is where the benefits of TimescaleDB's segmented columnar compression emerge. It optimizes for queries that are very common in your application, e.g., an IoT application that groups or selects by deviceID, crypto/fintech analysis based on the ticker symbol, product analytics based on tenantID, etc.

    If you look at ClickBench, what most of the queries do is scan ALL the data in your database and GROUP BY one of the 100 columns in the web analytics logs (the contrast is sketched in the example after this post).

    - https://github.com/ClickHouse/ClickBench/blob/main/clickhous...

    There are almost no time-predicates in the benchmark that Clickhouse created, but perhaps that is not surprising given it was designed for ad-hoc weblog analytics at Yandex.

    So yes, Timescale serves many products today that use weather data, but has made different choices than Clickhouse (or things like DuckDB, pg_analytics, etc) to serve those more operational use cases.
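
A minimal sketch of the contrast described in the post above, using DuckDB (which the post mentions) over a hypothetical hits.parquet extract of the ClickBench web-analytics table; the column names follow the public ClickBench schema, but the file and time window are illustrative and not taken from the post:

    import duckdb

    con = duckdb.connect()

    # ClickBench-style query shape: scan the whole table, GROUP BY one column,
    # no time predicate at all.
    con.sql("""
        SELECT RegionID, count(*) AS c
        FROM 'hits.parquet'
        GROUP BY RegionID
        ORDER BY c DESC
        LIMIT 10
    """).show()

    # "Operational" time-series shape: a tight time predicate plus a narrow
    # grouping key, the pattern the post says TimescaleDB is tuned for.
    con.sql("""
        SELECT CounterID, avg(ResolutionWidth) AS avg_width
        FROM 'hits.parquet'
        WHERE EventTime >= now() - INTERVAL 7 DAY
        GROUP BY CounterID
    """).show()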

  • Variant in Apache Doris 2.1.0: a new data type 8 times faster than JSON for semi-structured data analysis
    2 projects | dev.to | 27 Mar 2024
    We tested with 43 Clickbench SQL queries. Queries on the Variant columns are about 10% slower than those on pre-defined static columns, and 8 times faster than those on JSON columns. (For I/O reasons, most cold runs on JSONB data failed with OOM.)
  • Fair Benchmarking Considered Difficult (2018) [pdf]
    2 projects | news.ycombinator.com | 10 Mar 2024
    I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench

    It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.

    I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398

  • ClickBench – A Benchmark for Analytical DBMS
    1 project | news.ycombinator.com | 8 Feb 2024
  • FLaNK Stack 05 Feb 2024
    49 projects | dev.to | 5 Feb 2024
  • Why Postgres RDS didn't work for us
    4 projects | news.ycombinator.com | 3 Feb 2024
    Indeed, ClickHouse results were run on an older instance type of the same family and size (c5.4xlarge for ClickHouse and c6a.4xlarge for Timescale), so if anything ClickHouse results are at a slight disadvantage.

    This is an open source benchmark - we'd love contributions from Timescale enthusiasts if we missed something: https://github.com/ClickHouse/ClickBench/

  • Show HN: Stanchion – Column-oriented tables in SQLite
    3 projects | news.ycombinator.com | 31 Jan 2024
    Interesting project! Thank you for open-sourcing and sharing it. I agree that local and embedded analytics are an increasing trend; I see it too.

    A couple of questions:

    * I’m curious what the difficulties were in the implementation. I suspect it is quite a challenge to implement this support in the current SQLite architecture, and I would be curious to know which parts were tricky and what design trade-offs you were faced with.

    * Aside from ease of use (install an extension, no need for a separate analytical database system), I wonder if there are additional benefits users can anticipate from a single-system architecture vs running an embedded OLAP store like DuckDB or clickhouse-local / chdb side-by-side with SQLite? Do you anticipate performance or resource efficiency gains, for instance? (A minimal sketch of the side-by-side setup follows this post.)

    * I am also curious, what the main difficulty with bringing in a separate analytical database is, assuming it natively integrates with SQLite. I may be biased, but I doubt anything can approach the performance of native column-oriented systems, so I'm curious what the tipping point might be for using this extension vs using an embedded OLAP store in practice.

    Btw, would love for you or someone in the community to benchmark Stanchion in ClickBench and submit results! (https://github.com/ClickHouse/ClickBench/)

    Disclaimer: I work on ClickHouse.
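
As a hedged illustration of the "side-by-side" option raised in the questions above (not something from the post itself): DuckDB's SQLite extension can scan a SQLite file directly, so an embedded OLAP engine can run analytical queries next to the operational SQLite database. The file, table, and column names here are hypothetical.

    import duckdb

    con = duckdb.connect()
    con.sql("INSTALL sqlite")   # DuckDB's SQLite scanner extension
    con.sql("LOAD sqlite")

    # Run a columnar, analytical query directly over a row-oriented SQLite file.
    con.sql("""
        SELECT status, count(*) AS n, avg(amount) AS avg_amount
        FROM sqlite_scan('app.db', 'orders')
        GROUP BY status
    """).show()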

  • ClickBench: A Benchmark for Analytical Databases
    1 project | news.ycombinator.com | 22 Jan 2024
  • DuckDB performance improvements with the latest release
    8 projects | news.ycombinator.com | 6 Nov 2023

Apache Arrow

Posts with mentions or reviews of Apache Arrow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-05.
  • How moving from Pandas to Polars made me write better code without writing better code
    2 projects | dev.to | 5 Mar 2024
    In comes Polars: a brand new dataframe library, or as the author Ritchie Vink describes it... a query engine with a dataframe frontend. Polars is built on top of the Arrow memory format and is written in Rust, a modern, performant, and memory-safe systems programming language in the same space as C/C++.
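
A minimal, illustrative sketch (not from the post) of what "a query engine with a dataframe frontend" looks like in practice: the lazy API builds a query plan over Arrow-backed columns and only executes it on collect(). The file and column names are hypothetical.

    import polars as pl

    lazy = (
        pl.scan_csv("trades.csv")                  # lazy scan; nothing is read yet
          .filter(pl.col("price") > 0)             # predicate becomes part of the plan
          .group_by("ticker")
          .agg(pl.col("price").mean().alias("avg_price"))
    )

    df = lazy.collect()                            # the optimized plan runs here
    print(df)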
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    I learned yesterday about GoLang's assembler https://go.dev/doc/asm - after browsing how Arrow is implemented for different languages (my experience is mainly C/C++) - https://github.com/apache/arrow/tree/main/go/arrow/math - there are a bunch of .S ("asm" files) and I'm still not able to comprehend how these work exactly (I guess it'll take more reading) - it seems very peculiar.

    The last time I used inline assembly was back in Turbo/Borland Pascal, then a bit in Visual Studio (32-bit), until it got disabled. Then I did very little with gcc and its stricter specification (with the former you had to know how the ABI worked; with the latter too, but it was specced out).

    Anyway - I wasn't expecting to find this in "Go" :) But I guess you can always start with .go code then produce assembly (-S) then optimize it, or find/hire someone to do it.

  • Time Series Analysis with Polars
    2 projects | dev.to | 10 Dec 2023
    One is related to the heritage of being built around the NumPy library, which is great for processing numerical data but becomes an issue as soon as the data is anything else. Pandas 2.0 has started to bring in Arrow, but it's not yet the standard (you have to opt in, and according to the developers it's going to stay that way for the foreseeable future). Also, pandas's Arrow-based features are not yet entirely on par with its NumPy-based features. Polars was built around Arrow from the get-go. This makes it very powerful when it comes to exchanging data with other languages and reducing the number of in-memory copying operations, thus leading to better performance.
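
A hedged sketch of the opt-in mentioned above: pandas 2.x keeps NumPy-backed dtypes by default and Arrow-backed dtypes must be requested explicitly, whereas Polars reads into Arrow memory from the start. data.csv is a hypothetical file.

    import pandas as pd
    import polars as pl

    df_numpy = pd.read_csv("data.csv")                           # default NumPy-backed dtypes
    df_arrow = pd.read_csv("data.csv", dtype_backend="pyarrow")  # opt-in Arrow-backed dtypes

    print(df_numpy.dtypes)   # e.g. int64, float64, object
    print(df_arrow.dtypes)   # e.g. int64[pyarrow], string[pyarrow]

    df_polars = pl.read_csv("data.csv")                          # Arrow-native by default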
  • TXR Lisp
    2 projects | news.ycombinator.com | 8 Dec 2023
    IMO a good first step would be to use the txr FFI to write a library for Apache arrow: https://arrow.apache.org/
  • 3D desktop Game Engine scriptable in Python
    5 projects | news.ycombinator.com | 1 Nov 2023
    https://www.reddit.com/r/O3DE/comments/rdvxhx/why_python/ :

    > Python is used for scripting the editor only, not in-game behaviors.

    > For implementing entity behaviors the only out of box ways are C++, ScriptCanvas (visual scripting) or Lua. Python is currently not available for implementing game logic.

    C++, Lua, and Python all support a C FFI (foreign function interface) for cross-language function and method calls.

    "Using CFFI for embedding" https://cffi.readthedocs.io/en/latest/embedding.html :

    > You can use CFFI to generate C code which exports the API of your choice to any C application that wants to link with this C code. This API, which you define yourself, ends up as the API of a .so/.dll/.dylib library—or you can statically link it within a larger application.

    Apache Arrow already supports C, C++, Python, Rust, and Go, and its C GLib bindings support Lua:

    https://github.com/apache/arrow/tree/main/c_glib/example/lua :

    > Arrow Lua example: All example codes use LGI to use Arrow GLib based bindings

    pyarrow.from_numpy_dtype:
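
The post trails off at the pyarrow.from_numpy_dtype reference; as an illustrative (not quoted) example, that function maps a NumPy dtype to an Arrow type, and a pyarrow table built this way is the kind of data the C/GLib/Lua bindings above can consume. The values here are made up.

    import numpy as np
    import pyarrow as pa

    print(pa.from_numpy_dtype(np.dtype("float64")))   # -> double (an Arrow DataType)

    table = pa.table({
        "id": pa.array(np.arange(3)),
        "value": pa.array([0.1, 0.2, 0.3], type=pa.float64()),
    })
    print(table.schema)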

  • Show HN: Udsv.js – A faster CSV parser in 5KB (min)
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Interacting with Amazon S3 using AWS Data Wrangler (awswrangler) SDK for Pandas: A Comprehensive Guide
    5 projects | dev.to | 20 Aug 2023
    AWS Data Wrangler is a Python library that simplifies the process of interacting with various AWS services, built on top of some useful data tools and open-source projects such as Pandas, Apache Arrow and Boto3. It offers streamlined functions to connect to, retrieve, transform, and load data from AWS services, with a strong focus on Amazon S3.
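
A minimal hedged sketch of that workflow (bucket paths and columns are hypothetical, not from the post): read a Parquet dataset from S3 into pandas via awswrangler, transform it, and write it back.

    import awswrangler as wr

    df = wr.s3.read_parquet(path="s3://my-bucket/raw/")   # S3 -> pandas DataFrame (decoded via Arrow)

    df["total"] = df["price"] * df["quantity"]            # hypothetical transform

    wr.s3.to_parquet(
        df=df,
        path="s3://my-bucket/curated/",
        dataset=True,                                     # write as a (partitionable) dataset
    )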
  • Cap'n Proto 1.0
    10 projects | news.ycombinator.com | 28 Jul 2023
    Worker should really adopt Apache Arrow, which has a much bigger ecosystem.

    https://github.com/apache/arrow

  • C++ Jobs - Q3 2023
    3 projects | /r/cpp | 4 Jul 2023
    Apache Arrow
  • Wheel fails for pyarrow installation
    1 project | /r/learnpython | 16 Jun 2023
    I am aware of the fact that there are other posts about this issue, but none of the ideas to solve it worked for me, or sometimes none were found. The issue was discussed in the wheel GitHub repo last December and seems to be solved, but then it seems like I'm installing the wrong version? I simply used pip3 install pyarrow, is that wrong?

What are some alternatives?

When comparing ClickBench and Apache Arrow you can also consider the following projects:

starrocks - StarRocks, a Linux Foundation project, is a next-generation sub-second MPP OLAP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries. InfoWorld’s 2023 BOSSIE Award for best open source software.

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

duckdb - DuckDB is an in-process SQL OLAP Database Management System

h5py - HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.

ClickHouse - ClickHouse® is a free analytics DBMS for big data

Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing

hosts - 🔒 Consolidating and extending hosts files from several well-curated sources. Optionally pick extensions for porn, social media, and other categories.

FlatBuffers - FlatBuffers: Memory Efficient Serialization Library

TablePlus - TablePlus macOS issue tracker

polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust

clickhouse-bulk - Collects many small inserts to ClickHouse and send in big inserts