arc VS db-benchmark

Compare arc vs db-benchmark and see what their differences are.

arc

Arc is an opinionated framework for defining data pipelines which are predictable, repeatable and manageable. (by tripl-ai)

db-benchmark

reproducible benchmark of database-like ops (by h2oai)
|               | arc          | db-benchmark               |
|---------------|--------------|----------------------------|
| Mentions      | 14           | 91                         |
| Stars         | 166          | 319                        |
| Growth        | 1.8%         | 0.9%                       |
| Activity      | 5.3          | 0.0                        |
| Latest commit | 2 months ago | 10 months ago              |
| Language      | Scala        | R                          |
| License       | MIT License  | Mozilla Public License 2.0 |
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

arc

Posts with mentions or reviews of arc. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-30.
  • Show HN: Box – Data Transformation Pipelines in Rust DataFusion
    4 projects | news.ycombinator.com | 30 Nov 2021
A while ago I posted a link to [Arc](https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines which execute against [Apache Spark](https://spark.apache.org/).

Today I would like to present a proof-of-concept implementation of the [Arc declarative ETL framework](https://arc.tripl.ai) against [Apache DataFusion](https://arrow.apache.org/datafusion/), an ANSI SQL (PostgreSQL dialect) execution engine built in Rust on top of Apache Arrow.

The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project, to allow changing execution engines without having to rewrite the base business logic (the part that is valuable to your business). Instead, by defining an abstraction layer, we can swap the execution engine and run the same logic with different execution characteristics.
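    To make that abstraction concrete, here is a minimal sketch of the idea in Python. The stage types and field names are illustrative, not Arc's actual stage schema, and the engine shown is pandas plus DuckDB rather than Spark or DataFusion:

    ```python
    # Illustrative only: the pipeline is plain data; the engine interpreting
    # it is pluggable. Stage and field names here are hypothetical.
    import duckdb
    import pandas as pd

    PIPELINE = [
        {"type": "extract", "uri": "customers.csv", "outputView": "customers"},
        {"type": "sql",
         "sql": "SELECT * FROM customers WHERE active = 1",
         "outputView": "active_customers"},
    ]

    def run_with_pandas_duckdb(pipeline):
        """One possible engine; a Spark or DataFusion backend could interpret
        the same stage list without the pipeline definition changing."""
        views, con = {}, duckdb.connect()
        for stage in pipeline:
            if stage["type"] == "extract":
                views[stage["outputView"]] = pd.read_csv(stage["uri"])
            elif stage["type"] == "sql":
                for name, df in views.items():
                    con.register(name, df)
                views[stage["outputView"]] = con.execute(stage["sql"]).df()
        return views
    ```

    Swapping engines means writing another `run_with_*` interpreter; the pipeline definition itself - the business logic - never changes.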

    The benefit of DataFusion over Apache Spark is a significant increase in speed and a reduction in execution resource requirements. Even through a Docker-for-Mac inefficiency layer, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, end-to-end execution times of 0.5 seconds are possible for the same example job (TPC-H). (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

    The purpose of this post is to gather feedback from the community on whether you would use a tool like this, what features would be required for you to use it (MVP), or whether you would be interested in contributing to the project. I would also like to highlight the excellent work being done by the DataFusion/Arrow (and Apache) community in providing such amazing tools to us all as open source projects.

  • Apache Arrow Datafusion 5.0.0 release
    6 projects | news.ycombinator.com | 24 Aug 2021
    Disclosure: I am a contributor to DataFusion.

    I have done a lot of work in the ETL space in Apache Spark to build Arc (https://arc.tripl.ai/) and have ported a lot of the basic functionality of Arc to DataFusion as a proof-of-concept. The appeal to me of the Apache Spark and DataFusion engines is the ability to a) separate compute and storage and b) express transformation logic in SQL.
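    As a rough illustration of b), this is what an SQL-expressed transformation looks like through DataFusion's Python bindings (a sketch; the file, table, and column names are placeholders):

    ```python
    from datafusion import SessionContext

    ctx = SessionContext()
    # Storage is just a file the engine points at; compute lives elsewhere.
    ctx.register_csv("orders", "orders.csv")

    # The transformation logic itself is plain SQL.
    df = ctx.sql("""
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    """)
    print(df.to_pandas())
    ```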

    Performance: From those early experiments, DataFusion would frequently finish processing an entire job _before_ the SparkContext could be started - even on a local Spark instance. Obviously this is at smaller data sizes, but in my experience a lot of ETL is about repeatable processes, not necessarily huge datasets.

    Compatibility: Those experiments were done a few months ago, and the SQL compatibility of the DataFusion engine has improved extremely rapidly (WINDOW functions were recently added). There is still some missing SQL functionality (for example, to run all the TPC-H queries: https://github.com/apache/arrow-datafusion/tree/master/bench...), but it is moving quickly.

  • Arc - an opinionated framework for defining data pipelines which are predictable, repeatable and manageable.
    1 project | /r/bigdata | 25 Mar 2021
    1 project | /r/coding | 25 Mar 2021
    1 project | /r/programming | 25 Mar 2021
    2 projects | /r/functionalprogramming | 25 Mar 2021
    1 project | /r/dataengineering | 25 Mar 2021
    1 project | /r/scala | 25 Mar 2021
    1 project | /r/coolgithubprojects | 25 Mar 2021
    1 project | /r/opensource | 25 Mar 2021

db-benchmark

Posts with mentions or reviews of db-benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-08.
  • Database-Like Ops Benchmark
    1 project | news.ycombinator.com | 28 Jan 2024
  • Polars
    11 projects | news.ycombinator.com | 8 Jan 2024
    Real-world performance is complicated since data science covers a lot of use cases.

    If you're just reading a small CSV to do analysis on it, then there will be no human-perceptible difference between Polars and Pandas. If you're reading a larger CSV with 100k rows, there still won't be much of a perceptible difference.

    Per this (old) benchmark, there are differences once you get into 500MB+ territory: https://h2oai.github.io/db-benchmark/
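    For a sense of why the difference is imperceptible at small scale, here is the same small-CSV aggregation in both libraries (a sketch with placeholder file and column names); at these sizes both finish in milliseconds:

    ```python
    import pandas as pd
    import polars as pl

    # pandas: read a small CSV and aggregate
    pd_total = pd.read_csv("sales.csv").groupby("region")["amount"].sum()

    # polars: same logic, near-identical ergonomics; at this scale the
    # wall-clock difference between the two is not human-perceptible
    pl_total = (
        pl.read_csv("sales.csv")
          .group_by("region")
          .agg(pl.col("amount").sum())
    )
    ```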

  • DuckDB performance improvements with the latest release
    8 projects | news.ycombinator.com | 6 Nov 2023
    I do think it was important for duckdb to put out a new version of the results, as the earlier version of that benchmark [1] had gone dormant with a very old version of duckdb that showed very bad performance, especially against polars.

    [1] https://h2oai.github.io/db-benchmark/

  • Show HN: SimSIMD vs. SciPy: How AVX-512 and SVE make SIMD cleaner and ML faster
    16 projects | news.ycombinator.com | 7 Oct 2023
    https://news.ycombinator.com/item?id=33270638 :

    > Apache Ballista and Polars do Apache Arrow and SIMD.

    > The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin} but not yet PostgresML? https://h2oai.github.io/db-benchmark/

    LLM -> Vector database: https://en.wikipedia.org/wiki/Vector_database

    /? inurl:awesome site:github.com "vector database"

  • Pandas vs. Julia – cheat sheet and comparison
    7 projects | news.ycombinator.com | 17 May 2023
    I agree with your conclusion but want to add that switching from Julia may not make sense either.

    According to these benchmarks: https://h2oai.github.io/db-benchmark/, DF.jl is the fastest library for some things, data.table for others, polars for others. Which is fastest depends on the query and whether it takes advantage of the features/properties of each.

    For what it's worth, data.table is my favourite to use and I believe it has the nicest ergonomics of the three I spoke about.

  • Any faster Python alternatives?
    6 projects | /r/learnprogramming | 12 Apr 2023
    Same. Numba does wonders for me in most scenarios. Yesterday I discovered pola-rs, and it looks like I will add it to the stack. Its API is similar to pandas. Have a look at the benchmarks of cuDF, spark, dask, and pandas compared to it: Benchmarks
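    For readers unfamiliar with Numba, its typical pattern is a one-decorator JIT compile of numeric Python code (a minimal sketch, not from the quoted post):

    ```python
    import numpy as np
    from numba import njit

    @njit  # compiled to machine code on first call
    def running_total(values):
        total = 0.0
        for v in values:  # a plain Python loop, but JIT-compiled
            total += v
        return total

    arr = np.random.rand(10_000_000)
    running_total(arr)  # first call pays the compile cost; later calls are fast
    ```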
  • Pandas 2.0 (with pyarrow) vs Pandas 1.3 - Performance comparison
    1 project | /r/datascience | 8 Apr 2023
    The syntax has similarities with dplyr in terms of the way you chain operations, and it’s around an order of magnitude faster than pandas and dplyr (there’s a nice benchmark here). It’s also more memory-efficient and can handle larger-than-memory datasets via streaming if needed.
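    The chaining and streaming referred to above look roughly like this in polars (a sketch with placeholder names; `collect(streaming=True)` is the out-of-core flag in recent polars releases):

    ```python
    import polars as pl

    # A lazy, chained query; streaming collection processes the input in
    # batches, so the dataset does not have to fit in memory.
    result = (
        pl.scan_csv("events.csv")            # lazy scan: nothing is read yet
          .filter(pl.col("status") == "ok")  # chained, dplyr-like verbs
          .group_by("user_id")
          .agg(pl.col("latency_ms").mean().alias("avg_latency"))
          .collect(streaming=True)           # streaming, out-of-core execution
    )
    ```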
  • Pandas v2.0 Released
    5 projects | news.ycombinator.com | 3 Apr 2023
    If interested in benchmarks comparing different dataframe implementations, here is one:

    https://h2oai.github.io/db-benchmark/

  • Database-like ops benchmark
    1 project | /r/dataengineering | 16 Feb 2023
  • Python "programmers" when I show them how much faster their naive code runs when translated to C++ (this is a joke, I love python)
    2 projects | /r/ProgrammerHumor | 17 Jan 2023
    Bad examples. Both numpy and pandas are notoriously un-optimized packages, losing handily to pretty much all their competitors (R, Julia, kdb+, vaex, polars). See https://h2oai.github.io/db-benchmark/ for a partial comparison.