arc VS Apache Arrow

Compare arc vs Apache Arrow and see what their differences are.

arc

Arc is an opinionated framework for defining data pipelines which are predictable, repeatable and manageable. (by tripl-ai)

Apache Arrow

Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing (by apache)
|  | arc | Apache Arrow |
|---|---|---|
| Mentions | 14 | 75 |
| Stars | 166 | 13,523 |
| Growth (stars, month over month) | 1.8% | 2.5% |
| Activity | 5.3 | 10.0 |
| Last commit | 3 months ago | 1 day ago |
| Language | Scala | C++ |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

arc

Posts with mentions or reviews of arc. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-30.
  • Show HN: Box – Data Transformation Pipelines in Rust DataFusion
    4 projects | news.ycombinator.com | 30 Nov 2021
    A while ago I posted a link to [Arc](https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines which execute against [Apache Spark](https://spark.apache.org/).

    Today I would like to present a proof-of-concept implementation of the [Arc declarative ETL framework](https://arc.tripl.ai) against [Apache DataFusion](https://arrow.apache.org/datafusion/), which is an ANSI SQL (Postgres) execution engine based upon Apache Arrow and built with Rust.

    The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project to allow changing execution engines without having to rewrite the base business logic (the part that is valuable to your business). Instead, by defining an abstraction layer, we can change the execution engine and run the same logic with different execution characteristics.
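    As a rough sketch of what such a declarative pipeline definition can look like, here is a minimal Arc-style job configuration; the stage names, URIs and parameters below are illustrative rather than copied from the Arc documentation:

```json
{
  "stages": [
    {
      "type": "DelimitedExtract",
      "name": "load raw customer CSV",
      "environments": ["production", "test"],
      "inputURI": "s3a://datalake/raw/customer/*.csv",
      "outputView": "customer_raw",
      "header": true
    },
    {
      "type": "SQLTransform",
      "name": "clean and deduplicate customers",
      "environments": ["production", "test"],
      "inputURI": "s3a://datalake/sql/clean_customer.sql",
      "outputView": "customer_clean"
    },
    {
      "type": "ParquetLoad",
      "name": "write curated customer table",
      "environments": ["production", "test"],
      "inputView": "customer_clean",
      "outputURI": "s3a://datalake/curated/customer.parquet"
    }
  ]
}
```

    Because each stage only declares a source, a SQL transform or a sink, the same definition can in principle be executed by the Spark implementation or by the DataFusion proof-of-concept.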

    The benefit of DataFusion over Apache Spark is a significant increase in speed and a reduction in execution resource requirements. Even through a Docker-for-Mac inefficiency layer, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, end-to-end execution times of 0.5 seconds for the same example job (TPC-H) are possible. (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

    The purpose of this post is to gather feedback from the community on whether you would use a tool like this, what features would be required for you to use it (an MVP), and whether you would be interested in contributing to the project. I would also like to highlight the excellent work being done by the DataFusion/Arrow (and Apache) community in providing such amazing tools to us all as open source projects.

  • Apache Arrow Datafusion 5.0.0 release
    6 projects | news.ycombinator.com | 24 Aug 2021
    Disclosure: I am a contributor to DataFusion.

    I have done a lot of work in the ETL space in Apache Spark to build Arc (https://arc.tripl.ai/) and have ported a lot of the basic functionality of Arc to DataFusion as a proof-of-concept. The appeal to me of the Apache Spark and DataFusion engines is the ability to a) separate compute and storage and b) express transformation logic in SQL.

    Performance: From those early experiments, DataFusion would frequently finish processing an entire job _before_ the SparkContext could be started - even on a local Spark instance. Obviously this is at smaller data sizes, but in my experience a lot of ETL is about repeatable processes, not necessarily huge datasets.

    Compatibility: Those experiments were done a few months ago and the SQL compatibility of the DataFusion engine has improved extremely rapidly (WINDOW functions were recently added). There is still some missing SQL functionality (for example, to run all the TPC-H queries: https://github.com/apache/arrow-datafusion/tree/master/bench...) but it is moving quickly.
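    To make the SQL-on-DataFusion workflow above concrete, here is a minimal sketch using the DataFusion Python bindings; the Parquet path and column names are hypothetical, and the exact API has shifted between releases:

```python
# A sketch of running SQL (including a window function) over a Parquet file
# with the DataFusion Python bindings. The file path and column names are
# hypothetical; the class was called ExecutionContext in older releases.
from datafusion import SessionContext

ctx = SessionContext()
ctx.register_parquet("lineitem", "data/lineitem.parquet")

df = ctx.sql("""
    SELECT
        l_orderkey,
        l_extendedprice,
        SUM(l_extendedprice) OVER (PARTITION BY l_orderkey) AS order_total
    FROM lineitem
    LIMIT 10
""")

# collect() materialises the result as Arrow record batches
for batch in df.collect():
    print(batch.to_pydict())
```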

  • Arc - an opinionated framework for defining data pipelines which are predictable, repeatable and manageable.
    1 project | /r/bigdata | 25 Mar 2021
    1 project | /r/coding | 25 Mar 2021
    1 project | /r/programming | 25 Mar 2021
    2 projects | /r/functionalprogramming | 25 Mar 2021
    1 project | /r/dataengineering | 25 Mar 2021
    1 project | /r/scala | 25 Mar 2021
    1 project | /r/coolgithubprojects | 25 Mar 2021
    1 project | /r/opensource | 25 Mar 2021

Apache Arrow

Posts with mentions or reviews of Apache Arrow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-05.
  • How moving from Pandas to Polars made me write better code without writing better code
    2 projects | dev.to | 5 Mar 2024
    In comes Polars: a brand-new dataframe library, or, as the author Ritchie Vink describes it, a query engine with a dataframe frontend. Polars is built on top of the Arrow memory format and is written in Rust, a modern, performant, and memory-safe systems programming language similar to C/C++.
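    Because Polars stores its columns in the Arrow memory format, moving data between Polars and pyarrow is essentially a metadata handover; a minimal sketch (the DataFrame contents are arbitrary example data):

```python
# Round-trip data between Polars and pyarrow; because Polars stores columns
# in the Arrow format, this is essentially a zero-copy handover.
import polars as pl
import pyarrow as pa

df = pl.DataFrame({"city": ["Berlin", "Osaka"], "temp_c": [21.5, 27.0]})

arrow_table = df.to_arrow()            # Polars -> pyarrow.Table
assert isinstance(arrow_table, pa.Table)

df_again = pl.from_arrow(arrow_table)  # pyarrow.Table -> Polars
print(df_again)
```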
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    I learned yesterday about Go's assembler (https://go.dev/doc/asm) after browsing how Arrow is implemented for different languages (my experience is mainly C/C++): https://github.com/apache/arrow/tree/main/go/arrow/math - there are a bunch of .S ("asm") files and I'm still not able to comprehend exactly how these work (I guess it'll take more reading); it seems very peculiar.

    The last time I used inline assembly was back in Turbo/Borland Pascal, then a bit in Visual Studio (32-bit), until it got disabled. After that I did very little with gcc and its stricter specification (with the former you had to know how the ABI worked; with the latter too, but it was specced out).

    Anyway - I wasn't expecting to find this in "Go" :) But I guess you can always start with .go code, then produce assembly (-S), then optimize it, or find/hire someone to do it.

  • Time Series Analysis with Polars
    2 projects | dev.to | 10 Dec 2023
    One is related to the heritage of being built around the NumPy library, which is great for processing numerical data but becomes an issue as soon as the data is anything else. Pandas 2.0 has started to bring in Arrow, but it's not yet the standard (you have to opt in, and according to the developers it's going to stay that way for the foreseeable future). Also, pandas's Arrow-based features are not yet entirely on par with its NumPy-based features. Polars was built around Arrow from the get-go. This makes it very powerful when it comes to exchanging data with other languages and reducing the number of in-memory copying operations, thus leading to better performance.
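    A minimal sketch of that difference: in pandas 2.x the Arrow backend is opt-in, while Polars uses Arrow memory by default (the CSV file name is a placeholder):

```python
# pandas 2.x: Arrow-backed dtypes are opt-in via dtype_backend;
# Polars is Arrow-native by default. "events.csv" is a placeholder path.
import pandas as pd
import polars as pl

pdf = pd.read_csv("events.csv", dtype_backend="pyarrow")
print(pdf.dtypes)       # e.g. int64[pyarrow], string[pyarrow]

pldf = pl.read_csv("events.csv")
print(pldf.schema)      # Arrow-backed columns, no opt-in needed
```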
  • TXR Lisp
    2 projects | news.ycombinator.com | 8 Dec 2023
    IMO a good first step would be to use the txr FFI to write a library for Apache Arrow: https://arrow.apache.org/
  • 3D desktop Game Engine scriptable in Python
    5 projects | news.ycombinator.com | 1 Nov 2023
    https://www.reddit.com/r/O3DE/comments/rdvxhx/why_python/ :

    > Python is used for scripting the editor only, not in-game behaviors.

    > For implementing entity behaviors the only out of box ways are C++, ScriptCanvas (visual scripting) or Lua. Python is currently not available for implementing game logic.

    C++, Lua, and Python all support a C FFI (C Foreign Function Interface) for cross-language function and method calls.

    "Using CFFI for embedding" https://cffi.readthedocs.io/en/latest/embedding.html :

    > You can use CFFI to generate C code which exports the API of your choice to any C application that wants to link with this C code. This API, which you define yourself, ends up as the API of a .so/.dll/.dylib library—or you can statically link it within a larger application.
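    A minimal sketch of the CFFI embedding mode described in that quote; the exported add() function is a made-up example:

```python
# CFFI embedding sketch: build a shared library whose exported C API is
# implemented in Python. The add() function is a made-up example.
import cffi

ffibuilder = cffi.FFI()

# C declarations the resulting .so/.dll/.dylib will export
ffibuilder.embedding_api("int add(int x, int y);")

ffibuilder.set_source("my_plugin", "")

# Python code that runs when the library is loaded and implements the API
ffibuilder.embedding_init_code("""
    from my_plugin import ffi

    @ffi.def_extern()
    def add(x, y):
        return x + y
""")

ffibuilder.compile(target="libmy_plugin.*", verbose=True)
```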

    Apache Arrow already supports C, C++, Python, Rust, and Go, and its C GLib bindings support Lua:

    https://github.com/apache/arrow/tree/main/c_glib/example/lua :

    > Arrow Lua example: All example codes use LGI to use Arrow GLib based bindings

    pyarrow.from_numpy_dtype:
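    That helper maps a NumPy dtype to the corresponding Arrow data type, for example:

```python
# pyarrow.from_numpy_dtype converts a NumPy dtype into the equivalent Arrow type
import numpy as np
import pyarrow as pa

print(pa.from_numpy_dtype(np.dtype("int64")))    # int64
print(pa.from_numpy_dtype(np.dtype("float64")))  # double
```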

  • Show HN: Udsv.js – A faster CSV parser in 5KB (min)
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Interacting with Amazon S3 using AWS Data Wrangler (awswrangler) SDK for Pandas: A Comprehensive Guide
    5 projects | dev.to | 20 Aug 2023
    AWS Data Wrangler is a Python library that simplifies the process of interacting with various AWS services, built on top of some useful data tools and open-source projects such as Pandas, Apache Arrow and Boto3. It offers streamlined functions to connect to, retrieve, transform, and load data from AWS services, with a strong focus on Amazon S3.
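    A minimal sketch of that S3-centric workflow; the bucket name, prefixes and column names are placeholders:

```python
# awswrangler sketch: read a Parquet dataset from S3 into pandas, transform
# it, and write it back. Bucket, prefixes and column names are placeholders.
import awswrangler as wr

df = wr.s3.read_parquet(path="s3://my-bucket/raw/orders/")

df["total"] = df["quantity"] * df["unit_price"]

wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/curated/orders/",
    dataset=True,      # write as a dataset (enables mode= and partitioning)
    mode="overwrite",
)
```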
  • Cap'n Proto 1.0
    10 projects | news.ycombinator.com | 28 Jul 2023
    Worker should really adopt Apache Arrow, which has a much bigger ecosystem.

    https://github.com/apache/arrow
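    For context, Arrow's IPC stream format plays a comparable serialization role to a Cap'n Proto message; a minimal pyarrow sketch:

```python
# Serialize an Arrow table to the IPC stream format and read it back:
# the kind of schema-aware, zero-copy-friendly interchange being discussed.
import pyarrow as pa
import pyarrow.ipc as ipc

table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})

sink = pa.BufferOutputStream()
with ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)

buf = sink.getvalue()
with ipc.open_stream(buf) as reader:
    roundtrip = reader.read_all()

assert roundtrip.equals(table)
```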

  • C++ Jobs - Q3 2023
    3 projects | /r/cpp | 4 Jul 2023
    Apache Arrow
  • Wheel fails for pyarrow installation
    1 project | /r/learnpython | 16 Jun 2023
    I am aware that there are other posts about this issue, but none of the suggested fixes worked for me, or sometimes none were offered. The issue was discussed on the wheel GitHub last December and seems to be solved, but then it seems like I'm installing the wrong version? I simply used pip3 install pyarrow, is that wrong?