arc

Arc is an opinionated framework for defining data pipelines which are predictable, repeatable and manageable. (by tripl-ai)

Arc Alternatives

Similar projects and alternatives to arc

  • Apache Spark

    Apache Spark - A unified analytics engine for large-scale data processing

  • db-benchmark

    reproducible benchmark of database-like ops

  • Apache Arrow

    Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

  • datafusion

    Apache DataFusion SQL Query Engine

  • anarki

    Community-managed fork of the Arc dialect of Lisp; for commit privileges submit a pull request.

  • docker

    These are the official Dockerfiles for https://github.com/orgs/tripl-ai/packages (by tripl-ai)

  • box

    An experimental implementation of Arc against Apache Datafusion (by tripl-ai)

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a better arc alternative or higher similarity.

arc reviews and mentions

Posts with mentions or reviews of arc. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-30.
  • Show HN: Box – Data Transformation Pipelines in Rust DataFusion
    4 projects | news.ycombinator.com | 30 Nov 2021
    A while ago I posted a link to [Arc](https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines which execute against [Apache Spark](https://spark.apache.org/).

    Today I would like to present a proof-of-concept implementation of the [Arc declarative ETL framework](https://arc.tripl.ai) against [Apache DataFusion](https://arrow.apache.org/datafusion/), an ANSI SQL (PostgreSQL-style) execution engine built in Rust on top of Apache Arrow.

    The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project to allow changing execution engines without having to rewrite the base business logic (the part that is valuable to your business). Instead, by defining an abstraction layer, we can change the execution engine and run the same logic with different execution characteristics.
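    To make the idea of a declarative 'configuration' language concrete, a minimal Arc job sketch might look like the following. The stage types follow the pattern documented at arc.tripl.ai; the stage names, URIs, and environments here are placeholders for illustration only:

    ```hocon
    {
      "stages": [
        {
          // read a CSV file and register it as a view for later stages
          "type": "DelimitedExtract",
          "name": "extract orders",
          "environments": ["production", "test"],
          "inputURI": "s3a://example-bucket/orders.csv",
          "outputView": "orders",
          "header": true
        },
        {
          // business logic lives in plain SQL, decoupled from the engine
          "type": "SQLTransform",
          "name": "summarise orders",
          "environments": ["production", "test"],
          "inputURI": "s3a://example-bucket/sql/summarise_orders.sql",
          "outputView": "orders_summary"
        },
        {
          // write the result out as Parquet
          "type": "ParquetLoad",
          "name": "load summary",
          "environments": ["production", "test"],
          "inputView": "orders_summary",
          "outputURI": "s3a://example-bucket/output/orders_summary.parquet"
        }
      ]
    }
    ```

    Because the job is data (extract → transform → load stages plus SQL files), nothing in it names Spark or DataFusion directly, which is what makes swapping the execution engine underneath feasible.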

    The benefit of DataFusion over Apache Spark is a significant increase in speed and reduction in execution resource requirements. Even through the Docker-for-Mac virtualization overhead, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, end-to-end execution times of ~0.5 seconds for the same example job (TPC-H) are possible. (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

    The purpose of this post is to gather feedback from the community: whether you would use a tool like this, what features would be required for you to use it (MVP), or whether you would be interested in contributing to the project. I would also like to highlight the excellent work being done by the DataFusion/Arrow (and Apache) community in providing such amazing tools to us all as open source projects.

  • Apache Arrow Datafusion 5.0.0 release
    6 projects | news.ycombinator.com | 24 Aug 2021
    Disclosure: I am a contributor to Datafusion.

    I have done a lot of work in the ETL space in Apache Spark to build Arc (https://arc.tripl.ai/) and have ported a lot of the basic functionality of Arc to DataFusion as a proof-of-concept. The appeal to me of the Apache Spark and DataFusion engines is the ability to a) separate compute and storage and b) express transformation logic in SQL.

    Performance: In those early experiments DataFusion would frequently finish processing an entire job _before_ the SparkContext could be started, even on a local Spark instance. Obviously this is at smaller data sizes, but in my experience a lot of ETL is about repeatable processes, not necessarily huge datasets.

    Compatibility: Those experiments were done a few months ago, and the SQL compatibility of the DataFusion engine has improved extremely rapidly (WINDOW functions were recently added). There is still some missing SQL functionality (for example, to run all the TPC-H queries https://github.com/apache/arrow-datafusion/tree/master/bench...) but it is moving quickly.

  • Arc - an opinionated framework for defining data pipelines which are predictable, repeatable and manageable.
    1 project | /r/bigdata | 25 Mar 2021
    1 project | /r/coding | 25 Mar 2021
    1 project | /r/programming | 25 Mar 2021
    2 projects | /r/functionalprogramming | 25 Mar 2021
    1 project | /r/dataengineering | 25 Mar 2021
    1 project | /r/scala | 25 Mar 2021
    1 project | /r/coolgithubprojects | 25 Mar 2021
    1 project | /r/opensource | 25 Mar 2021

Stats

Basic arc repo stats
Mentions: 14
Stars: 166
Activity: 5.3
Last commit: 3 months ago

tripl-ai/arc is an open source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of arc is Scala.
