hudi VS Apache Arrow

Compare hudi vs Apache Arrow and see what their differences are.

Apache Arrow

Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing (by apache)
                  hudi                  Apache Arrow
Mentions          20                    75
Stars             5,001                 13,338
Growth            2.0%                  2.0%
Activity          9.9                   10.0
Latest commit     6 days ago            5 days ago
Language          Java                  C++
License           Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

hudi

Posts with mentions or reviews of hudi. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-18.
  • Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
    4 projects | dev.to | 18 Dec 2023
    Apache Iceberg is one of the three major lakehouse table formats; the other two are Apache Hudi and Delta Lake.
  • The "Big Three's" Data Storage Offerings
    2 projects | /r/dataengineering | 15 Jun 2023
    Structured, semi-structured, and unstructured data can all be stored in one single lakehouse storage format like Delta, Iceberg, or Hudi (assuming the workloads don't require low-latency, sub-second SLAs).
  • Data-eng related highlights from the latest Thoughtworks Tech Radar
    3 projects | /r/dataengineering | 26 Apr 2023
    Apache Hudi
  • How-to-Guide: Contributing to Open Source
    19 projects | /r/dataengineering | 11 Jun 2022
    Apache Hudi
  • 4 best opensource projects about big data you should try out
    4 projects | dev.to | 24 Mar 2022
    1. Hudi
  • How Does The Data Lakehouse Enhance The Customer Data Stack?
    3 projects | dev.to | 31 Jan 2022
    A Lakehouse is an architecture that builds on top of the data lake concept and enhances it with functionality commonly found in database systems. The limitations of the data lake led to the emergence of a number of technologies, including Apache Iceberg and Apache Hudi. These technologies define a table format on top of storage formats like ORC and Parquet, on which additional functionality like transactions can be built (see the Hudi write sketch after this list).
  • SCD type 2 in spark
    2 projects | /r/dataengineering | 15 Oct 2021
    Use Hudi or Delta Lake.
  • Would ParquetWriter from pyarrow automatically flush?
    4 projects | /r/learnpython | 11 Sep 2021
    A pyarrow.parquet.ParquetWriter sketch that addresses this question follows this list.
  • Apache Hudi - The Streaming Data Lake Platform
    8 projects | dev.to | 27 Jul 2021
    But first, we needed to tackle the basics - transactions and mutability - on the data lake. In many ways, Apache Hudi pioneered the transactional data lake movement as we know it today. Specifically, during a time when more special-purpose systems were being born, Hudi introduced a serverless transaction layer that worked over the general-purpose Hadoop FileSystem abstraction on cloud stores/HDFS. This model helped Hudi scale writers and readers to thousands of cores on day one, compared to warehouses, which offer a richer set of transactional guarantees but are often bottlenecked by the tens of servers that have to handle them. It also brought us a lot of joy to see similar systems (e.g., Delta Lake) later adopt the same serverless transaction layer model that we originally shared back in early '17.

    We consciously introduced two table types, Copy On Write (with simpler operability) and Merge On Read (for greater flexibility), and these terms are now used in projects outside Hudi to refer to similar ideas borrowed from Hudi (a minimal sketch of choosing between them follows this list). Through open sourcing and graduating from the Apache Incubator, we have made great progress elevating these ideas across the industry, as well as bringing them to life in a cohesive software stack.

    Given the exciting developments over the past year or so that have propelled data lakes further into the mainstream, we thought some perspective could help users see Hudi with the right lens, appreciate what it stands for, and be a part of where it's headed. We also wanted to shine some light on all the great work done by 180+ contributors on the project, working with more than 2,000 unique users over Slack/GitHub/Jira, contributing all the different capabilities Hudi has gained over the past years, from its humble beginnings.
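To make the table-format idea from the lakehouse and Hudi posts above concrete, here is a minimal PySpark write sketch, not an official Hudi example: it assumes a Spark session launched with the hudi-spark bundle on the classpath, and the path, table name, and field names are placeholders invented for illustration.

    # Minimal sketch: writing a Hudi table, which lays a table format (metadata,
    # transactions) over plain Parquet data files. Paths and names are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hudi-sketch").getOrCreate()
    df = spark.createDataFrame(
        [("u1", "2021-07-27 10:00:00", 10), ("u2", "2021-07-27 11:00:00", 20)],
        ["uuid", "ts", "value"],
    )

    hudi_options = {
        "hoodie.table.name": "demo_table",
        # COPY_ON_WRITE keeps operations simple; MERGE_ON_READ trades that for
        # more flexible, lower-latency writes.
        "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
        "hoodie.datasource.write.recordkey.field": "uuid",
        "hoodie.datasource.write.precombine.field": "ts",
        "hoodie.datasource.write.operation": "upsert",
    }

    (df.write.format("hudi")
        .options(**hudi_options)
        .mode("append")
        .save("/tmp/hudi/demo_table"))  # data lands as Parquet files plus Hudi metadata

Re-running the same upsert with changed values updates the existing records keyed by uuid rather than appending duplicates, which is the transactional behavior the posts above describe.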
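On the "Would ParquetWriter from pyarrow automatically flush?" question: to the best of my knowledge the Parquet footer is only written when the writer is closed, so the safe pattern is an explicit close (or try/finally). A small sketch with made-up data:

    # Sketch: pyarrow's ParquetWriter writes a row group per write_table() call,
    # but the Parquet footer only lands when close() runs, so close explicitly.
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})

    writer = pq.ParquetWriter("example.parquet", table.schema)
    try:
        writer.write_table(table)
    finally:
        writer.close()  # without this, the file is not a complete, readable Parquet file

    print(pq.read_table("example.parquet").num_rows)  # 3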

Apache Arrow

Posts with mentions or reviews of Apache Arrow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-05.
  • How moving from Pandas to Polars made me write better code without writing better code
    2 projects | dev.to | 5 Mar 2024
    In comes Polars: a brand-new dataframe library, or, as its author Ritchie Vink describes it, a query engine with a dataframe frontend. Polars is built on top of the Arrow memory format and is written in Rust, a modern, performant, and memory-safe systems programming language similar to C/C++ (see the Polars/Arrow interchange sketch after this list).
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    I learned yesterday about Go's assembler https://go.dev/doc/asm - after browsing how Arrow is implemented for different languages (my experience is mainly C/C++) - https://github.com/apache/arrow/tree/main/go/arrow/math - there are a bunch of .S ("asm") files, and I'm still not able to comprehend how these work exactly (I guess it'll take more reading) - it seems very peculiar.

    The last time I used inline assembly was back in Turbo/Borland Pascal, then a bit in Visual Studio (32-bit), until it got disabled. Then I did very little with gcc and its stricter specification (with the former you had to know how the ABI worked; with the latter too, but it was specced out).

    Anyway - I wasn't expecting to find this in "Go" :) But I guess you can always start with .go code, then produce assembly (-S), then optimize it, or find/hire someone to do it.

  • Time Series Analysis with Polars
    2 projects | dev.to | 10 Dec 2023
    One limitation relates to pandas's heritage of being built around the NumPy library, which is great for processing numerical data but becomes an issue as soon as the data is anything else. Pandas 2.0 has started to bring in Arrow, but it's not yet the default (you have to opt in, and according to the developers it's going to stay that way for the foreseeable future); see the opt-in sketch after this list. Also, pandas's Arrow-based features are not yet entirely on par with its NumPy-based features. Polars was built around Arrow from the get-go, which makes it very powerful when it comes to exchanging data with other languages and reducing the number of in-memory copies, leading to better performance.
  • TXR Lisp
    2 projects | news.ycombinator.com | 8 Dec 2023
    IMO a good first step would be to use the txr FFI to write a library for Apache Arrow: https://arrow.apache.org/
  • 3D desktop Game Engine scriptable in Python
    5 projects | news.ycombinator.com | 1 Nov 2023
    https://www.reddit.com/r/O3DE/comments/rdvxhx/why_python/ :

    > Python is used for scripting the editor only, not in-game behaviors.

    > For implementing entity behaviors the only out of box ways are C++, ScriptCanvas (visual scripting) or Lua. Python is currently not available for implementing game logic.

    C++, Lua, and Python all support a C foreign function interface (FFI) for cross-language function and method calls.

    "Using CFFI for embedding" https://cffi.readthedocs.io/en/latest/embedding.html :

    > You can use CFFI to generate C code which exports the API of your choice to any C application that wants to link with this C code. This API, which you define yourself, ends up as the API of a .so/.dll/.dylib library—or you can statically link it within a larger application.

    Apache Arrow already supports C, C++, Python, Rust, and Go, and its C GLib bindings support Lua:

    https://github.com/apache/arrow/tree/main/c_glib/example/lua :

    > Arrow Lua example: all of the example code uses LGI to access the Arrow GLib-based bindings

    pyarrow.from_numpy_dtype: (a short sketch of this call follows this list)

  • Show HN: Udsv.js – A faster CSV parser in 5KB (min)
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Interacting with Amazon S3 using AWS Data Wrangler (awswrangler) SDK for Pandas: A Comprehensive Guide
    5 projects | dev.to | 20 Aug 2023
    AWS Data Wrangler is a Python library that simplifies interacting with various AWS services, built on top of useful data tools and open-source projects such as pandas, Apache Arrow, and Boto3. It offers streamlined functions to connect to, retrieve, transform, and load data from AWS services, with a strong focus on Amazon S3 (see the short sketch after this list).
  • Cap'n Proto 1.0
    10 projects | news.ycombinator.com | 28 Jul 2023
    Worker should really adopt Apache Arrow, which has a much bigger ecosystem.

    https://github.com/apache/arrow

  • C++ Jobs - Q3 2023
    3 projects | /r/cpp | 4 Jul 2023
    Apache Arrow
  • CSV or Parquet File Format
    3 projects | /r/Python | 1 Jun 2023
    In fact, I have asked on the Apache Arrow GitHub how to read a selected column of a particular row group of a Parquet file (a sketch of one way to do this follows this list): https://github.com/apache/arrow/issues/35688
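A few of the Arrow posts above lend themselves to short sketches. First, on the Polars post: because Polars and pyarrow share the Arrow columnar memory layout, moving tables between them is largely a metadata operation rather than a copy. A minimal sketch with invented column names:

    # Sketch: round-tripping data between pyarrow and Polars, which both use the
    # Arrow columnar memory format, so conversions are cheap (often zero-copy).
    import polars as pl
    import pyarrow as pa

    arrow_table = pa.table({"city": ["Oslo", "Lima"], "temp_c": [3.5, 24.0]})

    df = pl.from_arrow(arrow_table)            # Arrow table -> Polars DataFrame
    warm = df.filter(pl.col("temp_c") > 10)    # query with the Polars engine

    back_to_arrow = warm.to_arrow()            # hand the result back to Arrow
    print(back_to_arrow.schema)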
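On the pandas 2.0 point: the Arrow-backed dtypes are indeed opt-in, typically via the dtype_backend argument on readers or an explicit ArrowDtype. A small sketch:

    # Sketch: opting in to pandas 2.x Arrow-backed dtypes (NumPy stays the default).
    import io
    import pandas as pd
    import pyarrow as pa

    csv = io.StringIO("name,score\nada,10\ngrace,\n")

    # Opt in per call: columns come back as Arrow-backed extension dtypes.
    df = pd.read_csv(csv, dtype_backend="pyarrow")
    print(df.dtypes)  # e.g. a pyarrow-backed string for name, int64[pyarrow] for score

    # Opt in per column with an explicit ArrowDtype.
    s = pd.Series(["a", "b", None], dtype=pd.ArrowDtype(pa.string()))
    print(s.dtype)    # string[pyarrow]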
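The dangling pyarrow.from_numpy_dtype reference in the O3DE thread maps a NumPy dtype to the corresponding Arrow type, which is useful when bridging NumPy-based code into Arrow. A tiny sketch:

    # Sketch: mapping NumPy dtypes to Arrow types with pyarrow.from_numpy_dtype.
    import numpy as np
    import pyarrow as pa

    print(pa.from_numpy_dtype(np.dtype("int32")))           # int32
    print(pa.from_numpy_dtype(np.dtype("float64")))         # double
    print(pa.from_numpy_dtype(np.dtype("datetime64[ms]")))  # timestamp[ms]

    # Arrays built from NumPy data pick up the equivalent Arrow type automatically.
    arr = pa.array(np.arange(3, dtype="int32"))
    print(arr.type)  # int32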
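On the AWS Data Wrangler post: the library exposes a pandas-in, pandas-out style for S3, with Arrow handling the Parquet encoding underneath. A minimal sketch; the bucket path is a placeholder and the calls assume AWS credentials are configured in the environment:

    # Sketch: an awswrangler S3 round trip of the kind described above.
    # The bucket/prefix is a placeholder; credentials come from the usual AWS config.
    import awswrangler as wr
    import pandas as pd

    df = pd.DataFrame({"id": [1, 2], "name": ["ada", "grace"]})

    # Write the frame to S3 as a Parquet dataset.
    wr.s3.to_parquet(df=df, path="s3://example-bucket/demo/", dataset=True)

    # Read it back, pushing down a column selection.
    out = wr.s3.read_parquet(path="s3://example-bucket/demo/", columns=["name"])
    print(out)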
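Finally, on the question in the last post (reading selected columns of a particular row group of a Parquet file), pyarrow.parquet.ParquetFile exposes exactly that; a short self-contained sketch:

    # Sketch: reading only one column from one row group of a Parquet file.
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({"id": list(range(10)), "value": [i * 1.5 for i in range(10)]})
    pq.write_table(table, "groups.parquet", row_group_size=4)  # forces 3 row groups

    pf = pq.ParquetFile("groups.parquet")
    print(pf.num_row_groups)                          # 3
    subset = pf.read_row_group(1, columns=["value"])  # second row group, one column only
    print(subset.to_pydict())                         # {'value': [6.0, 7.5, 9.0, 10.5]}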

What are some alternatives?

When comparing hudi and Apache Arrow you can also consider the following projects:

iceberg - Apache Iceberg

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

h5py - HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.

Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing

FlatBuffers - FlatBuffers: Memory Efficient Serialization Library

polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust

ClickHouse - ClickHouse® is a free analytics DBMS for big data

kudu - Mirror of Apache Kudu

beam - Apache Beam is a unified programming model for Batch and Streaming data processing.

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

ta-lib-python - Python wrapper for TA-Lib (http://ta-lib.org/).

debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.