Apache Arrow vs Apache Spark

Compare Apache Arrow vs Apache Spark and see what their differences are.

Apache Arrow

Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing (by apache)

Apache Spark

Apache Spark - A unified analytics engine for large-scale data processing (by apache)
                Apache Arrow          Apache Spark
Mentions        29                    30
Stars           9,007                 31,940
Growth          3.2%                  1.6%
Activity        10.0                  10.0
Latest commit   1 day ago             1 day ago
Language        C++                   Scala
License         Apache License 2.0    Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Apache Arrow

Posts with mentions or reviews of Apache Arrow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-16.
  • What is a library in another language that you wish would exist in Haskell
    1 project | reddit.com/r/haskell | 21 Jan 2022
    I took a deeper look and found that Julia has gained great coverage of the Arrow data standard: https://github.com/apache/arrow/tree/master/julia/Arrow
  • Awkward: Nested, jagged, differentiable, mixed type, GPU-enabled, JIT'd NumPy
    5 projects | news.ycombinator.com | 16 Dec 2021
    Hi! I'm the original author of Awkward Array (Jim Pivarski), though there are now many contributors with about five regulars. Two of my colleagues just pointed me here—I'm glad you're interested! I can answer any questions you have about it.

    First, sorry about all the TODOs in the documentation: I laid out a table of contents structure as a reminder to myself of what ought to be written, but haven't had a chance to fill in all of the topics. From the front page (https://awkward-array.org/), if you click through to the Python API reference (https://awkward-array.readthedocs.io/), that site is 100% filled in. Like NumPy, the library consists of one basic data type, `ak.Array`, and a suite of functions that act on it, `ak.this` and `ak.that`. All of those functions are individually documented, and many have examples.

    The basic idea starts with a data structure like Apache Arrow (https://arrow.apache.org/)—a tree of general, variable-length types, organized in memory as a collection of columnar arrays—but performs operations on the data without ever taking it out of its columnar form. (3.5 minute explanation here: https://youtu.be/2NxWpU7NArk?t=661) Those columnar operations are compiled (in C++); there's a core of structure-manipulation functions suggestively named "cpu-kernels" that will also be implemented in CUDA (some already have, but that's in an experimental stage).

    A key aspect of this is that structure can be manipulated just by changing values in some internal arrays and rearranging the single tree organizing those arrays. If, for instance, you want to replace a bunch of objects in variable-length lists with another structure, it never needs to instantiate those objects or lists as explicit types (e.g. `struct` or `std::vector`), and so the functions don't need to be compiled for specific data types. You can define any new data types at runtime and the same compiled functions apply. Therefore, JIT compilation is not necessary.

    We do have Numba extensions so that you can iterate over runtime-defined data types in JIT-compiled Numba, but that's a second way to manipulate the same data. By analogy with NumPy, you can compute many things using NumPy's precompiled functions, as long as you express your workflow in NumPy's vectorized way. Numba additionally allows you to express your workflow in imperative loops without losing performance. It's the same way with Awkward Array: unpacking a million record structures or slicing a million variable-length lists in a single function call makes use of some precompiled functions (no JIT), but iterating over them at scale with imperative for loops requires JIT-compilation in Numba.

    Just as we work with Numba to provide both of these programming styles—array-oriented and imperative—we'll also be working with JAX to add autodifferentiation (Anish Biswas will be starting on this in January; he's actually continuing work from last spring, but in a different direction). We're also working with Martin Durant and Doug Davis to replace our homegrown lazy arrays with industry-standard Dask, as a new collection type (https://github.com/ContinuumIO/dask-awkward/). A lot of my time, with Ianna Osborne and Ioana Ifrim at my university, is being spent refactoring the internals to make these kinds of integrations easier (https://indico.cern.ch/event/855454/contributions/4605044/). We found that we had implemented too much in C++ and need more, but not all, of the code to be in Python to be able to interact with third-party libraries.

    If you have any other questions, I'd be happy to answer them!
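
A minimal sketch of the array-oriented style described in the post above, assuming the awkward package is installed (the record values here are made up):

    import awkward as ak

    # A jagged array: variable-length lists of records, stored columnar under the hood.
    records = ak.Array([
        [{"x": 1.1, "y": 1}, {"x": 2.2, "y": 2}],
        [],
        [{"x": 3.3, "y": 3}],
    ])

    xs = records["x"]                        # [[1.1, 2.2], [], [3.3]]: column access, no per-record objects
    lengths = ak.num(records)                # [2, 0, 1]: length of each inner list
    selected = records[records["x"] > 2.0]   # slice by a jagged boolean mask, no JIT compilation needed

    print(xs.tolist(), lengths.tolist(), selected.tolist())

The same precompiled kernels handle these operations regardless of the record layout, which is the point the author makes about not needing per-type compilation.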

  • Test Parquet float16 Support in Pandas
    3 projects | dev.to | 14 Dec 2021
    https://github.com/apache/arrow/issues/2691 https://issues.apache.org/jira/browse/ARROW-7242 https://issues.apache.org/jira/browse/PARQUET-1647
  • Any role that Rust could have in the Data world (Big Data, Data Science, Machine learning, etc.)?
    8 projects | reddit.com/r/rust | 4 Dec 2021
    Yes https://arrow.apache.org/
  • pigeon-rs: Open source email automation written in Rust
    5 projects | reddit.com/r/rust | 20 Nov 2021
    Connectorx uses the arrow2 data format for fetching from a database. This data format is optimized for columnar data [1].
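
A minimal sketch of what that looks like from Python, assuming the connectorx package and its read_sql function with an Arrow return type (the connection string, table, and columns below are placeholders):

    import connectorx as cx

    # Hypothetical Postgres connection string; replace with your own.
    conn = "postgresql://user:password@localhost:5432/mydb"

    # Fetch query results directly into Arrow's columnar format
    # instead of materializing row-by-row Python objects.
    table = cx.read_sql(conn, "SELECT id, amount FROM orders", return_type="arrow")
    print(table.schema)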
  • Introducing tidypolars - a Python data frame package for R tidyverse users
    9 projects | reddit.com/r/rstats | 10 Nov 2021
    I think having a basic understanding of pandas, given how broadly it's used, is beneficial. That being said, polars seems to be matching or beating data.table in performance, so I think it'd be very worth it to take it up. Wes McKinney, creator of pandas, has been quite vocal about the architectural flaws of pandas -- which is why he's been working on the Arrow project. polars is based on Arrow, so in principle it's kinda like pandas 2.0 (adopting the changes that Wes proposed).
    9 projects | reddit.com/r/rstats | 10 Nov 2021
    So the question really is: how is polars so fast? Polars is backed by Apache Arrow, a columnar memory format designed specifically for performance.
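
A minimal polars sketch showing the Arrow-backed, expression-oriented style (the column names and values are made up):

    import polars as pl

    df = pl.DataFrame({
        "species": ["setosa", "virginica", "setosa"],
        "petal_len": [1.4, 5.1, 1.3],
    })

    # Expressions are evaluated over Arrow-backed columns rather than Python row objects.
    out = df.filter(pl.col("petal_len") > 1.35).select([
        pl.col("species"),
        (pl.col("petal_len") * 10).alias("petal_len_mm"),
    ])
    print(out)

    # Because the in-memory layout is Arrow, handing the data to pyarrow is cheap.
    arrow_table = df.to_arrow()
    print(arrow_table.schema)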
  • Comparing SQLite, DuckDB and Arrow
    5 projects | news.ycombinator.com | 27 Oct 2021
  • The Data Engineer Roadmap 🗺
    11 projects | dev.to | 19 Oct 2021
    Apache Arrow
  • C++ Jobs - Q4 2021
    4 projects | reddit.com/r/cpp | 2 Oct 2021
    Technologies: Apache Arrow, Flatbuffers, C++ Actor Framework, Linux, Docker, Kubernetes

Apache Spark

Posts with mentions or reviews of Apache Spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-22.
  • Will Julia become the main language for data processing in the next 2-3 years?
    1 project | reddit.com/r/dataengineering | 27 Jan 2022
    Where is your winner C++ code in PySpark? https://github.com/apache/spark/tree/master/python/pyspark
  • Spark for beginners - and you
    3 projects | dev.to | 22 Dec 2021
    Spark
  • Jinja2 not formatting my text correctly. Any advice?
    11 projects | reddit.com/r/learnpython | 10 Dec 2021
    ListItem(name='Apache Spark', website='https://spark.apache.org/', category='Batch Processing', short_description='Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.'),
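
To make the "implicit data parallelism" part of that description concrete, a minimal PySpark sketch (assuming a local Spark installation; the data is made up):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("demo").getOrCreate()

    # The same code runs locally or on a cluster; Spark partitions the work across executors.
    df = spark.createDataFrame(
        [("logs", 120), ("logs", 80), ("metrics", 42)],
        ["topic", "bytes"],
    )

    df.groupBy("topic").agg(F.sum("bytes").alias("total_bytes")).show()
    spark.stop()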
  • Is the pandas API (formerly Koalas) fully compatible with vanilla pandas?
    1 project | reddit.com/r/apachespark | 8 Dec 2021
  • Dreaming and Breaking Molds – Establishing Best Practices with Scott Haines
    3 projects | dev.to | 8 Dec 2021
    For example, when I was at Yahoo, we did a lot of things where we had the ability to basically process data in stream. But we didn't have repeatable libraries we could easily use. So we had to invent everything. So it was like, oh, we want to create a session. So somebody starts a user journey, where do they go within a journey? And is it all within a 15 to 30-minute timeout from the last event? How do we understand how people are using something or interacting with it? And those types of things are a lot more difficult than when we're like oh, we could do it like X, Y, or Z. And that stuff was just for free when we started using Spark.
  • Show HN: Box – Data Transformation Pipelines in Rust DataFusion
    4 projects | news.ycombinator.com | 30 Nov 2021
    A while ago I posted a link to [Arc](https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines which execute against [Apache Spark](https://spark.apache.org/).

    Today I would like to present a proof-of-concept implementation of the [Arc declarative ETL framework](https://arc.tripl.ai) against [Apache DataFusion](https://arrow.apache.org/datafusion/), which is an ANSI SQL (Postgres) execution engine based upon Apache Arrow and built with Rust.

    The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project to allow changing execution engines without having to rewrite the base business logic (the part that is valuable to your business). Instead, by defining an abstraction layer, we can change the execution engine and run the same logic with different execution characteristics.

    The benefit of DataFusion over Apache Spark is a significant increase in speed and a reduction in execution resource requirements. Even through a Docker-for-Mac inefficiency layer, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, an end-to-end execution time of 0.5 seconds for the same example job (TPC-H) is possible. (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

    The purpose of this post is to gather feedback from the community whether you would use a tool like this, what features would be required for you to use it (MVP) or whether you would be interested in contributing to the project. I would also like to highlight the excellent work being done by the DataFusion/Arrow (and Apache) community for providing such amazing tools to us all as open source projects.
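
For context, a minimal sketch of running SQL over Arrow data with DataFusion's Python bindings (the file and column names are placeholders, and the exact entry point, SessionContext in recent releases versus the older ExecutionContext, depends on the datafusion version installed):

    import datafusion

    ctx = datafusion.SessionContext()
    ctx.register_csv("lineitem", "lineitem.csv")  # hypothetical TPC-H style input file

    # SQL is planned and executed over Arrow record batches, no JVM involved.
    batches = ctx.sql(
        "SELECT l_returnflag, SUM(l_quantity) AS qty "
        "FROM lineitem GROUP BY l_returnflag"
    ).collect()

    for batch in batches:
        print(batch)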

  • Technology Advice
    1 project | reddit.com/r/dataengineering | 3 Nov 2021
    Have a look at Apache Spark
  • Spark is lit once again
    6 projects | dev.to | 29 Oct 2021
    Here at Exacaster, Spark applications have been used extensively for years. We started using them on our Hadoop clusters with YARN as the application manager. However, with our recent product, we started moving towards a cloud-based solution and decided to use Kubernetes for our infrastructure needs.
  • What is B2D Sector?
    12 projects | dev.to | 17 Oct 2021
    Example tools: TensorFlow, Tableau, Apache Spark, MATLAB, Jupyter
  • Why should I invest in raptoreum? What makes it different
    1 project | reddit.com/r/raptoreum | 25 Sep 2021
    For your first question, if you are interested I encourage you to read the smart contracts paper here: https://docs.raptoreum.com/_media/Raptoreum_Contracts_EN.pdf and then to dig into what Apache Spark can do here: https://spark.apache.org/

What are some alternatives?

When comparing Apache Arrow and Apache Spark you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

Scalding - A Scala API for Cascading

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services

Smile - Statistical Machine Intelligence & Learning Engine

Weka

h5py - HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

Apache Calcite - Dynamic data management framework

polars - Fast multi-threaded DataFrame library in Rust | Python | Node.js

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.

Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch, a modular and tiny C++ library for running math code, and a Java-based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.