Apache Spark VS Scalding

Compare Apache Spark and Scalding, and see how they differ.

Apache Spark

Apache Spark - A unified analytics engine for large-scale data processing (by apache)
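
As a quick, hedged illustration of what "a unified analytics engine for large-scale data processing" looks like from Scala, here is a minimal word-count application against Spark's core RDD API. The object name and the positional input/output paths are placeholders, not anything taken from this page.

```scala
import org.apache.spark.sql.SparkSession

// Minimal stand-alone Spark application: count word occurrences in a text dataset.
object SparkWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SparkWordCount").getOrCreate()
    val sc = spark.sparkContext

    sc.textFile(args(0))                          // read lines from a (distributed) text file
      .flatMap(_.toLowerCase.split("\\s+"))       // tokenize each line
      .filter(_.nonEmpty)
      .map(word => (word, 1L))                    // pair each word with a count of 1
      .reduceByKey(_ + _)                         // sum counts per word across partitions
      .saveAsTextFile(args(1))                    // write one output part file per partition

    spark.stop()
  }
}
```

The same pipeline could equally be written with the DataFrame/SQL API; the RDD form is shown because it maps most directly onto the Scalding sketch further down the page.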
                 Apache Spark         Scalding
Mentions         101                  -
Stars            38,249               3,469
Growth           1.0%                 0.1%
Activity         10.0                 2.5
Latest commit    6 days ago           11 months ago
Language         Scala                Scala
License          Apache License 2.0   Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
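
This page does not publish the exact formula behind the activity number, so the following sketch is only an illustration of the idea described above: each commit contributes a weight that decays with age, and the 0-10 value reflects a project's percentile rank among all tracked projects. The 30-day half-life and the percentile mapping are assumptions made for the example.

```scala
import java.time.{Duration, Instant}

// Illustrative recency-weighted activity score; the real weighting used by the site may differ.
object ActivityScore {
  val halfLifeDays = 30.0  // assumed: a 30-day-old commit counts half as much as one made today

  // Raw score: sum of exponentially decayed weights over a project's commit timestamps.
  def raw(commitTimes: Seq[Instant], now: Instant = Instant.now()): Double =
    commitTimes.map { t =>
      val ageDays = Duration.between(t, now).toMillis / 86400000.0
      math.pow(0.5, ageDays / halfLifeDays)
    }.sum

  // Map a percentile rank (0.0 to 1.0 among all tracked projects) onto the 0-10 scale,
  // so the top 10% of projects land at 9.0 or above.
  def toScale(percentile: Double): Double =
    math.round(percentile * 100) / 10.0
}
```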

Apache Spark

Posts with mentions or reviews of Apache Spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.

Scalding

Posts with mentions or reviews of Scalding. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning Scalding yet.
Tracking mentions began in Dec 2020.
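
For contrast with the Spark sketch above, here is a word-count job in the style of the example from Scalding's own README, using the TypedPipe API; the --input and --output arguments are placeholders. Unlike Spark, which ships its own execution engine, Scalding compiles jobs like this into Cascading flows that run on Hadoop MapReduce.

```scala
import com.twitter.scalding._

// Word count with Scalding's typed API, patterned on the project's README example.
class WordCountJob(args: Args) extends Job(args) {
  TypedPipe.from(TextLine(args("input")))
    .flatMap { line => tokenize(line) }
    .groupBy { word => word }   // group occurrences by the word itself
    .size                       // count the size of each group
    .write(TypedTsv[(String, Long)](args("output")))

  // Split a line into lowercase alphanumeric tokens.
  def tokenize(text: String): Array[String] =
    text.toLowerCase.replaceAll("[^a-zA-Z0-9\\s]", "").split("\\s+")
}
```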

What are some alternatives?

When comparing Apache Spark and Scalding you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

Deeplearning4j - A suite of tools for deploying and training deep learning models on the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a small, modular C++ library for running math code; and a Java math library built on top of that C++ core. Also includes SameDiff, a PyTorch/TensorFlow-like library for running deep learning with automatic differentiation.

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

spark-deployer - Deploy a Spark cluster in an easy way.

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

Scrunch - Mirror of Apache Crunch (Incubating)

mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services

Apache Flink - Stateful computations over data streams

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.

Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

GridScale - A Scala library for accessing various file systems, batch systems, job schedulers, and grid middleware.