Apache Spark VS Apache Flink

Compare Apache Spark vs Apache Flink and see what their differences are.

Apache Spark

Apache Spark - A unified analytics engine for large-scale data processing (by apache)
Metric          Apache Spark          Apache Flink
Mentions        101                   9
Stars           38,104                23,039
Growth          1.1%                  1.1%
Activity        10.0                  9.9
Latest commit   5 days ago            about 10 hours ago
Language        Scala                 Java
License         Apache License 2.0    Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

Apache Spark

Posts with mentions or reviews of Apache Spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.

Apache Flink

Posts with mentions or reviews of Apache Flink. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-15.
  • First 15 Open Source Advent projects
    16 projects | dev.to | 15 Dec 2023
    7. Apache Flink | Github | tutorial
  • I keep getting build failure when I try to run mvn clean compile package
    2 projects | /r/AskProgramming | 8 Apr 2023
    I'm trying to use https://github.com/mauricioaniche/ck to analyze the CK metrics of https://github.com/apache/flink. I have the latest versions of Java and Apache Maven installed, my environment variables are set correctly, and I'm in the correct directory. However, when I run mvn clean compile package in PowerShell it always fails with a build error. I've tried looking up the errors, but there are so many. https://imgur.com/a/Zk8Snsa I'm very new to programming in general, so any suggestions would be appreciated.
  • We Are Changing the License for Akka
    6 projects | news.ycombinator.com | 7 Sep 2022
  • DeWitt Clause, or Can You Benchmark %DATABASE% and Get Away With It
    21 projects | dev.to | 2 Jun 2022
    Apache Drill, Druid, Flink, Hive, Kafka, Spark
  • Computation reuse via fusion in Amazon Athena
    2 projects | news.ycombinator.com | 20 May 2022
    It took me some time to get a good grasp of the power of SQL, and it really kicked in when I learned about optimization rules. A query is a program that the engine rewrites, much like an optimizing compiler would.

    You state what you want; there are different ways to fetch, match, and massage the data; and the planner searches through this space to produce a physical plan. Ideally it uses knowledge such as table statistics to weight which parts to optimize (much as Java's JIT detects hot spots).

    I find it fascinating to peer through database code to see what is going on. Lately, there have been new advances toward streaming databases, which bring a whole new design space. For example, you now optimize for the latency of individual new rows, as opposed to processing a dataset as a whole batch and optimizing its overall latency. Batch scanning benefits from better use of your CPU caches.

    And maybe you could have a hybrid system that reads history from a log and aggregates it in a batched manner, then switches to another execution plan when it reaches the end of the log.

    If you want to have a peek at that, here are Flink's set of rules [1], both generic and stream-specific ones. The names can be cryptic, but they usually give a good sense of what is going on. For example, PushFilterIntoTableSourceScanRule makes the WHERE clause apply as early as possible, to save some CPU/network bandwidth further down. PushPartitionIntoTableSourceScanRule tries to make a fan-out/shuffle happen as early as possible, so that parallelism can be exploited. (A small, hedged sketch of inspecting such a plan with EXPLAIN follows this list of posts.)

    [1] https://github.com/apache/flink/blob/5f8fb304fb5d68cdb0b3e3c...

  • Avro SpecificRecord File Sink using apache flink is not compiling due to error incompatible types: FileSink<?> cannot be converted to SinkFunction<?>
    3 projects | /r/apacheflink | 14 Sep 2021
    [1]: https://mvnrepository.com/artifact/org.apache.avro/avro-maven-plugin/1.8.2 [2]: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/sink/FileSink.java [3]: https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/datastream/file_sink/ [4]: https://github.com/apache/flink/blob/c81b831d5fe08d328251d91f4f255b1508a9feb4/flink-end-to-end-tests/flink-file-sink-test/src/main/java/FileSinkProgram.java [5]: https://github.com/rajcspsg/streaming-file-sink-demo
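
    A hedged sketch of what that compile error usually means, not the original poster's code: in Flink's DataStream API, FileSink implements the unified Sink interface and has to be attached with sinkTo(...), while addSink(...) only accepts the legacy SinkFunction, which FileSink does not implement. Here MyAvroRecord is a hypothetical Avro-generated SpecificRecord class and the output path is arbitrary.

        import org.apache.flink.connector.file.sink.FileSink;
        import org.apache.flink.core.fs.Path;
        import org.apache.flink.formats.avro.AvroWriters;

        public class AvroFileSinkSketch {
            // Builds a bulk-format FileSink writing Avro files for the
            // (hypothetical) generated SpecificRecord class MyAvroRecord.
            static FileSink<MyAvroRecord> buildSink() {
                return FileSink
                        .forBulkFormat(new Path("/tmp/avro-out"),
                                       AvroWriters.forSpecificRecord(MyAvroRecord.class))
                        .build();
            }

            // Given a DataStream<MyAvroRecord> named records:
            //   records.sinkTo(buildSink());   // compiles: FileSink is a unified Sink
            //   records.addSink(buildSink());  // compile error: addSink expects a SinkFunction
        }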

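Returning to the optimizer-rules comment above, here is a minimal, hedged sketch of how one can inspect the plan Flink's planner produces for a query with a WHERE clause. The table name, columns, and the built-in datagen connector are placeholders chosen only to keep the example self-contained; whether the predicate actually ends up inside the table scan (via rules like PushFilterIntoTableSourceScanRule) depends on the connector supporting filter pushdown, which datagen does not.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class FilterPushdownSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());

            // A throwaway table on the built-in 'datagen' connector,
            // just to have something to scan.
            tEnv.executeSql(
                    "CREATE TABLE orders (order_id BIGINT, amount DOUBLE) " +
                    "WITH ('connector' = 'datagen')");

            // EXPLAIN prints the optimized plan. For a source that supports filter
            // pushdown, the WHERE predicate is moved into the table source scan
            // instead of being applied by a separate operator afterwards.
            tEnv.executeSql("EXPLAIN SELECT order_id FROM orders WHERE amount > 100").print();
        }
    }
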
What are some alternatives?

When comparing Apache Spark and Apache Flink you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

Scalding - A Scala API for Cascading

mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

Weka - A collection of machine learning algorithms for data mining tasks, written in Java

Smile - Statistical Machine Intelligence & Learning Engine

Apache Calcite - A dynamic data management framework

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.