Weka vs Apache Spark

Compare Weka and Apache Spark and see what their differences are.

Weka

By bnjmn

Apache Spark

Apache Spark - A unified analytics engine for large-scale data processing (by apache)

                     Weka                 Apache Spark
Mentions             0                    29
Stars                302                  31,885
Growth               -                    1.5%
Activity             0.0                  10.0
Latest commit        almost 3 years ago   4 days ago
Language             PostScript           Scala
License              -                    Apache License 2.0
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
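The site does not publish the exact formula, but a recency-weighted commit score mapped to a decile-style rank would behave as described. A hypothetical sketch (the half-life and the rank mapping are assumptions, not the real implementation):

```python
from datetime import datetime

HALF_LIFE_DAYS = 30.0  # assumed decay rate; the real weighting is unpublished

def commit_weight(commit_time: datetime, now: datetime) -> float:
    """Weight a commit by age with exponential decay: recent commits count more."""
    age_days = (now - commit_time).total_seconds() / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def activity_score(raw: float, all_raw: list[float]) -> float:
    """Map a project's raw weighted score to a 0.0-10.0 rank among all projects."""
    below = sum(1 for s in all_raw if s < raw)
    return round(10.0 * below / len(all_raw), 1)
```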

Weka

Posts with mentions or reviews of Weka. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning Weka yet.
Tracking mentions began in Dec 2020.

Apache Spark

Posts with mentions or reviews of Apache Spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-22.
  • Spark for beginners - and you
    3 projects | dev.to | 22 Dec 2021
    Spark
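As a taste of what such a beginners' walkthrough covers, here is a minimal PySpark "hello world" (the column names and values are illustrative, not taken from the linked post):

```python
from pyspark.sql import SparkSession

# Start a local Spark session; on a cluster the master is set by the deployment.
spark = SparkSession.builder.master("local[*]").appName("hello-spark").getOrCreate()

# A tiny DataFrame, plus the kind of transformation Spark distributes at scale.
df = spark.createDataFrame(
    [("alice", 34), ("bob", 29), ("carol", 41)],
    ["name", "age"],
)
df.filter(df.age > 30).show()

spark.stop()
```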
  • Jinja2 not formatting my text correctly. Any advice?
    11 projects | reddit.com/r/learnpython | 10 Dec 2021
    ListItem(
        name='Apache Spark',
        website='https://spark.apache.org/',
        category='Batch Processing',
        short_description='Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.',
    ),
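For context, a minimal sketch of rendering records like this with Jinja2 (the ListItem dataclass and the template below are hypothetical stand-ins, not the original poster's code):

```python
from dataclasses import dataclass
from jinja2 import Template

# Hypothetical stand-in for the poster's ListItem record.
@dataclass
class ListItem:
    name: str
    website: str
    category: str
    short_description: str

items = [
    ListItem(
        name='Apache Spark',
        website='https://spark.apache.org/',
        category='Batch Processing',
        short_description='Unified analytics engine for large-scale data processing.',
    ),
]

# Whitespace handling inside the template is a common source of
# "not formatting correctly" surprises with Jinja2.
template = Template(
    "{% for item in items %}"
    "- [{{ item.name }}]({{ item.website }}): {{ item.short_description }}\n"
    "{% endfor %}"
)
print(template.render(items=items))
```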
  • Is the pandas API (formerly Koalas) fully compatible with vanilla pandas?
    1 project | reddit.com/r/apachespark | 8 Dec 2021
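For background on the question: since Spark 3.2, the Koalas project ships inside Spark as pyspark.pandas, which covers much (though not all) of the pandas API. A minimal sketch, with a placeholder file path:

```python
# Requires Spark 3.2+, where Koalas was merged in as pyspark.pandas.
import pyspark.pandas as ps

# Much of the pandas API carries over; 'data.csv' is a placeholder path.
df = ps.read_csv("data.csv")
print(df.groupby("category")["value"].mean())

# Drop to a real pandas DataFrame when an API is missing
# (this collects the distributed data to the driver).
pdf = df.to_pandas()
```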
  • Dreaming and Breaking Molds – Establishing Best Practices with Scott Haines
    3 projects | dev.to | 8 Dec 2021
    For example, when I was at Yahoo, we did a lot of things where we had the ability to basically process data in stream. But we didn't have repeatable libraries we could easily use. So we had to invent everything. So it was like, oh, we want to create a session. So somebody starts a user journey, where do they go within a journey? And is it all within a 15 to 30-minute timeout from the last event? How do we understand how people are using something or interacting with it? And those types of things are a lot more difficult than when we're like oh, we could do it like X, Y, or Z. And that stuff was just for free when we started using Spark.
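The sessionization described here (grouping a user's events that fall within a timeout of the last event) is built into Spark 3.2+ as session windows. A sketch with a 30-minute gap; the input path and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import session_window, count

spark = SparkSession.builder.appName("sessionize").getOrCreate()

# Hypothetical event log with 'user_id' and 'event_time' columns.
events = spark.read.parquet("s3://bucket/events/")  # placeholder path

# A session closes when no event arrives within 30 minutes of the last one.
sessions = (
    events
    .groupBy("user_id", session_window("event_time", "30 minutes"))
    .agg(count("*").alias("events_in_session"))
)
sessions.show()
```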
  • Show HN: Box – Data Transformation Pipelines in Rust DataFusion
    4 projects | news.ycombinator.com | 30 Nov 2021
    A while ago I posted a link to [Arc](https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines which execute against [Apache Spark](https://spark.apache.org/).

    Today I would like to present a proof-of-concept implementation of the [Arc declarative ETL framework](https://arc.tripl.ai) against [Apache DataFusion](https://arrow.apache.org/datafusion/), an ANSI SQL (Postgres-dialect) execution engine based on Apache Arrow and built with Rust.

    The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project to allow changing execution engines without having to rewrite the base business logic (the part that is valuable to your business). Instead, by defining an abstraction layer, we can change the execution engine and run the same logic with different execution characteristics.

    The benefit of DataFusion over Apache Spark is a significant increase in speed and a reduction in execution resource requirements. Even through a Docker-for-Mac inefficiency layer, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, an end-to-end execution time of 0.5 seconds for the same example job (TPC-H) is possible. (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

    The purpose of this post is to gather feedback from the community: whether you would use a tool like this, what features would be required for you to use it (MVP), and whether you would be interested in contributing to the project. I would also like to highlight the excellent work being done by the DataFusion/Arrow (and Apache) community in providing such amazing tools to us all as open-source projects.
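To illustrate the abstraction-layer idea (this is not Arc's actual configuration syntax; the stage types and engine below are invented for illustration), the pipeline can be plain data that interchangeable engines interpret:

```python
# The pipeline is inert data; engines interpret it.
pipeline = [
    {"type": "extract", "format": "csv", "path": "input.csv"},
    {"type": "sql", "query": "SELECT category, SUM(value) FROM df GROUP BY category"},
    {"type": "load", "format": "parquet", "path": "output.parquet"},
]

def run(pipeline, engine):
    """Interpret the same pipeline definition against any engine."""
    for stage in pipeline:
        engine[stage["type"]](stage)

# A stand-in engine that just logs; a Spark or DataFusion engine would
# map the same stages to spark.read/spark.sql or DataFusion calls instead.
logging_engine = {
    "extract": lambda s: print(f"extract {s['format']} from {s['path']}"),
    "sql":     lambda s: print(f"run query: {s['query']}"),
    "load":    lambda s: print(f"write {s['format']} to {s['path']}"),
}

run(pipeline, logging_engine)
```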

  • Technology Advice
    1 project | reddit.com/r/dataengineering | 3 Nov 2021
    Have a look at Apache Spark
  • Spark is lit once again
    6 projects | dev.to | 29 Oct 2021
    Here at Exacaster, Spark applications have been used extensively for years. We started using them on our Hadoop clusters with YARN as the application manager. However, with our recent product, we started moving towards a cloud-based solution and decided to use Kubernetes for our infrastructure needs.
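For reference, pointing a Spark application at Kubernetes instead of YARN looks roughly like this (production jobs are typically launched via spark-submit with --master k8s://...; the API server address and container image below are placeholders):

```python
from pyspark.sql import SparkSession

# Client-mode sketch; executors run as Kubernetes pods.
spark = (
    SparkSession.builder
    .master("k8s://https://<k8s-apiserver-host>:6443")  # placeholder API server
    .appName("on-kubernetes")
    .config("spark.executor.instances", "3")
    .config("spark.kubernetes.container.image", "<registry>/spark-py:3.2.0")  # placeholder image
    .getOrCreate()
)
```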
  • What is B2D Sector?
    12 projects | dev.to | 17 Oct 2021
    Example tools: TensorFlow, Tableau, Apache Spark, MATLAB, Jupyter
  • Why should I invest in raptoreum? What makes it different
    1 project | reddit.com/r/raptoreum | 25 Sep 2021
    For your first question, if you are interested I encourage you to read the smart contracts paper here: https://docs.raptoreum.com/_media/Raptoreum_Contracts_EN.pdf and then to dig into what Apache Spark can do here: https://spark.apache.org/
  • How to use Spark and Pandas to prepare big data
    3 projects | dev.to | 21 Sep 2021
    Apache Spark is one of the most actively developed open-source projects in big data. The following code examples require that you have Spark set up and can execute Python code using the PySpark library. The examples also require that you have your data in Amazon S3 (Simple Storage Service). All this is set up on AWS EMR (Elastic MapReduce).
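In that spirit, a minimal sketch of the pattern: read and reduce the big data with PySpark on S3, then hand only the small result to pandas (the bucket and key are placeholders; this assumes an EMR-like environment with S3 credentials already configured):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("prep-big-data").getOrCreate()

# Read raw CSV from S3; bucket and key are placeholders.
df = spark.read.csv("s3://my-bucket/raw/events.csv", header=True, inferSchema=True)

# Do the heavy aggregation in Spark, where the data is big...
daily = df.groupBy("event_date").count()

# ...then collect only the small result into pandas for local analysis.
pdf = daily.toPandas()
print(pdf.head())
```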

What are some alternatives?

When comparing Weka and Apache Spark you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a modular and tiny C++ library for running math code; and a Java-based math library on top of the core C++ library. Also includes SameDiff, a PyTorch/TensorFlow-like library for running deep learning with automatic differentiation.

Scalding - A Scala API for Cascading

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services

Smile - Statistical Machine Intelligence & Learning Engine

Apache Hadoop - A framework for distributed storage and processing of large data sets across clusters of computers

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

Apache Calcite - A dynamic data management framework providing SQL parsing and query optimization

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.

H2O - Sparkling Water provides H2O functionality inside a Spark cluster