BigDL VS Apache Spark

Compare BigDL vs Apache Spark and see what their differences are.

BigDL

Building Large-Scale AI Applications for Distributed Big Data (by intel-analytics)

Apache Spark

Apache Spark - A unified analytics engine for large-scale data processing (by apache)
|             | BigDL              | Apache Spark       |
|-------------|--------------------|--------------------|
| Mentions    | 1                  | 25                 |
| Stars       | 3,802              | 31,485             |
| Growth      | 0.3%               | 1.0%               |
| Activity    | 10.0               | 10.0               |
| Last commit | 8 days ago         | 1 day ago          |
| Language    | Jupyter Notebook   | Scala              |
| License     | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have a higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

BigDL

Posts with mentions or reviews of BigDL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-05.

Apache Spark

Posts with mentions or reviews of Apache Spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-30.
  • Show HN: Box – Data Transformation Pipelines in Rust DataFusion
    4 projects | news.ycombinator.com | 30 Nov 2021
    A while ago I posted a link to [Arc](https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines which execute against [Apache Spark](https://spark.apache.org/).

    Today I would like to present a proof-of-concept implementation of the [Arc declarative ETL framework](https://arc.tripl.ai) against [Apache DataFusion](https://arrow.apache.org/datafusion/), which is an ANSI SQL (Postgres dialect) execution engine based on Apache Arrow and built in Rust.

    The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project, so that the execution engine could be changed without rewriting the base business logic (the part that is valuable to your business). By defining an abstraction layer, we can swap the execution engine and run the same logic with different execution characteristics (sketched below).

    The benefit of DataFusion over Apache Spark is a significant increase in speed and a reduction in execution resource requirements. Even through a Docker-for-Mac inefficiency layer, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, an end-to-end execution time of 0.5 seconds is possible for the same example job (TPC-H). (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

    The purpose of this post is to gather feedback from the community: whether you would use a tool like this, what features would be required for you to use it (an MVP), and whether you would be interested in contributing to the project. I would also like to highlight the excellent work being done by the DataFusion/Arrow (and wider Apache) community in providing such amazing tools to us all as open-source projects.
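    A minimal sketch of the engine-swapping idea, in PySpark (the query, table name, and path are illustrative placeholders, not Arc's actual configuration format):

    ```python
    from pyspark.sql import SparkSession

    # The business logic lives in declarative SQL; only the engine wiring changes.
    QUERY = """
        SELECT l_returnflag, SUM(l_extendedprice) AS revenue
        FROM lineitem
        GROUP BY l_returnflag
    """

    def run_on_spark(parquet_path: str):
        # Engine-specific wiring: retargeting to another engine (e.g. DataFusion)
        # means replacing only this function, never the QUERY above.
        spark = SparkSession.builder.appName("arc-style-pipeline").getOrCreate()
        spark.read.parquet(parquet_path).createOrReplaceTempView("lineitem")
        return spark.sql(QUERY).collect()
    ```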

  • Technology Advice
    1 project | reddit.com/r/dataengineering | 3 Nov 2021
    Have a look at Apache Spark
  • Spark is lit once again
    6 projects | dev.to | 29 Oct 2021
    Here at Exacaster, Spark applications have been used extensively for years. We started using them on our Hadoop clusters with YARN as the application manager. However, with our recent product we started moving towards a cloud-based solution and decided to use Kubernetes for our infrastructure needs.
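    A minimal sketch of pointing a Spark application at Kubernetes instead of YARN (the API-server URL, container image, and executor count are placeholders, not Exacaster's actual setup):

    ```python
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        # Hypothetical Kubernetes API server; a YARN deployment would use
        # .master("yarn") here instead.
        .master("k8s://https://kubernetes.example.com:6443")
        .appName("exacaster-style-job")
        .config("spark.kubernetes.container.image", "example/spark-py:3.2.0")
        .config("spark.executor.instances", "4")
        .getOrCreate()
    )
    ```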
  • What is B2D Sector?
    13 projects | dev.to | 17 Oct 2021
    Example tools: TensorFlow, Tableau, Apache Spark, MATLAB, Jupyter
  • Why should I invest in raptoreum? What makes it different
    1 project | reddit.com/r/raptoreum | 25 Sep 2021
    For your first question, if you are interested, I encourage you to read the smart contracts paper here: https://docs.raptoreum.com/_media/Raptoreum_Contracts_EN.pdf and then dig into what Apache Spark can do here: https://spark.apache.org/
  • How to use Spark and Pandas to prepare big data
    3 projects | dev.to | 21 Sep 2021
    Apache Spark is one of the most actively developed open-source projects in big data. The following code examples require that you have Spark set up and can execute Python code using the PySpark library. The examples also require that you have your data in Amazon S3 (Simple Storage Service). All this is set up on AWS EMR (Elastic MapReduce).
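    A minimal sketch of the setup those examples assume: loading data from S3 into a PySpark DataFrame, then handing a small sample to pandas (the bucket and key are placeholders):

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("prepare-big-data").getOrCreate()

    # Placeholder S3 location; on AWS EMR the s3:// scheme works out of the box.
    df = spark.read.csv("s3://example-bucket/raw/events.csv",
                        header=True, inferSchema=True)

    # The heavy lifting stays in Spark; only a small sample goes to pandas.
    sample = df.limit(1000).toPandas()
    print(sample.describe())
    ```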
  • Google Colab, Pyspark, Cassandra remote cluster combine these all together
    2 projects | dev.to | 13 Sep 2021
    Spark
  • How to Run Spark SQL on Encrypted Data
    3 projects | dev.to | 10 Aug 2021
    For those of you who are new, Apache Spark is a popular distributed computing framework used by data scientists and engineers for processing large batches of data. One of its modules, Spark SQL, allows users to interact with structured, tabular data. This can be done through a Dataset/DataFrame API available in Scala or Python, or by using standard SQL queries. Here is a quick example of both:
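    A minimal sketch of both approaches in PySpark (the data and column names are illustrative):

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()
    df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])

    # 1. DataFrame API
    df.filter(F.col("age") > 30).show()

    # 2. The same query in standard SQL
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age > 30").show()
    ```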
  • Machine Learning Tools and Algorithms
    3 projects | reddit.com/r/u_Snoo36930 | 29 Jul 2021
    Apache Spark :- A massive data processing engine with built-in modules for streaming, SQL, machine learning (ML), and graph processing. Apache Spark is known for being fast, simple to use, and general-purpose.
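    A minimal sketch of the built-in ML module, fitting a logistic regression with spark.ml (the toy data is illustrative):

    ```python
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("spark-ml-demo").getOrCreate()

    # Toy training data: two numeric features and a binary label.
    train = spark.createDataFrame(
        [(0.0, 1.1, 0.0), (2.0, 1.0, 1.0), (2.1, 3.3, 1.0), (0.1, 0.2, 0.0)],
        ["f1", "f2", "label"],
    )

    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    model = LogisticRegression(maxIter=10).fit(assembler.transform(train))
    print(model.coefficients)
    ```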
  • Strategies for running multiple Spark jobs simultaneously?
    1 project | reddit.com/r/apachespark | 25 Jul 2021

What are some alternatives?

When comparing BigDL and Apache Spark you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

Scalding - A Scala API for Cascading

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, etc. It also comes with Hadoop support built in.

mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services

Smile - Statistical Machine Intelligence & Learning Engine

Weka

PyTorch - Tensors and dynamic neural networks in Python with strong GPU acceleration

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.

dpark - A Python clone of Spark; a MapReduce-like framework in Python

Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a modular and tiny C++ library for running math code; and a Java-based math library on top of the core C++ library. Also includes SameDiff, a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.

Summingbird - Streaming MapReduce with Scalding and Storm

Apache Calcite - A dynamic data management framework