delta VS Airflow

Compare delta vs Airflow and see what their differences are.

delta

An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs (by delta-io)
                 delta                 Airflow
Mentions         69                    169
Stars            6,782                 33,953
Stars growth     1.9%                  2.2%
Activity         9.8                   10.0
Latest commit    5 days ago            7 days ago
Language         Scala                 Python
License          Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

delta

Posts with mentions or reviews of delta. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-19.
  • Delta Lake vs. Parquet: A Comparison
    2 projects | news.ycombinator.com | 19 Jan 2024
    Delta is pretty great; it lets you do upserts into tables in Databricks much more easily than without it.

    I think the website is here: https://delta.io
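
    A minimal sketch of what such an upsert looks like with the delta-spark Python package; the paths, table, and column names here are hypothetical, for illustration only:

        # Upsert (MERGE) into a Delta table with PySpark + delta-spark.
        from delta.tables import DeltaTable
        from pyspark.sql import SparkSession

        spark = (
            SparkSession.builder
            .config("spark.sql.extensions",
                    "io.delta.sql.DeltaSparkSessionExtension")
            .config("spark.sql.catalog.spark_catalog",
                    "org.apache.spark.sql.delta.catalog.DeltaCatalog")
            .getOrCreate()
        )

        target = DeltaTable.forPath(spark, "/tmp/delta/events")  # hypothetical path
        updates = spark.read.parquet("/tmp/staging/events")      # new/changed rows

        (
            target.alias("t")
            .merge(updates.alias("u"), "t.event_id = u.event_id")
            .whenMatchedUpdateAll()      # update rows that already exist
            .whenNotMatchedInsertAll()   # insert rows that don't
            .execute()
        )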

  • Understanding Parquet, Iceberg and Data Lakehouses
    4 projects | news.ycombinator.com | 29 Dec 2023
    I often hear references to Apache Iceberg and Delta Lake as if they’re two peas in the Open Table Formats pod. Yet…

    Here’s the Apache Iceberg table format specification:

    https://iceberg.apache.org/spec/

    As they like to say in patent law, anyone “skilled in the art” of database systems could use this to build and query Iceberg tables without too much difficulty.

    This is nominally the Delta Lake equivalent:

    https://github.com/delta-io/delta/blob/master/PROTOCOL.md

    I defy anyone to even scope out what level of effort would be required to fully implement the current spec, let alone what would be involved in keeping up to date as this beast evolves.

    Frankly, the Delta Lake spec reads like a reverse engineering of whatever implementation tradeoffs Databricks is making as they race to build out a lakehouse for every Fortune 1000 company burned by Hadoop (which is to say, most of them).

    My point is that I’ve yet to be convinced that buying into Delta Lake is actually buying into an open ecosystem. Would appreciate any reassurance on this front!

  • Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
    4 projects | dev.to | 18 Dec 2023
    Apache Iceberg is one of the three major lakehouse table formats; the other two are Apache Hudi and Delta Lake.
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    Databricks provides Jupyter-lab-like notebooks for analysis and ETL pipelines using Spark through PySpark, Spark SQL, or Scala. I think R is supported as well, but it doesn't interoperate with their newer features as well as Python and SQL do. It interfaces with cloud storage backends like S3 and offers some improvements over the Parquet format that allow for updating, ordering, and merging through https://delta.io (see the sketch below). They integrate pretty seamlessly with other data visualisation tooling if you want to use that, but their built-in graphs are fine for most cases. They also have ML-on-rails-type menus and models if I recall, but I typically don't use it for that. I've typically used it for ETL or ELT workflows for data that's too big or isn't stored in a database.
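
    As a hedged illustration of those improvements over plain Parquet (in-place updates, time travel, ordering), here is roughly what they look like against a Delta table; the path and columns are hypothetical, and the session needs the Delta extensions enabled:

        from pyspark.sql import SparkSession

        spark = (
            SparkSession.builder
            .config("spark.sql.extensions",
                    "io.delta.sql.DeltaSparkSessionExtension")
            .config("spark.sql.catalog.spark_catalog",
                    "org.apache.spark.sql.delta.catalog.DeltaCatalog")
            .getOrCreate()
        )

        # In-place update -- not possible with plain Parquet files.
        spark.sql("UPDATE delta.`/tmp/delta/events` SET status = 'done' WHERE id = 42")

        # Time travel: read the table as it was at an earlier version.
        old = (
            spark.read.format("delta")
            .option("versionAsOf", 0)
            .load("/tmp/delta/events")
        )

        # Compact files and Z-order by a column to speed up selective queries.
        spark.sql("OPTIMIZE delta.`/tmp/delta/events` ZORDER BY (id)")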
  • The "Big Three's" Data Storage Offerings
    2 projects | /r/dataengineering | 15 Jun 2023
    Structured, semi-structured, and unstructured data can be stored in one single format, a lakehouse storage format like Delta, Iceberg, or Hudi (assuming the workloads don't require low-latency SLAs, e.g. sub-second).
  • Ideas/Suggestions around setting up a data pipeline from scratch
    3 projects | /r/dataengineering | 9 Jun 2023
    As the data source, what I have is a gRPC stream. I get data in protobuf-encoded format from it. This is a fixed part of the overall system; there is no other way to extract the data. We plan to ingest this data into Delta Lake, but before we do that there are a few problems.
  • CSV or Parquet File Format
    3 projects | /r/Python | 1 Jun 2023
    I prefer Parquet (or Delta for larger datasets); CSV for very small datasets, or the ones that will later be used/edited in Excel or Google Sheets.
  • How to build a data pipeline using Delta Lake
    2 projects | dev.to | 19 May 2023
    This sounds like a new trending destination to take selfies in front of, but it’s even better than that. Delta Lake is an “open-source storage layer designed to run on top of an existing data lake and improve its reliability, security, and performance.” (source). It lets you interact with an object storage system like you would with a database; a minimal example follows.
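
    A small sketch of that "like a database" interaction, using the deltalake Python package (delta-rs), which reads and writes Delta tables without a Spark cluster. The local path is hypothetical; object-store URIs such as s3:// work the same way given credentials in storage_options:

        import pandas as pd
        from deltalake import DeltaTable, write_deltalake

        df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

        write_deltalake("/tmp/delta/demo", df)                  # create the table
        write_deltalake("/tmp/delta/demo", df, mode="append")   # transactional append

        dt = DeltaTable("/tmp/delta/demo")
        print(dt.version())     # transaction-log version, enables time travel
        print(dt.to_pandas())   # query it back like a table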
  • Delta.io/deltalake self hosting
    2 projects | /r/bigdata | 26 Apr 2023
    I mean the difference between using the delta.io framework and running it on your own machines/VMs, versus using Databricks with its defined clusters.
    2 projects | /r/bigdata | 26 Apr 2023
    You are right, delta.io is just a framework. Sorry for the unclear question. Another try: when you host Spark on your own with Delta as the table format, compared to using Databricks, what are the differences?

Airflow

Posts with mentions or reviews of Airflow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-07.

What are some alternatives?

When comparing delta and Airflow you can also consider the following projects:

Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.

dagster - An orchestration platform for the development, production, and observation of data assets.

n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

Apache Spark - A unified analytics engine for large-scale data processing.

Dask - Parallel computing with task scheduling

Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more

Apache Camel - Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

argo - Workflow Engine for Kubernetes

Cronicle - A simple, distributed task scheduler and runner with a web based UI.