hudi VS dbt-core

Compare hudi vs dbt-core and see what their differences are.

dbt-core

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications. (by dbt-labs)
              hudi                  dbt-core
Mentions      20                    86
Stars         5,001                 8,718
Growth        2.0%                  6.1%
Activity      9.9                   9.7
Last commit   6 days ago            3 days ago
Language      Java                  Python
License       Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

hudi

Posts with mentions or reviews of hudi. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-18.
  • Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
    4 projects | dev.to | 18 Dec 2023
    Apache Iceberg is one of the three major lakehouse table formats; the other two are Apache Hudi and Delta Lake.
  • The "Big Three's" Data Storage Offerings
    2 projects | /r/dataengineering | 15 Jun 2023
    Structured, semi-structured, and unstructured data can all be stored in a single lakehouse storage format like Delta, Iceberg, or Hudi (assuming the workloads don't require low-latency SLAs, e.g. subsecond reads).
  • Data-eng related highlights from the latest Thoughtworks Tech Radar
    3 projects | /r/dataengineering | 26 Apr 2023
    Apache Hudi
  • How-to-Guide: Contributing to Open Source
    19 projects | /r/dataengineering | 11 Jun 2022
    Apache Hudi
  • 4 best opensource projects about big data you should try out
    4 projects | dev.to | 24 Mar 2022
    1. Hudi
  • How Does The Data Lakehouse Enhance The Customer Data Stack?
    3 projects | dev.to | 31 Jan 2022
    A Lakehouse is an architecture that builds on top of the data lake concept and enhances it with functionality commonly found in database systems. The limitations of the data lake led to the emergence of a number of technologies, including Apache Iceberg and Apache Hudi. These technologies define a Table Format on top of storage formats like ORC and Parquet, on which additional functionality like transactions can be built. (A minimal sketch of this table-format idea follows this list.)
  • SCD type 2 in spark
    2 projects | /r/dataengineering | 15 Oct 2021
    Use Hudi or Delta Lake (see the SCD Type 2 sketch after this list).
  • Would ParquetWriter from pyarrow automatically flush?
    4 projects | /r/learnpython | 11 Sep 2021
    (See the ParquetWriter flushing sketch after this list.)
  • Apache Hudi - The Streaming Data Lake Platform
    8 projects | dev.to | 27 Jul 2021
    But first, we needed to tackle the basics - transactions and mutability - on the data lake. In many ways, Apache Hudi pioneered the transactional data lake movement as we know it today. Specifically, during a time when more special-purpose systems were being born, Hudi introduced a serverless transaction layer that worked over the general-purpose Hadoop FileSystem abstraction on cloud stores/HDFS. This model let Hudi scale writers/readers to thousands of cores on day one, whereas warehouses offer a richer set of transactional guarantees but are often bottlenecked by the tens of servers that must handle them. It was also a joy to see similar systems (e.g., Delta Lake) later adopt the same serverless transaction layer model that we originally shared back in early '17.

    We consciously introduced two table types: Copy On Write (simpler operability) and Merge On Read (greater flexibility); these terms are now used in projects outside Hudi to refer to similar ideas borrowed from it. Through open sourcing and graduating from the Apache Incubator, we have made great progress elevating these ideas across the industry, as well as bringing them to life in a cohesive software stack.

    Given the exciting developments of the past year or so that have propelled data lakes further into the mainstream, we thought some perspective could help users see Hudi with the right lens, appreciate what it stands for, and be a part of where it's headed. We also want to shine some light on the great work done by the project's 180+ contributors, working with more than 2,000 unique users over Slack/GitHub/Jira, who built all the capabilities Hudi has gained over the years from its humble beginnings. (A sketch of choosing between the two table types follows this list.)
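
From the "How Does The Data Lakehouse Enhance The Customer Data Stack?" post above: a table format layers a transaction log over plain Parquet files. The sketch below illustrates that idea with the deltalake Python package (delta-rs); the path and column names are illustrative assumptions, not from the post.

    # Minimal sketch: a table format (here Delta Lake, via the deltalake
    # package) records every write as an atomic commit in a transaction log
    # kept next to the Parquet data files. Paths and columns are illustrative.
    import pandas as pd
    from deltalake import DeltaTable, write_deltalake

    path = "/tmp/events_delta"

    # Each write is one atomic commit in the table's transaction log.
    write_deltalake(path, pd.DataFrame({"id": [1, 2], "v": ["a", "b"]}))
    write_deltalake(path, pd.DataFrame({"id": [3], "v": ["c"]}), mode="append")

    table = DeltaTable(path)
    print(table.version())    # 1 -- two commits: versions 0 and 1
    print(table.files())      # the underlying Parquet data files
    print(table.to_pandas())  # a consistent snapshot across both commits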
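
For the "SCD type 2 in spark" thread: one common pattern with Delta Lake's Spark API (the delta-spark package) is to close out changed "current" rows and then append the new versions. This is a hedged sketch, not the thread's own solution; the paths and the customer_id/address/effective_date/is_current/end_date columns are assumptions for illustration.

    # SCD Type 2 sketch with Delta Lake on Spark: close out superseded rows,
    # then append the new versions as the current ones.
    from pyspark.sql import SparkSession, functions as F
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()  # assumes Delta extensions are configured

    dim_path = "/lake/dim_customer"
    updates = spark.read.parquet("/staging/customer_updates")
    current = spark.read.format("delta").load(dim_path).where("is_current = true")

    # Keep only updates that are new keys or carry a changed attribute.
    changed = (updates.alias("s")
               .join(current.alias("t"), "customer_id", "left")
               .where("t.address IS NULL OR t.address <> s.address")
               .select("s.*"))

    # 1) Close out the superseded current rows.
    (DeltaTable.forPath(spark, dim_path).alias("t")
     .merge(changed.alias("s"),
            "t.customer_id = s.customer_id AND t.is_current = true")
     .whenMatchedUpdate(set={"is_current": "false",
                             "end_date": "s.effective_date"})
     .execute())

    # 2) Append the new versions as open-ended current rows.
    (changed
     .withColumn("is_current", F.lit(True))
     .withColumn("end_date", F.lit(None).cast("date"))
     .write.format("delta").mode("append").save(dim_path))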
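
On the "Would ParquetWriter from pyarrow automatically flush?" question: each write_table() call emits a row group to the output, but the Parquet footer that makes the file readable is only written when the writer is closed, so close it explicitly or use it as a context manager. A small sketch (file name and columns are arbitrary):

    # pyarrow's ParquetWriter: row groups stream out per write_table() call,
    # but the file is not valid Parquet until close() writes the footer.
    import pyarrow as pa
    import pyarrow.parquet as pq

    schema = pa.schema([("id", pa.int64()), ("value", pa.string())])

    with pq.ParquetWriter("out.parquet", schema) as writer:
        for batch_no in range(3):
            table = pa.table({"id": [batch_no], "value": [f"row-{batch_no}"]},
                             schema=schema)
            writer.write_table(table)  # one row group per call; no footer yet
    # leaving the `with` block calls writer.close(), which writes the footer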
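
The last post introduces Hudi's two table types, Copy On Write and Merge On Read. Choosing between them from PySpark comes down to one write option; the option keys below are standard Hudi datasource write options, while the record key, precombine field, and paths are illustrative assumptions.

    # Writing a Hudi table from PySpark, selecting the table type.
    # COPY_ON_WRITE rewrites file slices on update (simpler operability);
    # MERGE_ON_READ logs deltas and compacts later (greater flexibility).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # assumes the Hudi Spark bundle is on the classpath
    df = spark.read.json("/staging/trips")      # illustrative input

    hudi_options = {
        "hoodie.table.name": "trips",
        "hoodie.datasource.write.recordkey.field": "uuid",
        "hoodie.datasource.write.precombine.field": "ts",
        "hoodie.datasource.write.partitionpath.field": "city",
        "hoodie.datasource.write.table.type": "MERGE_ON_READ",  # or COPY_ON_WRITE
        "hoodie.datasource.write.operation": "upsert",
    }

    (df.write.format("hudi")
       .options(**hudi_options)
       .mode("append")
       .save("/lake/trips"))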

dbt-core

Posts with mentions or reviews of dbt-core. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-16.
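
No dbt-core post excerpts are tracked here, but to ground the description above: dbt models are SQL select statements that dbt materializes as tables or views, and since dbt 1.5 dbt-core can also be invoked programmatically from Python. A hedged sketch, assuming an already-configured dbt project and an illustrative model named stg_orders:

    # Minimal sketch: programmatic invocation of dbt-core (dbt >= 1.5).
    # Assumes dbt_project.yml and profiles.yml are already set up;
    # the model name stg_orders is an illustrative assumption.
    from dbt.cli.main import dbtRunner, dbtRunnerResult

    runner = dbtRunner()
    res: dbtRunnerResult = runner.invoke(["run", "--select", "stg_orders"])

    if res.success:
        for r in res.result.results:  # one RunResult per executed model
            print(r.node.name, r.status)
    else:
        raise SystemExit("dbt run failed")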

What are some alternatives?

When comparing hudi and dbt-core you can also consider the following projects:

iceberg - Apache Iceberg

kudu - Mirror of Apache Kudu

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.

pinot - Apache Pinot - A realtime distributed OLAP datastore

metricflow - MetricFlow allows you to define, build, and maintain metrics in code.

delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, and APIs for Scala, Java, Rust, Ruby, and Python.

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.

Apache Avro - Apache Avro is a data serialization system.

dagster - An orchestration platform for the development, production, and observation of data assets.