dbt-spark VS delta

Compare dbt-spark vs delta and see what their differences are.

dbt-spark

dbt-spark contains all of the code enabling dbt to work with Apache Spark and Databricks (by dbt-labs)

delta

An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, along with dedicated language APIs (by delta-io)
              dbt-spark            delta
Mentions      7                    69
Stars         364                  6,919
Growth        1.6%                 1.7%
Activity      8.6                  9.8
Last commit   4 days ago           3 days ago
Language      Python               Scala
License       Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

dbt-spark

Posts with mentions or reviews of dbt-spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-31.
  • Trying Delta Lake at home
    5 projects | /r/dataengineering | 31 Dec 2022
    Spark + dbt => https://github.com/dbt-labs/dbt-spark/blob/main/docker-compose.yml
  • So now dbt is worth $4.2b! Yes, that's a "b" for billion.
    2 projects | /r/dataengineering | 25 Feb 2022
    So the idea is that you land your raw data in a Delta bronze layer, then use dbt models to propagate that data forward to silver and gold, handle all of your data quality, etc. All of the actual execution happens on a Databricks SQL endpoint (or you can use the dbt-spark adapter and run your transforms as Spark jobs on a cluster). A minimal sketch of that bronze-to-silver hop follows.
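    For illustration only, here is a minimal PySpark sketch of one such bronze-to-silver hop. It assumes the delta-spark package is installed; the table paths and column names are hypothetical, not taken from the post.

```python
# Hypothetical bronze -> silver hop in a medallion layout.
# Assumes delta-spark is installed; paths and columns are made up.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("bronze-to-silver")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

bronze = spark.read.format("delta").load("/lake/bronze/events")

silver = (
    bronze.dropDuplicates(["event_id"])                # basic data-quality step
          .filter(F.col("event_ts").isNotNull())       # drop rows missing timestamps
          .withColumn("event_date", F.to_date("event_ts"))
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/events")
```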
  • Show HN: SpotML – Managed ML Training on Cheap AWS/GCP Spot Instances
    6 projects | news.ycombinator.com | 3 Oct 2021
    Neat. Congratulations on the launch!

    Apart from the fact that it could deploy to both GCP and AWS, what does it do differently than AWS Batch [0]?

    When we had a similar problem, we ran jobs on spots with AWS Batch and it worked nicely enough.

    Some suggestions (for a later date):

    1. Add built-in support for Ray [1] (you'd essentially be then competing with Anyscale, which is a VC funded startup, just to contrast it with another comment on this thread) and dbt [2].

    2. Support deploying coin miners (might be good to widen the product's reach; and stand it up against the likes of ConsenSys).

    3. Get in front of many cost optimisation consultants out there, like the Duckbill Group.

    If I may, where are you building this product from? And how many are on the team?

    Thanks.

    [0] https://aws.amazon.com/batch/use-cases/

    [1] https://ray.io/

    [2] https://getdbt.com/

  • Replacing Segment Computed & SQL Traits With dbt & RudderStack Warehouse Actions
    1 project | dev.to | 1 Oct 2021
    It will be helpful to set the stage, as no two technical stacks are the same and not all data warehouse platforms provide the same functionality. It's for the latter that we really like tools like dbt, and the sample files provided here should provide a good starting point for your specific use case. Our instance leverages the cloud version of dbt and connects to our Snowflake data warehouse, where models output tables in a designated dbt schema.
  • Your default tool for ETL
    4 projects | /r/dataengineering | 30 Sep 2021
    T: SQL - views and scheduled queries in BigQuery; planning to go hard with dbt as soon as I can find some breathing room.
  • 7 Alternatives to Using Segment
    2 projects | dev.to | 29 Sep 2021
    Since all of the data is often already in the data warehouse, the logical choice is to simply use it as a CDP. A modern data stack should consist of an end-to-end flow covering data acquisition, collection, and transformation. In most cases, the easiest way to enable this goal is by leveraging tools that are purpose-built to handle a single task. Fivetran, Snowflake, and dbt are great examples of this. In fact, this is the core technology stack that every data-driven company is adopting. Fivetran handles the entire data integration aspect, providing a simple SaaS solution that helps businesses quickly move data out of their SaaS tools and into their data warehouse. Snowflake provides an easy way for organizations to consolidate their data into one location for analytics purposes. Lastly, dbt provides a simple, SQL-based transformation tool that enables users to create reusable data models. These three solutions combined create an effective data management platform.
  • Dbt with Databricks and Delta Lake?
    1 project | /r/dataengineering | 25 Aug 2021
    This is the issue: https://github.com/dbt-labs/dbt-spark/issues/161. Too bad they still haven't fixed it!

delta

Posts with mentions or reviews of delta. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-19.
  • Delta Lake vs. Parquet: A Comparison
    2 projects | news.ycombinator.com | 19 Jan 2024
    Delta is pretty great; it lets you do upserts into tables in Databricks much more easily than without it. A sketch of such an upsert appears below.

    I think the website is here: https://delta.io
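    As context, a minimal sketch of that kind of upsert (MERGE) using Delta Lake's Python DeltaTable API. The path, the join key, and the `updates` DataFrame are hypothetical assumptions, not from the post.

```python
# Hypothetical upsert (MERGE) into a Delta table; assumes a Spark session
# with Delta configured and an existing `updates` DataFrame keyed by user_id.
from delta.tables import DeltaTable

target = DeltaTable.forPath(spark, "/lake/silver/users")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.user_id = s.user_id")
    .whenMatchedUpdateAll()     # update rows that already exist
    .whenNotMatchedInsertAll()  # insert rows that are new
    .execute()
)
```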

  • Understanding Parquet, Iceberg and Data Lakehouses
    4 projects | news.ycombinator.com | 29 Dec 2023
    I often hear references to Apache Iceberg and Delta Lake as if they’re two peas in the Open Table Formats pod. Yet…

    Here’s the Apache Iceberg table format specification:

    https://iceberg.apache.org/spec/

    As they like to say in patent law, anyone “skilled in the art” of database systems could use this to build and query Iceberg tables without too much difficulty.

    This is nominally the Delta Lake equivalent:

    https://github.com/delta-io/delta/blob/master/PROTOCOL.md

    I defy anyone to even scope out what level of effort would be required to fully implement the current spec, let alone what would be involved in keeping up to date as this beast evolves.

    Frankly, the Delta Lake spec reads like a reverse engineering of whatever implementation tradeoffs Databricks is making as they race to build out a lakehouse for every Fortune 1000 company burned by Hadoop (which is to say, most of them).

    My point is that I’ve yet to be convinced that buying into Delta Lake is actually buying into an open ecosystem. Would appreciate any reassurance on this front!

  • Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
    4 projects | dev.to | 18 Dec 2023
    Apache Iceberg is one of the three main lakehouse table formats; the other two are Apache Hudi and Delta Lake.
  • [D] Is there other better data format for LLM to generate structured data?
    1 project | /r/MachineLearning | 10 Dec 2023
    The Apache Spark / Databricks community prefers Apache Parquet or the Linux Foundation's delta.io over JSON.
  • Delta vs Iceberg: make love not war
    1 project | /r/MicrosoftFabric | 30 Jun 2023
    Delta 3.0 extends an olive branch. https://github.com/delta-io/delta/releases/tag/v3.0.0rc1
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    Databricks provides JupyterLab-like notebooks for analysis and ETL pipelines using Spark through PySpark, Spark SQL, or Scala. I think R is supported as well, but it doesn't interop with their newer features as well as Python and SQL do. It interfaces with cloud storage backends like S3 and offers improvements over the plain Parquet format that allow for updating, ordering, and merging through https://delta.io (a sketch of an in-place update follows). They integrate pretty seamlessly with other data visualisation tooling if you want to use it for that, but their built-in graphs are fine for most cases. They also have an ML-on-rails type offering through menus and models if I recall, but I typically don't use it for that. I've typically used it for ETL or ELT type workflows for data that's too big or isn't stored in a database.
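    A minimal sketch of the kind of in-place update and time-travel read that Delta adds on top of Parquet; the path, column names, and version number are hypothetical.

```python
# Hypothetical in-place update on a Delta table, plus a time-travel read;
# assumes a Spark session with Delta configured.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

events = DeltaTable.forPath(spark, "/lake/silver/events")

# Update rows in place -- not possible with plain Parquet files.
events.update(
    condition=F.col("status") == "stale",
    set={"status": F.lit("archived")},
)

# Read the table as it looked at an earlier version.
v3 = spark.read.format("delta").option("versionAsOf", 3).load("/lake/silver/events")
```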
  • The "Big Three's" Data Storage Offerings
    2 projects | /r/dataengineering | 15 Jun 2023
    Structured, semi-structured, and unstructured data can be stored in one single format, a lakehouse storage format like Delta, Iceberg, or Hudi (assuming the workloads don't require low-latency, e.g. subsecond, SLAs).
  • Ideas/Suggestions around setting up a data pipeline from scratch
    3 projects | /r/dataengineering | 9 Jun 2023
    As the data source, what I have is a gRPC stream. I get data in protobuf-encoded format from it. This is a fixed part of the overall system; there is no other way to extract the data. We plan to ingest this data into Delta Lake, but before we do that there are a few problems. (A sketch of the final write step appears below.)
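    For the ingestion end of such a pipeline, a minimal sketch of a Spark Structured Streaming write into a bronze Delta table. It assumes the gRPC/protobuf feed has already been decoded upstream into a streaming DataFrame; all names and paths are hypothetical.

```python
# Hypothetical streaming append into a bronze Delta table; `decoded_stream`
# is assumed to be a streaming DataFrame produced upstream from the gRPC feed.
query = (
    decoded_stream.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/lake/_checkpoints/events")  # required for exactly-once recovery
    .start("/lake/bronze/events")
)
query.awaitTermination()
```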
  • Medallion/lakehouse architecture data modelling
    1 project | /r/dataengineering | 3 Jun 2023
    Take a look at Delta Lake (https://delta.io); it enables a lot of database-like actions on files.
  • CSV or Parquet File Format
    3 projects | /r/Python | 1 Jun 2023
    I prefer parquet (or delta) for larger datasets, and CSV for very small datasets or the ones that will later be used/edited in Excel or Google Sheets. (A small sketch of writing both formats follows.)
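    To make that trade-off concrete, a small sketch writing the same frame as plain Parquet and as a Delta table, using pandas plus the deltalake package (the delta-rs Python bindings, also listed under alternatives below). The data and paths are made up for illustration.

```python
# Hypothetical comparison: the same DataFrame as plain Parquet vs. a Delta table.
import pandas as pd
from deltalake import write_deltalake  # pip install deltalake

df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})

df.to_parquet("events.parquet")        # single immutable file
write_deltalake("./events_delta", df)  # versioned table with a transaction log
```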

What are some alternatives?

When comparing dbt-spark and delta you can also consider the following projects:

dbt-databricks - A dbt adapter for Databricks.

dvc - 🦉 ML Experiments and Data Management with Git

rudderstack-docs - Documentation repository for RudderStack - the Customer Data Platform for Developers.

Apache Cassandra - Mirror of Apache Cassandra

Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

lakeFS - Data version control for your data lake | Git for data

damons-data-lake - All the code related to building my own data lake

hudi - Upserts, Deletes And Incremental Processing on Big Data.

cargo-crates - An easy way to build data extractors in Docker.

delta-rs - A native Rust library for Delta Lake, with bindings into Python

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

iceberg - Apache Iceberg