dbt-spark VS airbyte

Compare dbt-spark vs airbyte and see what their differences are.

dbt-spark

dbt-spark contains all of the code enabling dbt to work with Apache Spark and Databricks (by dbt-labs)

airbyte

The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted. (by airbytehq)
                 dbt-spark             airbyte
Mentions         7                     139
Stars            364                   14,054
Growth           1.6%                  2.4%
Activity         8.6                   10.0
Latest commit    6 days ago            4 days ago
Language         Python                Python
License          Apache License 2.0    GNU General Public License v3.0 or later
Mentions - the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

dbt-spark

Posts with mentions or reviews of dbt-spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-31.
  • Trying Delta Lake at home
    5 projects | /r/dataengineering | 31 Dec 2022
    Spark + dbt => https://github.com/dbt-labs/dbt-spark/blob/main/docker-compose.yml
  • So now dbt is worth $4.2b! Yes, that's a "b" for billion.
    2 projects | /r/dataengineering | 25 Feb 2022
    So the idea is you land your raw data in a Delta bronze layer, then use dbt models to propagate that data forward to silver and gold, do all of your data quality checks, etc., with all of the actual execution happening on a Databricks SQL endpoint (or you can use the dbt-spark adapter and run your transformations as Spark jobs on a cluster).
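
    A minimal sketch of what such a silver-layer model might look like with the dbt-spark adapter on Delta (the source, table, and column names here are hypothetical, not from the post):

        -- models/silver/silver_orders.sql (hypothetical names)
        -- Incrementally merges new bronze rows into a Delta table; dbt-spark
        -- compiles this Jinja-SQL into Spark SQL and runs it on the cluster
        -- or SQL endpoint.
        {{
            config(
                materialized='incremental',
                file_format='delta',
                incremental_strategy='merge',
                unique_key='order_id'
            )
        }}

        select
            order_id,
            cast(order_ts as timestamp) as order_ts,
            upper(trim(country_code)) as country_code
        from {{ source('bronze', 'raw_orders') }}

        {% if is_incremental() %}
          -- on incremental runs, only pick up rows newer than what is already loaded
          where order_ts > (select max(order_ts) from {{ this }})
        {% endif %}
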
  • Show HN: SpotML – Managed ML Training on Cheap AWS/GCP Spot Instances
    6 projects | news.ycombinator.com | 3 Oct 2021
    Neat. Congratulations on the launch!

    Apart from the fact that it could deploy to both GCP and AWS, what does it do differently than AWS Batch [0]?

    When we had a similar problem, we ran jobs on spots with AWS Batch and it worked nicely enough.

    Some suggestions (for a later date):

    1. Add built-in support for Ray [1] (you'd essentially be then competing with Anyscale, which is a VC funded startup, just to contrast it with another comment on this thread) and dbt [2].

    2. Support deploying coin miners (might be good to widen the product's reach, and stand it up against the likes of ConsenSys).

    3. Get in front of many cost optimisation consultants out there, like the Duckbill Group.

    If I may, where are you building this product from? And how many are on the team?

    Thanks.

    [0] https://aws.amazon.com/batch/use-cases/

    [1] https://ray.io/

    [2] https://getdbt.com/

  • Replacing Segment Computed & SQL Traits With dbt & RudderStack Warehouse Actions
    1 project | dev.to | 1 Oct 2021
    It will be helpful to set the stage, as no two technical stacks are the same and not all data warehouse platforms provide the same functionality. It's for the latter that we really like tools like dbt, and the sample files provided here should provide a good starting point for your specific use case. Our instance leverages the cloud version of dbt and connects to our Snowflake data warehouse, where models output tables in a designated dbt schema.
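
    For context, a dbt model in such a setup is just a SQL file with a config header. A hedged, minimal example of a model that materializes a table into a designated schema (model, source, and column names are invented for illustration):

        -- models/marts/customer_traits.sql (hypothetical)
        -- Materialized as a table; with a custom schema, dbt's default naming
        -- creates it as <target_schema>_analytics in the warehouse.
        {{ config(materialized='table', schema='analytics') }}

        select
            user_id,
            count(*) as event_count,
            max(event_ts) as last_seen_at
        from {{ source('segment', 'tracks') }}
        group by user_id
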
  • Your default tool for ETL
    4 projects | /r/dataengineering | 30 Sep 2021
    T: SQL - views and scheduled queries in BigQuery; planning to go hard with dbt as soon as I can find some breathing room
  • 7 Alternatives to Using Segment
    2 projects | dev.to | 29 Sep 2021
    Since all of the data is often already in the data warehouse, the logical choice is to simply use it as a CDP. A modern data stack should consist of an end-to-end flow covering data acquisition, collection, and transformation. In most cases, the easiest way to achieve this is by leveraging tools that are purpose-built to handle a single task. Fivetran, Snowflake, and dbt are great examples of this; in fact, this is the core technology stack that every data-driven company is adopting. Fivetran handles the entire data integration aspect, providing a simple SaaS solution that helps businesses quickly move data out of their SaaS tools and into their data warehouse. Snowflake provides an easy way for organizations to consolidate their data into one location for analytics purposes. Lastly, dbt provides a simple, SQL-based transformation tool that enables users to create reusable data models. These three solutions combined create an effective data management platform.
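
    To make the reusable-models point concrete, here is a minimal sketch of one dbt model building on another via ref() (model names are illustrative, not from the post):

        -- models/marts/daily_revenue.sql (hypothetical)
        -- ref() declares a dependency on stg_orders, so dbt builds the staging
        -- model first and any number of downstream models can reuse its
        -- cleaned output.
        select
            date_trunc('day', order_ts) as order_date,
            sum(amount) as revenue
        from {{ ref('stg_orders') }}
        group by 1
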
  • Dbt with Databricks and Delta Lake?
    1 project | /r/dataengineering | 25 Aug 2021
    This is the issue: https://github.com/dbt-labs/dbt-spark/issues/161. Too bad they still haven't fixed it!

airbyte

Posts with mentions or reviews of airbyte. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-02.

What are some alternatives?

When comparing dbt-spark and airbyte you can also consider the following projects:

dbt-databricks - A dbt adapter for Databricks.

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

rudderstack-docs - Documentation repository for RudderStack - the Customer Data Platform for Developers.

dagster - An orchestration platform for the development, production, and observation of data assets.

Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

Prefect - The easiest way to build, run, and monitor data pipelines at scale.

damons-data-lake - All the code related to building my own data lake

meltano - An open-source ELT platform and CLI for building data integration pipelines.

cargo-crates - An easy way to build data extractors in Docker.

jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set up a real-time data pipeline in minutes, not days.

nimbo - Run compute jobs on AWS as if you were running them locally.

spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs