dbt-spark VS cargo-crates

Compare dbt-spark vs cargo-crates and see how they differ.

dbt-spark

dbt-spark contains all of the code enabling dbt to work with Apache Spark and Databricks (by dbt-labs)

cargo-crates

An easy way to build data extractors in Docker. (by dacort)

                dbt-spark           cargo-crates
Mentions        7                   3
Stars           364                 1
Stars growth    1.6%                -
Activity        8.6                 3.1
Last commit     6 days ago          20 days ago
Language        Python              Python
License         Apache License 2.0  MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

dbt-spark

Posts with mentions or reviews of dbt-spark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-31.
  • Trying Delta Lake at home
    5 projects | /r/dataengineering | 31 Dec 2022
    Spark + dbt => https://github.com/dbt-labs/dbt-spark/blob/main/docker-compose.yml (a connection-profile sketch follows this list)
  • So now dbt is worth $4.2b! Yes, that's a "b" for billion.
    2 projects | /r/dataengineering | 25 Feb 2022
    So the idea is you land your data raw in a Delta bronze layer, then use dbt models to propagate that data forward to silver and gold, handle all of your data quality, etc., with all of the actual execution happening on a Databricks SQL endpoint (or you can use the dbt-spark adapter and run your transforms as Spark on a cluster). A minimal model sketch follows this list.
  • Show HN: SpotML – Managed ML Training on Cheap AWS/GCP Spot Instances
    6 projects | news.ycombinator.com | 3 Oct 2021
    Neat. Congratulations on the launch!

    Apart from the fact that it could deploy to both GCP and AWS, what does it do differently than AWS Batch [0]?

    When we had a similar problem, we ran jobs on spots with AWS Batch and it worked nicely enough.

    Some suggestions (for a later date):

    1. Add built-in support for Ray [1] (you'd essentially be then competing with Anyscale, which is a VC funded startup, just to contrast it with another comment on this thread) and dbt [2].

    2. Support deploying coin miners (might be good to widen the product's reach; and stand it up against the likes of ConsenSys).

    3. Get in front of many cost optimisation consultants out there, like the Duckbill Group.

    If I may, where are you building this product from? And how many are on the team?

    Thanks.

    [0] https://aws.amazon.com/batch/use-cases/

    [1] https://ray.io/

    [2] https://getdbt.com/

  • Replacing Segment Computed & SQL Traits With dbt & RudderStack Warehouse Actions
    1 project | dev.to | 1 Oct 2021
    It will be helpful to set the stage, as no two technical stacks are the same and not all data warehouse platforms provide the same functionality. It's for the latter that we really like tools like dbt, and the sample files provided here should provide a good starting point for your specific use case. Our instance leverages the cloud version of dbt and connects to our Snowflake data warehouse, where models output tables in a designated dbt schema.
  • Your default tool for ETL
    4 projects | /r/dataengineering | 30 Sep 2021
    T: SQL - views and scheduled queries in BigQuery; planning to go hard with dbt as soon as I can find some breathing room
  • 7 Alternatives to Using Segment
    2 projects | dev.to | 29 Sep 2021
    Since all of the data is often already in the data warehouse, the logical choice is to simply use it as a CDP. A modern data stack should provide an end-to-end flow across data acquisition, collection, and transformation. In most cases, the easiest way to achieve this is by leveraging tools that are purpose-built to handle a single task, and Fivetran, Snowflake, and dbt are great examples. In fact, this is the core technology stack that every data-driven company is adopting. Fivetran handles the entire data integration aspect, providing a simple SaaS solution that helps businesses quickly move data out of their SaaS tools and into their data warehouse. Snowflake provides an easy way for organizations to consolidate their data into one location for analytics purposes. Lastly, dbt provides a simple, SQL-based transformation tool that enables users to create reusable data models. These three solutions combined create an effective data management platform.
  • Dbt with Databricks and Delta Lake?
    1 project | /r/dataengineering | 25 Aug 2021
    This is the issue: https://github.com/dbt-labs/dbt-spark/issues/161. Too bad they still haven't fixed it!
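
The docker-compose.yml linked in the Delta Lake post above brings up a local Spark Thrift server. For context, a minimal sketch of a ~/.dbt/profiles.yml that points dbt-spark at such a server might look like the following; the profile name, port, and schema are assumptions for a local setup, not taken from the repo:

    # ~/.dbt/profiles.yml -- minimal sketch for a local Spark Thrift server
    spark_local:
      target: dev
      outputs:
        dev:
          type: spark
          method: thrift     # dbt-spark also supports odbc, http, and session
          host: localhost
          port: 10000        # assumed: the default Spark Thrift server port
          schema: analytics  # assumed: the schema dbt builds models into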
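
To make the bronze-to-silver flow from the valuation post above concrete, a dbt model on Spark is just a templated SELECT that the adapter materializes on the cluster. The source, table, and column names below are made up for illustration:

    -- models/silver_orders.sql (hypothetical model and column names)
    {{ config(
        materialized='incremental',
        file_format='delta',
        incremental_strategy='merge',
        unique_key='order_id'
    ) }}

    select
        order_id,
        cast(order_ts as timestamp) as order_ts,
        upper(country_code) as country_code
    from {{ source('bronze', 'raw_orders') }}
    {% if is_incremental() %}
      -- only process rows newer than what is already in the silver table
      where order_ts > (select max(order_ts) from {{ this }})
    {% endif %}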

cargo-crates

Posts with mentions or reviews of cargo-crates. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-09-30.
  • Docker - Magic or Hype?
    1 project | /r/dataengineering | 6 Apr 2023
    I've used this benefit in one of my personal side projects (cargo-crates) to have ready-made containers for data extraction purposes. I'm always picking up projects and putting them back down, or shifting which versions of different libraries I have on my laptop, so picking up an old project with specific library dependencies can be really annoying.
  • Your default tool for ETL
    4 projects | /r/dataengineering | 30 Sep 2021
    I went a little crazy and built my own set of data extractors that I can deploy with CDK to ECS. (A CDK sketch follows this list.)
  • Why is it so hard to think of a DE side project idea ?
    3 projects | /r/dataengineering | 29 Jun 2021
    - Extract data from system. I wear an Oura ring for sleep tracking. I wanted to do my own analysis of the data, so I built a system that could easily allow me to extract the data into S3 so I could query it. https://github.com/dacort/cargo-crates Will anybody find that useful? Maybe...but it's been a heck of a lot of fun and really pushed my Docker skills. (A sketch of the invocation pattern follows this list.)
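
The Oura example above illustrates the cargo-crates pattern: each extractor ships as a Docker image that reads credentials from the environment and writes to a destination such as S3. The image name, variables, and arguments below are made up to show the shape of an invocation, not the project's documented interface:

    # Hypothetical invocation -- image name, env vars, and args are assumptions
    docker run --rm \
      -e OURA_ACCESS_TOKEN="$OURA_ACCESS_TOKEN" \
      -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
      -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
      ghcr.io/dacort/crates-oura \
      extract sleep s3://my-bucket/oura/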
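
For the CDK-to-ECS deployment mentioned in the ETL thread above, a minimal CDK v2 (Python) sketch of running such a container on a schedule could look like this; the stack layout, image name, and schedule are assumptions:

    # app.py -- minimal CDK v2 sketch; image name and schedule are assumptions
    import aws_cdk as cdk
    from aws_cdk import aws_applicationautoscaling as appscaling
    from aws_cdk import aws_ec2 as ec2
    from aws_cdk import aws_ecs as ecs
    from aws_cdk import aws_ecs_patterns as ecs_patterns

    app = cdk.App()
    stack = cdk.Stack(app, "ExtractorStack")

    # Network and cluster for the scheduled Fargate task
    vpc = ec2.Vpc(stack, "Vpc", max_azs=2)
    cluster = ecs.Cluster(stack, "Cluster", vpc=vpc)

    # Run the (hypothetical) extractor image every night at 02:00 UTC
    ecs_patterns.ScheduledFargateTask(
        stack, "NightlyExtract",
        cluster=cluster,
        schedule=appscaling.Schedule.cron(hour="2", minute="0"),
        scheduled_fargate_task_image_options=ecs_patterns.ScheduledFargateTaskImageOptions(
            image=ecs.ContainerImage.from_registry("dacort/example-extractor"),
            memory_limit_mib=512,
        ),
    )

    app.synth()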

What are some alternatives?

When comparing dbt-spark and cargo-crates you can also consider the following projects:

dbt-databricks - A dbt adapter for Databricks.

airflow-docker - This is my Apache Airflow Local development setup on Windows 10 WSL2/Mac using docker-compose. It will also include some sample DAGs and workflows.

rudderstack-docs - Documentation repository for RudderStack - the Customer Data Platform for Developers.

Prefect - The easiest way to build, run, and monitor data pipelines at scale.

Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

damons-data-lake - All the code related to building my own data lake

Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]

nimbo - Run compute jobs on AWS as if you were running them locally.

trino-getting-started

criu-image-streamer - Enables streaming of images to and from CRIU during checkpoint/restore with low overhead