projects VS dbt-core

Compare projects vs dbt-core and see what their differences are.

projects

Sample projects using Ploomber. (by ploomber)

dbt-core

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications. (by dbt-labs)
                projects              dbt-core
Mentions        19                    86
Stars           77                    8,906
Growth          -                     2.1%
Activity        4.7                   9.7
Latest commit   3 months ago          4 days ago
Language        Jupyter Notebook      Python
License         Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

projects

Posts with mentions or reviews of projects. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-08.
  • Analyze and plot 5.5M records in 20s with BigQuery and Ploomber
    2 projects | dev.to | 8 Aug 2022
    You can look at the files in detail here. For this tutorial, I'll quickly mention a few crucial details.
  • Three Tools for Executing Jupyter Notebooks
    6 projects | dev.to | 25 Jul 2022
    Ploomber is the complete solution for notebook execution. It builds on top of papermill and extends it so you can write multi-stage workflows where each task is a notebook, while it manages orchestration automatically, so you can run notebooks in parallel without writing extra code (see the sketch below).
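As a rough illustration of the workflow the post above describes, here is a minimal sketch using Ploomber's Python API. The notebook filenames and output paths are invented for the example; this is not a definitive recipe.

```python
from pathlib import Path

from ploomber import DAG
from ploomber.executors import Parallel
from ploomber.products import File
from ploomber.tasks import NotebookRunner

# each task is a notebook; the Parallel executor runs independent
# branches concurrently (notebooks need a cell tagged "parameters"
# so Ploomber can inject upstream/product values)
dag = DAG(executor=Parallel())

load = NotebookRunner(Path('load.ipynb'),
                      File('output/load.ipynb'), dag=dag, name='load')
clean = NotebookRunner(Path('clean.ipynb'),
                       File('output/clean.ipynb'), dag=dag, name='clean')
report = NotebookRunner(Path('report.ipynb'),
                        File('output/report.ipynb'), dag=dag, name='report')

# declare the execution order; Ploomber handles the orchestration
load >> clean >> report

dag.build()
```

The same pipeline can also be declared in a pipeline.yaml file, which is the more common setup in the Ploomber examples.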
  • OOP in python ETL?
    3 projects | /r/dataengineering | 14 Mar 2022
    The answer is YES, you can take advantage of OOP best practices to write good ETLs. For instance, in this Ploomber sample ETL you can see a mix of .sql and .py files organized into modular components, so it's easier to test, deploy, and execute (a sketch of the idea follows below). It's way easier than Airflow since there's no infra work involved; you only have to set up your pipeline.yaml file. This also lets you make the code WAY more maintainable and scalable, avoid redundant code, and deploy faster :)
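To make the modularity claim above concrete, here is a hedged sketch of how small, individually testable functions become pipeline tasks (the function names, columns, and file paths are made up for the example):

```python
import pandas as pd

from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import PythonCallable

def extract(product):
    # each step is a plain function, so it can be unit-tested in isolation
    df = pd.read_csv('raw/sales.csv')  # hypothetical input file
    df.to_csv(str(product), index=False)

def transform(upstream, product):
    # Ploomber injects `upstream` with the products of parent tasks
    df = pd.read_csv(str(upstream['extract']))
    df['total'] = df['price'] * df['quantity']
    df.to_csv(str(product), index=False)

dag = DAG()
t_extract = PythonCallable(extract, File('output/extracted.csv'),
                           dag, name='extract')
t_transform = PythonCallable(transform, File('output/transformed.csv'),
                             dag, name='transform')
t_extract >> t_transform
dag.build()
```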
  • What are some good DS/ML repos where I can learn about structuring a DS/ML project?
    3 projects | /r/datascience | 27 Feb 2022
    We have tons of examples that follow a standard layout, here’s one: https://github.com/ploomber/projects/tree/master/templates/ml-intermediate
  • Anyone's org using Airflow as a generalized job orchestator, not just for data engineering/ETL?
    2 projects | /r/dataengineering | 23 Feb 2022
    I can talk about the open-source project I'm working on, Ploomber (https://github.com/ploomber/ploomber); it focuses on seamless integration with Jupyter and IDEs. It gives you an easy mechanism to orchestrate work; for instance, here's an example SQL ETL. You can then deploy it anywhere, so if you're working with Airflow, it'll deploy there too, but without the complexity: you wouldn't have to maintain Docker images, etc.
  • ETL with python
    3 projects | /r/ETL | 20 Feb 2022
    I recommend using Ploomber, which can help you build once and automate a lot of the work, and it works natively with Python. It's open source, so you can start with one of the examples, like the ML-basic example or the ETL one. It'll let you define the pipeline and then easily explain the flow with the DAG plot (see the sketch below). Feel free to ask questions; I'm happy to help (I've built hundreds of data pipelines over the years).
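For the DAG plot mentioned above, a minimal sketch, assuming a pipeline.yaml already exists in the working directory:

```python
from ploomber.spec import DAGSpec

# load the pipeline declared in pipeline.yaml and render its structure
dag = DAGSpec('pipeline.yaml').to_dag()
dag.plot(output='pipeline.png')  # roughly what `ploomber plot` does on the CLI
```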
  • What tools do you use for data quality?
    2 projects | /r/dataengineering | 8 Feb 2022
    I'm not sure which pipeline frameworks support this kind of testing, but after successfully implementing this workflow, I added the feature to Ploomber, the project I'm working on (sketched below). Here's what a pipeline looks like, and here's a tutorial.
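The testing workflow referenced above boils down to attaching a check that runs right after a task finishes; here is a hedged sketch using Ploomber's on_finish hook (the data and column names are invented):

```python
import pandas as pd

from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import PythonCallable

def clean(product):
    df = pd.DataFrame({'age': [25, 32, 41]})  # stand-in for real cleaning logic
    df.to_csv(str(product), index=False)

def quality_check(product):
    # runs right after `clean`; a failed assertion aborts the build
    df = pd.read_csv(str(product))
    assert not df['age'].isna().any(), 'null ages found'
    assert df['age'].between(0, 120).all(), 'age out of range'

dag = DAG()
task = PythonCallable(clean, File('output/clean.csv'), dag, name='clean')
task.on_finish = quality_check
dag.build()
```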
  • Data pipeline suggestions
    13 projects | /r/dataengineering | 4 Feb 2022
    Check out Ploomber (disclaimer: I'm the author); it has a simple API, and you can export to Airflow, AWS, and Kubernetes. It supports all databases that work with Python, and you can seamlessly hand off from a SQL step to a Python step (sketched below). Here's an example.
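A rough sketch of the SQL-to-Python handoff mentioned above, assuming a local SQLite database with a raw_sales table (all names are made up; a real pipeline would more likely declare this in pipeline.yaml):

```python
import pandas as pd

from ploomber import DAG
from ploomber.clients import SQLAlchemyClient
from ploomber.products import File, SQLiteRelation
from ploomber.tasks import PythonCallable, SQLScript

dag = DAG()
client = SQLAlchemyClient('sqlite:///sales.db')
dag.clients[SQLScript] = client
dag.clients[SQLiteRelation] = client

# SQL step: {{product}} renders to the table declared below
clean = SQLScript("""
DROP TABLE IF EXISTS {{product}};
CREATE TABLE {{product}} AS
SELECT * FROM raw_sales WHERE amount > 0
""", SQLiteRelation(('clean_sales', 'table')), dag, name='clean')

# Python step: the upstream table name arrives through `upstream`
def summarize(upstream, product):
    df = pd.read_sql(f"SELECT * FROM {upstream['clean']}", 'sqlite:///sales.db')
    df.describe().to_csv(str(product))

summary = PythonCallable(summarize, File('output/summary.csv'),
                         dag, name='summarize')
clean >> summary
dag.build()
```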
  • ETL Tools
    2 projects | /r/BusinessIntelligence | 4 Feb 2022
    Without more specifics about your use case, it's hard to give more specific advice. But check out Ploomber (disclaimer: I'm the creator) - here's an example ETL pipeline. I've used it in past projects to develop Oracle ETL pipelines. Modularizing the analysis in many parts helps a lot with maintenance.
  • Whats something hot rn or whats going to be next thing we should focus on in data engineering?
    4 projects | /r/dataengineering | 3 Feb 2022
    Yes! (tell your friend). You can write shell scripts so you can execute that 2002 code :) You can test it locally and then run it on AWS Batch/Argo. Here's an example.

dbt-core

Posts with mentions or reviews of dbt-core. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-16.
  • Dbt
    1 project | news.ycombinator.com | 18 Feb 2024
  • Relational is more than SQL
    11 projects | news.ycombinator.com | 16 Sep 2023
    dbt integration was one of our major goals early on, but we found that the interaction wasn't as straightforward as we had hoped.

    There is an open PR in the dbt repo: https://github.com/dbt-labs/dbt-core/pull/5982#issuecomment-...

    I have some ideas about future directions in this space where I believe PRQL could really shine. I will only be able to write those down in a couple of hours. I think this could be a really exciting direction for the project to grow into if anyone would like to collaborate and contribute!

  • How to Level Up Beyond ETLs: From Query Optimization to Code Generation
    1 project | news.ycombinator.com | 6 Sep 2023
    > Could you share more specific details? Happy to look over / revise where needed.

    Sure thing! I'd say first off, the solutions may look different for a small company/startup vs. a large enterprise. It can help if you explain the scale at which you are solving for.

    On the enterprise side of things, they tend to buy solutions rather than build them in-house. Things like Informatica, Talend, etc. are common for large enterprises whose primary products are not data or software related. They just don't have the will, expertise, or the capital to invest in building and maintaining these solutions in-house so they just buy them off the shelf. On the surface, these are very expensive products, but even in the face of that it can still make sense for large enterprises in terms of the bottom line to buy rather than build.

    For startups and smaller companies, have you looked at something like `dbt` (https://github.com/dbt-labs/dbt-core)? I understand the desire to write some code, but oftentimes there are already existing solutions for the problems you might be encountering.

    ORMs should typically only exist on the consumer side of the equation, if at all. A lot of business intelligence / business analyst users are just going to use tools like Tableau and hook up to the data warehouse via a connector to visualize their data. You might have some consumers that are more sophisticated and want to write custom post-processing or aggregation code, and they could certainly use ORMs if they choose, but it isn't something you should enforce on them, because it's a poor place to validate data: as mentioned, there are different ways and tools to access the data, and not all of them are going to go through your Python SDK.

    Indeed, in a large enough company you are going to have producers and consumers using different tools and programming languages, so it's a little presumptuous to write an SDK in Python there.

    Another thing to talk about, and this probably mostly applies to larger companies - have you looked at an architecture like a distributed data mesh (https://martinfowler.com/articles/data-mesh-principles.html)? This might be something to bring to the CTO more than try to push for yourself, but it can completely change the landscape of what you are doing.

    > More broadly is the issue of the gap of what you think the role is, and what the role actually is when you join. There are definitely cases where this is accidental. The best way I can think of to close the gap is to maybe do a short-term contract, but may be challenging to do under time constraints etc.

    Yeah, this definitely sucks, and it's not an enviable position to be in. I guess you have a choice: look for another job or try to stick it out with the company that did this to you. It's possible there is a genuine existential crisis for the company and a good reason for the bait-and-switch. Maybe it pays to stay, especially if you have equity in the company. On the other hand, it could also be the result of questionable practices at the company. It's hard to make that call.

  • Python: Just Write SQL
    21 projects | news.ycombinator.com | 14 Aug 2023
    I really dislike SQL but recognize its importance for many organizations. I also understand that SQL is definitely testable, particularly if managed by environments such as dbt (https://github.com/dbt-labs/dbt-core). Those who arrived here with a preference for Python will note that dbt is largely implemented in Python, adds Jinja macros and iterative forms to SQL, and adds code-testing capabilities (illustrated below).
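The Jinja-over-SQL idea mentioned in the post above can be illustrated in a few lines of plain Python. The resolver here is a toy that only mimics what dbt's ref() does; it is not dbt's actual API:

```python
from jinja2 import Template

# a dbt-style model: SQL with Jinja placeholders
model = Template(
    "select order_id, amount from {{ ref('stg_orders') }} "
    "where amount > {{ min_amount }}"
)

# dbt resolves ref() to the materialized relation; here, a toy resolver
rendered = model.render(ref=lambda name: f'analytics."{name}"', min_amount=100)
print(rendered)
# select order_id, amount from analytics."stg_orders" where amount > 100
```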
  • Transform Your Data Like a Pro With dbt (Data Build Tool)
    2 projects | dev.to | 8 Jun 2023
    3) Data Build Tool Repository.
  • What are your thoughts on dbt Cloud vs other managed dbt Core platforms?
    1 project | /r/dataengineering | 23 May 2023
    dbt Labs rightfully gets a lot of credit for creating dbt Core and for being the first to offer a managed dbt Core platform (dbt Cloud), but there are several other entrants in the market, from those that just run dbt jobs, like Fivetran, to platforms that offer more (EL + T), like Mozart Data and Datacoves; the latter also has a hosted VS Code editor for dbt development and Airflow.
  • How do I build a docker image based on a Dockerfile on github?
    2 projects | /r/docker | 5 May 2023
  • Dbt vs. SqlMesh
    1 project | /r/dataengineering | 29 Apr 2023
    Ahh, I misunderstood; yes, column-level lineage is useful. dbt prefers leveraging macros, which sort of breaks this pattern. I think the dbt way would be to better separate fields into upstream models and use table tracking: https://github.com/dbt-labs/dbt-core/discussions/4458
  • DBT core v1.5 released
    2 projects | /r/dataengineering | 28 Apr 2023
    Here's the issue, which includes the what/how/why: https://github.com/dbt-labs/dbt-core/issues/7158
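One of the headline additions in dbt-core 1.5 is programmatic invocation. The documented usage looks roughly like this (the model selector is hypothetical):

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# equivalent to `dbt run --select my_model` on the command line
res: dbtRunnerResult = dbt.invoke(["run", "--select", "my_model"])

if res.success:
    for r in res.result:
        print(f"{r.node.name}: {r.status}")
```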
  • DBT Install
    1 project | /r/Supabase | 22 Mar 2023
    I've attached a link to their documentation. dbt is becoming increasingly popular within the data engineering community, with over 5k stars on GitHub.

What are some alternatives?

When comparing projects and dbt-core you can also consider the following projects:

cookiecutter-data-science - A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️

metricflow - MetricFlow allows you to define, build, and maintain metrics in code.

Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set up a real-time data pipeline in minutes, not days

n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.

Python Packages Project Generator - 🚀 Your next Python package needs a bleeding-edge project structure.

citus - Distributed PostgreSQL as an extension

castled - Castled is an open source reverse ETL solution that helps you to periodically sync the data in your db/warehouse into sales, marketing, support or custom apps without any help from engineering teams

dagster - An orchestration platform for the development, production, and observation of data assets.