dbt-core
metricflow
| | dbt-core | metricflow |
|---|---|---|
| Mentions | 86 | 4 |
| Stars | 8,881 | 1,073 |
| Growth | 3.4% | 2.5% |
| Activity | 9.7 | 9.8 |
| Latest commit | 5 days ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dbt-core
- Dbt
-
Relational is more than SQL
dbt integration was one of our major goals early on, but we found that the interaction wasn't as straightforward as we had hoped.
There is an open PR in the dbt repo: https://github.com/dbt-labs/dbt-core/pull/5982#issuecomment-...
I have some ideas about future directions in this space where I believe PRQL could really shine. I will only be able to write those down in a couple of hours. I think this could be a really exciting direction for the project to grow into if anyone would like to collaborate and contribute!
-
How to Level Up Beyond ETLs: From Query Optimization to Code Generation
> Could you share more specific details? Happy to look over / revise where needed.
Sure thing! I'd say first off, the solutions may look different for a small company/startup vs. a large enterprise. It can help if you explain the scale you are solving for.
On the enterprise side of things, they tend to buy solutions rather than build them in-house. Things like Informatica, Talend, etc. are common for large enterprises whose primary products are not data or software related. They just don't have the will, the expertise, or the capital to invest in building and maintaining these solutions in-house, so they buy them off the shelf. On the surface these are very expensive products, but even so, buying rather than building can still make sense for a large enterprise's bottom line.
For startups and smaller companies, have you looked at something like `dbt` (https://github.com/dbt-labs/dbt-core)? I understand the desire to write some code, but oftentimes there are already existing solutions for the problems you might be encountering.
ORMs should typically only exist on the consumer side of the equation, if at all. A lot of business intelligence / business analyst users are just going to use tools like Tableau and hook up to the data warehouse via a connector to visualize their data. You might have some consumers who are more sophisticated and want to write some custom post-processing or aggregation code, and they can certainly use ORMs if they choose, but it isn't something you should enforce on them. An ORM is a poor place to validate data because, as mentioned, there are different ways and tools to access the data, and not all of them are going to go through your Python SDK.
Indeed, in a large enough company you are going to have producers and consumers using different tools and programming languages, so it's a little presumptuous to write an SDK in Python there.
Another thing to talk about, and this probably applies mostly to larger companies: have you looked at an architecture like a distributed data mesh (https://martinfowler.com/articles/data-mesh-principles.html)? This might be something to bring to the CTO rather than try to push for yourself, but it can completely change the landscape of what you are doing.
> More broadly is the issue of the gap of what you think the role is, and what the role actually is when you join. There are definitely cases where this is accidental. The best way I can think of to close the gap is to maybe do a short-term contract, but may be challenging to do under time constraints etc.
Yeah, this definitely sucks and it's not an enviable position to be in. I guess you have a choice: look for another job, or try to stick it out with the company that did this to you. It's possible there is a genuine existential crisis for the company and a good reason why they did the bait-and-switch. Maybe it pays to stay, especially if you have equity in the company. On the other hand, it could also be the result of questionable practices at the company. It's hard to make that call.
-
Python: Just Write SQL
I really dislike SQL, but recognize its importance for many organizations. I also understand that SQL is definitely testable, particularly if managed by environments such as dbt (https://github.com/dbt-labs/dbt-core). Those who arrived here with a preference for Python will note that dbt is largely implemented in Python, adds Jinja macros and iteration to SQL, and adds code-testing capabilities.
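To make that concrete, here is a minimal sketch of what a dbt model looks like; the model and column names are hypothetical, but `ref()` and the Jinja loop are the standard mechanics the comment refers to.

```sql
-- models/customer_order_amounts.sql  (hypothetical model and column names)
select
    customer_id,
    -- This Jinja loop is expanded at compile time into one aggregate column per status
    {% for status in ['placed', 'shipped', 'returned'] %}
    sum(case when status = '{{ status }}' then amount else 0 end) as {{ status }}_amount
    {{- "," if not loop.last }}
    {% endfor %}
from {{ ref('stg_orders') }}  -- ref() resolves to the upstream model and records lineage
group by customer_id
```

Data tests such as `not_null` and `unique` are then declared in a YAML file alongside the model and run with `dbt test`.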
-
Transform Your Data Like a Pro With dbt (Data Build Tool)
3. Data Build Tool Repository.
-
What are your thoughts on dbt Cloud vs other managed dbt Core platforms?
dbt Labs rightfully gets a lot of credit for creating dbt Core and for dbt Cloud being the first managed dbt Core platform, but there are now several entrants in the market: from those that just run dbt jobs, like Fivetran, to platforms that offer more of an EL + T experience, like Mozart Data and Datacoves, which also offers a hosted VS Code editor for dbt development and Airflow.
- How do I build a docker image based on a Dockerfile on github?
-
Dbt vs. SqlMesh
Ah, I misunderstood; yes, column-level lineage is useful. dbt prefers leveraging macros, which sort of breaks this pattern. I think the dbt way would be to separate fields into upstream models and use table-level tracking: https://github.com/dbt-labs/dbt-core/discussions/4458
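For illustration only (the macro, model, and column names below are hypothetical), this is roughly the contrast being drawn: a macro hides which input columns feed a derived column, while doing the derivation in an explicit upstream model keeps the column mapping visible in plain SQL.

```sql
-- Pattern 1: the logic is buried in a macro, so a lineage tool only sees the macro call,
-- not which input columns produce amount_usd.
select
    order_id,
    {{ to_usd('amount', 'currency_code') }} as amount_usd
from {{ ref('stg_orders') }}

-- Pattern 2: the derivation lives in its own upstream model (e.g. models/int_orders_usd.sql),
-- so the column-level mapping is explicit SQL that lineage tools can parse.
select
    o.order_id,
    o.amount * r.usd_rate as amount_usd
from {{ ref('stg_orders') }} as o
join {{ ref('exchange_rates') }} as r
    on o.currency_code = r.currency_code
```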
-
DBT core v1.5 released
Here’s the PR, which includes a what/how/why: https://github.com/dbt-labs/dbt-core/issues/7158
-
DBT Install
I've attached a link to their documentation. dbt is becoming increasingly popular within the data engineering community, with over 5k stars on GitHub.
metricflow
- MetricFlow allows you to define, build, and maintain metrics in code.
-
Show HN: MetricFlow – open-source metric framework
Three things:
First, MetricFlow does not currently support MySQL. We launched with support for BigQuery, Redshift, and Snowflake. I have opened an issue to add support for MySQL (and similar issues for other SQL engines are coming): https://github.com/transform-data/metricflow/issues/27
Second, what we call a data source is closer to a table in a database than to the underlying database service itself. MetricFlow is useful when you're using a single SQL engine - indeed, that's all we support today - but it is most useful when you're in a world where joins are a thing. That said, if you have one big data table you might still find it useful to have declarative metric definitions in MetricFlow. Suppose, for example, you had a big NoSQL-style table filled with JSON objects. You might define a few data sources that normalize those JSON objects into top-level elements (identifiers, dimensions, aggregated measures) using the sql_query data source config attribute, and that would allow you to support structured queries on the data consumption end while pushing unstructured blobs from your application layer (a rough sketch of what such a query could look like follows this comment). This will be slow at query time, and only as reliable as the level of discipline exerted in your application development workflow, but it's possible.
Third, if we did support MySQL you'd basically connect to it via standard connection parameters - we have a config file where you can store the required information and then we'll manage the connections for you. However, I'm not familiar with uxwizz, and a quick perusal of their documentation did not turn up how one goes about connecting to the underlying DB. It's likely I just missed this, but at any rate I don't know how it is done. If they don't support standard MySQL client connections you'd need to write an adapter of some kind against whatever DB connection APIs they provide, in which case you'd likely need to roll a custom implementation of MetricFlow's SqlClient interface and initialize the MetricFlowEngine with that.
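As a rough illustration of the second point above - the table, columns, and JSON syntax here are hypothetical and engine-dependent - the SQL placed in a data source's sql_query attribute might look something like this for a table of raw JSON events, exposing identifiers, dimensions, and measures as flat columns:

```sql
-- Hypothetical sql_query for a MetricFlow data source over a table of raw JSON events.
-- JSON extraction syntax varies by engine; this uses Snowflake-style variant access.
select
    raw_event:user_id::string       as user_id,      -- identifier
    raw_event:country::string       as country,      -- dimension
    raw_event:created_at::timestamp as created_at,   -- time dimension
    raw_event:order_total::float    as order_total   -- measure to aggregate over
from analytics.raw_json_events
```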
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
dbt_metrics - Macros for calculating metrics
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
dictum - Describe business metrics with YAML, query and visualize in Jupyter with zero SQL
citus - Distributed PostgreSQL as an extension
dbt - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications. [Moved to: https://github.com/dbt-labs/dbt-core]
dagster - An orchestration platform for the development, production, and observation of data assets.
datafluent_pg - Build a better understanding of your data in PostgreSQL.
argo-navis - Argo Navis repository for research, docs and misc items
streamlit - Streamlit — A faster way to build and share data apps.