| | data-drift | re_data |
|---|---|---|
| Mentions | 7 | 15 |
| Stars | 301 | 1,527 |
| Growth | 3.0% | 0.5% |
| Activity | 9.5 | 6.6 |
| Latest commit | 3 months ago | 17 days ago |
| Language | HTML | HTML |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data-drift
-
Open-Source Observability for the Semantic Layer
Think of Datadrift as a simple & open-source Monte Carlo for the semantic layer era. The repo is at https://github.com/data-drift/data-drift
Datadrift started as an internal tool built at our former company, a large European B2B Fintech. We had data reliability challenges impacting key metrics used for financial and regulatory reporting.
However, when we tried existing data quality tools, we were always frustrated. They provide row-level static testing (e.g. uniqueness or null checks), which does not address time-varying metrics like revenue. And commercial observability solutions cost many thousands of dollars a month and bring compliance and security overhead.
We designed Datadrift to solve these problems. Datadrift works by simply adding a monitor where your metric is computed. It then understands how your metric is computed and which upstream tables it depends on. When an issue occurs, it pinpoints exactly which rows were updated and introduced the change.
You can also set up alerting and customise it. For example, you can decide to open and assign a GitHub issue to the analyst owning the revenue metric when a +10% change is detected. We tried to make it easy to customise and developer-friendly.
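To make the alert rule above concrete, here is a minimal sketch of the "+10% change opens an assigned issue" logic. The names (`check_drift`, `Alert`, the `assignee` value) are illustrative assumptions, not Datadrift's actual API; a real setup would hand the resulting alert to an issue tracker such as GitHub's REST API.

```python
# Hypothetical sketch of a drift-alert rule: flag a metric when it
# changes by more than a threshold percentage between two snapshots.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    metric: str
    change_pct: float
    assignee: str

def check_drift(metric: str, previous: float, current: float,
                threshold_pct: float = 10.0,
                assignee: str = "revenue-owner") -> Optional[Alert]:
    """Return an Alert when the metric moved more than threshold_pct."""
    if previous == 0:
        return None  # avoid division by zero; nothing to compare against
    change_pct = (current - previous) / abs(previous) * 100
    if abs(change_pct) > threshold_pct:
        # In a real pipeline this is where you would open and assign
        # a GitHub issue to the metric's owner.
        return Alert(metric=metric, change_pct=change_pct, assignee=assignee)
    return None

alert = check_drift("monthly_revenue", previous=100_000, current=112_500)
print(alert)  # Alert(metric='monthly_revenue', change_pct=12.5, assignee='revenue-owner')
```

The threshold and assignee would be per-metric configuration in practice; the point is only that the check reduces to comparing two snapshots of the same metric.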
We are thinking of adding features around root-cause-analysis automation and issue-pattern analysis to help data teams improve metric quality over time. We'd love to hear your feature requests.
Datadrift is built with Python and Go, and licensed under GPL. Our docs are here: https://github.com/data-drift/data-drift?tab=readme-ov-file#...
Dev setup and demo: https://app.claap.io/sammyt/drift-db-demo-a18-c-ApwBh9kt4p-0...
We’re very eager to get your feedback!
-
Would you learn Go to contribute to an open-source project? Or should I stick to Python?
I have already started working on it. I started in Go for some parts, but I needed Python to deploy a PyPI lib. Now it's hybrid, and I prefer working with Go 😬 but the most rational thinking leads to Python.
-
Ask HN: Dear startup founders, what have you developed in-house?
We used static testing frameworks like Great Expectations, but that was not enough. We did not have the budget for the big data-observability players like Monte Carlo, so we kept it simple.
Repo if interested: https://github.com/data-drift/data-drift
(Disclaimer: I am focusing full time on this project to see if it's an interesting business opportunity. It's 100% open-source -- feedback welcome!)
-
Show HN: Lineage X Snapshot Tooling
https://app.data-drift.io/42527392/Lucasdvrs/dbt-datagit/ove...
You can "technically" install it yourself, but tbh our focus is on the features, not on adoption. If you are interested, it takes roughly an hour to configure (choose the data you want to observe, run a Python function, install a GitHub app, add a configuration file); contact us.
The repo: https://github.com/data-drift/data-drift
Roast me
- Non-moving data is a journey
- “Non-moving data” is like “bug-free”: it's a lie
re_data
-
How to design a software for extracting and validating data in existing DB(s)
There’s also an open-source tool that I think does roughly what the OP is looking for: re_data. The source code lives here: https://github.com/re-data/re-data
-
What are the 5 hottest dbt Repositories one should star on GitHub 2022?
dbt is a software framework that sits in the middle of the ELT process. It represents the transformation layer after loading data from an original source. dbt combines SQL with software engineering principles.
Here are my top 5!
- Lightdash (https://github.com/lightdash/lightdash): Lightdash converts dbt models and makes it possible to define and easily visualize additional metrics via a visual interface.
- re_data (https://github.com/re-data/re-data): Re_data is an abstraction layer that helps users monitor dbt projects and their underlying data. For example, you get alerts when a test fails or a data anomaly occurs in a dbt project.
- evidence (https://github.com/evidence-dev/evidence): Evidence is another tool for lightweight BI reporting. With Evidence, you can build simple reports in "medium style" using SQL queries and Markdown.
- Kuwala (https://github.com/kuwala-io/kuwala): With Kuwala, a BI analyst can intuitively build advanced data workflows using a drag-and-drop interface on top of the modern data stack, without coding. Behind the scenes, dbt models are generated so that a more experienced engineer can customize the pipelines at any time.
- fal (https://github.com/fal-ai/fal): Fal helps you run Python scripts directly from a dbt project. For example, you can load dbt models directly into the Python context, which helps apply data science libraries like scikit-learn and Prophet to dbt models.
-
What are the hottest dbt Repositories you should star on Github 2022? - Here are mine.
re_data (https://github.com/re-data/re-data): Re_data is an abstraction layer that helps users monitor dbt projects and their underlying data. For example, you get alerts when a test fails or a data anomaly occurs in a dbt project, along with which underlying metric is affected. In addition, the lineage graph is intuitively displayed. Re_data is one of a few frameworks focusing on the observability aspect of lengthy pipelines in dbt (also check out OpenMetadata and Elementary).
-
What are your hottest dbt repositories in 2022 so far? Here are mine!
- re_data: Re_data is an abstraction layer that helps users monitor dbt projects and their underlying data. For example, you get alerts when a test fails or a data anomaly occurs in a dbt project.
-
Snowflake SQL AST parser?
Some things you might be interested in are re_data and Elementary Data.
-
Sentry for Data Teams
Around a year ago I launched re_data (an open-source data reliability tool) here. After some pivots, we seem to be getting traction and this is how it looks now: https://www.getre.io/. Super interested in getting your feedback and suggestions on the direction :)
-
Launch HN: Elementary (YC W22) – Open-source data observability
Nice project. At re_data, we just went over a lot of your new updates, and it seems quite a large part of your project is "inspired" by code from our library: https://github.com/re-data/re-data. Even parts we are not especially proud of ;)
If you decide to copy not only ideas but a big part of the internal implementation, I think you should include that information in your LICENSE.
Cheers
- How are you guys testing your data?
-
great_expectations VS redata - a user suggested alternative
2 projects | 24 Sep 2021
It's more convenient when you are already using dbt and don't want to set up a separate workflow for testing data, when it can be done with dbt inside the data warehouse. Also, the thing re_data does well is letting you create time-based metrics about your data quality instead of just tests (a lot of the tests can be rewritten that way). That allows you to do a couple of things more than GE: you can, for example, easily visualize those metrics or look for anomalies in them. You can also compute tests much more efficiently. Research on computing metrics as a good way of doing data quality was actually done by the team behind deequ: http://www.vldb.org/pvldb/vol11/p1781-schelter.pdf I'm the author, so obviously I'm a bit biased :)
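The "metrics instead of tests" idea above can be sketched in a few lines: track a quality signal (here, daily row counts — an invented example dataset) as a time series, then flag points that deviate strongly from the rest. This is a generic leave-one-out z-score check, not re_data's actual implementation.

```python
# Flag points in a metric time series whose leave-one-out z-score
# exceeds a threshold. Excluding the candidate point from the baseline
# keeps a single large outlier from inflating the standard deviation.
from statistics import mean, stdev

def anomalies(series: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of points anomalous relative to the other points."""
    flagged = []
    for i, value in enumerate(series):
        rest = series[:i] + series[i + 1:]
        mu, sigma = mean(rest), stdev(rest)
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

daily_row_counts = [1000, 1020, 980, 1010, 995, 40, 1005]  # day 5 collapsed
print(anomalies(daily_row_counts))  # [5]
```

A static test ("row count > 0") would pass for day 5; the time-based metric catches that the count is wildly out of line with history, which is the advantage the comment above describes.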
- re_data - an open-source data quality library built on top of dbt.
What are some alternatives?
lakeFS - Data version control for your data lake | Git for data
elementary - The dbt-native data observability solution for data & analytics engineers. Monitor your data pipelines in minutes. Available as self-hosted or cloud service with premium features.
soda-core - ⚡ Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io
great_expectations - Always know what to expect from your data.
lightdash - Self-serve BI to 10x your data team ⚡️
dbt-data-reliability - dbt package that is part of Elementary, the dbt-native data observability solution for data & analytics engineers. Monitor your data pipelines in minutes. Available as self-hosted or cloud service with premium features.
tellery - Tellery lets you build metrics using SQL and bring them to your team. As easy as using a document. As powerful as a data modeling tool.
sqllineage - SQL Lineage Analysis Tool powered by Python
OpenMetadata - Open Standard for Metadata. A Single place to Discover, Collaborate and Get your data right.
soda-sql - Data profiling, testing, and monitoring for SQL accessible data.
fullnamematchscore-go - Generates a match score of two person names from 0-100, where 100 is the highest, on how closely two individual full names match. The scoring is based on a series of tests, algorithms, AI, and an ever-growing body of Machine Learning-based generated knowledge
gradio - Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!