data-drift vs soda-sql

| | data-drift | soda-sql |
|---|---|---|
| Mentions | 7 | 25 |
| Stars | 301 | 50 |
| Stars growth | 3.0% | - |
| Activity | 9.5 | 8.2 |
| Latest commit | 3 months ago | over 1 year ago |
| Primary language | HTML | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data-drift
-
Open-Source Observability for the Semantic Layer
Think of Datadrift as a simple & open-source Monte Carlo for the semantic layer era. The repo is at https://github.com/data-drift/data-drift
Datadrift started as an internal tool built at our former company, a large European B2B Fintech. We had data reliability challenges impacting key metrics used for financial and regulatory reporting.
However, when we tried existing data quality tools, we were always frustrated. They provide row-level static testing (e.g. uniqueness or nullness checks), which does not address time-varying metrics like revenue. And commercial observability solutions cost many thousands of dollars a month and bring compliance and security overhead.
We designed Datadrift to solve these problems. Datadrift works by simply adding a monitor where your metric is computed. It then understands how your metric is computed and which upstream tables it depends on. When an issue occurs, it pinpoints exactly which rows were updated and introduced the change.
You can also set up alerting and customise it. For example, you can decide to open a GitHub issue and assign it to the analyst owning the revenue metric when a +10% change is detected. We tried to make it easy to customise and developer-friendly.
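To make the alerting rule concrete, here is a minimal sketch in plain Python of the "+10% change" trigger described above. This is illustrative only and not Datadrift's actual API; the function names `percent_change` and `should_alert` are invented for the example.

```python
# Hypothetical sketch (not Datadrift's actual API): compute a metric's
# drift between two snapshots and apply the +10% alerting rule.

def percent_change(previous: float, current: float) -> float:
    """Relative change of a metric between two snapshots, as a percentage."""
    if previous == 0:
        raise ValueError("previous value must be non-zero")
    return (current - previous) / abs(previous) * 100.0

def should_alert(previous: float, current: float, threshold_pct: float = 10.0) -> bool:
    """True when the metric moved by more than the configured threshold."""
    return abs(percent_change(previous, current)) > threshold_pct

# Example: a revenue metric restated from 100k to 112k is a +12% drift,
# so the alert fires; a +4% restatement stays below the threshold.
print(should_alert(100_000, 112_000))  # True
print(should_alert(100_000, 104_000))  # False
```

In a real setup the two snapshots would come from successive runs of the monitored metric query, and the alert action (e.g. opening a GitHub issue) would be wired to the `True` branch.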
We are thinking of adding features around root-cause-analysis automation and issue-pattern analysis to help data teams improve metric quality over time. We'd love to hear your feature requests.
Datadrift is built with Python and Go, and licensed under GPL. Our docs are here: https://github.com/data-drift/data-drift?tab=readme-ov-file#...
Dev setup and demo: https://app.claap.io/sammyt/drift-db-demo-a18-c-ApwBh9kt4p-0...
We’re very eager to get your feedback!
-
Would you learn Go to contribute to an open-source project? Or should I stick to Python?
I have already started working on it. I started in Go for some parts, but I needed Python to deploy a PyPI lib. Now it's hybrid, and I prefer working with Go 😬 but the most rational thinking leads to Python.
-
Ask HN: Dear startup founders, what have you developed in-house?
We used static testing frameworks like Great Expectations, but that was not enough. We did not have the budget for the big data observability players like Monte Carlo, so we kept it simple.
Repo if interested: https://github.com/data-drift/data-drift
(Disclaimer: I am focusing full time on this project to see if it's an interesting business opportunity. It's 100% open-source -- feedback welcome!)
-
Show HN: Lineage X Snapshot Tooling
https://app.data-drift.io/42527392/Lucasdvrs/dbt-datagit/ove...
You can "technically" install it yourself, but tbh our focus is on the features, not adoption. If you are interested, it takes roughly an hour to configure (choose the data you want to observe, run a Python function, install a GitHub app, add a configuration file), so contact us.
The repo: https://github.com/data-drift/data-drift
Roast me
- Non-moving data is a journey
- “Non moving data” is like “Bug free”, it's a lie
soda-sql
-
Data Quality - Great Expectations for Data Engineers
I might be a bit biased, but that was my opinion even before I started contributing to Soda SQL.
- dbt vs R/Python for transformation
-
SodaCL - preview of a new "data reliability as code" language
I'm one of the developers of the open-source soda-sql data quality monitoring library. Over the past year we got some incredible feedback from our users, and based on that we started working on a new DSL for data reliability as code that we are calling SodaCL.
-
How do you test your pipelines?
You can also use soda-sql to do checks on your warehouses separately. Both Soda SQL and Soda Spark are OSS/Apache licensed.
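To show what "checks on your warehouses" means in practice, here is an illustrative sketch of the kind of SQL-level checks a tool like Soda SQL automates, written by hand against an in-memory SQLite database standing in for a warehouse. The table, column, and check names are made up for the example; this is not soda-sql's own API.

```python
# Illustrative only: hand-rolled versions of the declarative checks
# (e.g. `row_count > 0`, `missing_count == 0`) that a data quality tool
# would run for you. The `orders` table is invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 9.99), (2, 24.50), (3, None)])

# Metric queries: each check boils down to a small SQL aggregate.
row_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
missing_amount = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount IS NULL").fetchone()[0]

# Evaluate the checks; the NULL amount makes the second one fail.
checks = {
    "row_count > 0": row_count > 0,
    "missing_count(amount) == 0": missing_amount == 0,
}
print(checks)  # {'row_count > 0': True, 'missing_count(amount) == 0': False}
```

The point of a library here is that you declare the checks once (per table, per column) and it generates and schedules these metric queries for you.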
-
Being constantly shut down by more senior team members when I mention adding some QA in our work
As many have said, there may be a business side of things to deliver. Somebody above promised delivery with tight deadlines. Trust me, I am not a fan, but this is how the world works, and it sucks. I would say that in your free time you should explore tools like Great Expectations (https://greatexpectations.io/) or soda-sql (https://github.com/sodadata/soda-sql), which are modern ways of testing, as part of your learning curve.
- Soda
- How heavily do you use Great Expectations?
-
What are some exciting new tools/libraries in 2021?
soda-sql: a really cool library to automate data quality checks on SQL tables
-
How do I incorporate testing after the fact?
Look at SodaSQL. It's more enterprise-focused than Great Expectations, and you can pipe results to a database for downstream actions and analysis.
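The "pipe results to a database" idea above can be sketched in plain Python: persist each check's outcome into a results table so downstream jobs can query for failures. The schema and names here are invented for illustration and are not SodaSQL's own storage format.

```python
# Sketch of storing check outcomes for downstream analysis. The
# `check_results` schema is hypothetical, not SodaSQL's real format.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE check_results (
    run_at TEXT, table_name TEXT, check_name TEXT, passed INTEGER)""")

# Pretend these came out of a scan run against the warehouse.
results = [("orders", "row_count > 0", True),
           ("orders", "missing_count(amount) == 0", False)]
now = datetime.now(timezone.utc).isoformat()
conn.executemany(
    "INSERT INTO check_results VALUES (?, ?, ?, ?)",
    [(now, t, c, int(p)) for t, c, p in results])

# A downstream job (alerting, dashboard) only needs the failures.
failures = conn.execute(
    "SELECT check_name FROM check_results WHERE passed = 0").fetchall()
print(failures)  # [('missing_count(amount) == 0',)]
```

Keeping a timestamped history like this is what enables the "downstream actions and analysis" mentioned above, e.g. trending failure rates per table over time.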
-
Data Testing Tools, Pytest vs Great Expectations vs Soda vs Deequ
Certainly! It’s not requested that much 😊 but please add an issue on GitHub. I would love to add at least experimental support.
What are some alternatives?
lakeFS - Data version control for your data lake | Git for data
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
soda-core - ⚡ Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io
pandera - A light-weight, flexible, and expressive statistical data testing library
lightdash - Self-serve BI to 10x your data team ⚡️
sqlfluff - A modular SQL linter and auto-formatter with support for multiple dialects and templated code.
tellery - Tellery lets you build metrics using SQL and bring them to your team. As easy as using a document. As powerful as a data modeling tool.
dbt-sessionization - Using DBT for Creating Session Abstractions on RudderStack - an open-source, warehouse-first customer data pipeline and Segment alternative.
OpenMetadata - Open Standard for Metadata. A Single place to Discover, Collaborate and Get your data right.
re_data - fix data issues before your users & CEO would discover them 😊
fullnamematchscore-go - Generates a match score of two person names from 0-100, where 100 is the highest, on how closely two individual full names match. The scoring is based on a series of tests, algorithms, AI, and an ever-growing body of Machine Learning-based generated knowledge
trino_data_mesh - Proof of concept on how to gain insights with Trino across different databases from a distributed data mesh