sqeleton vs data-diff

| | sqeleton | data-diff |
|---|---|---|
| Mentions | 1 | 21 |
| Stars | 24 | 2,899 |
| Growth | - | - |
| Activity | 7.4 | 9.4 |
| Latest commit | 2 months ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqeleton
-
Oops, I wrote yet another SQLAlchemy alternative (looking for contributors!)
Here's the project homepage: https://github.com/erezsh/sqeleton/
data-diff
-
What XOR is and why it's useful
As a data engineer who is regularly fighting:
- "these two databases have different SQL dialects"
- "did we miss a few rows due to poor transaction isolation when trying to query recently changed rows on the upstream database?"
- "is there some checksum of a region of cells that accepts any arrangement of rows and columns and doesn't require me to think about ordering?"
...I've been toying with finding a way to serialize everything consistently into something that can be XOR'd, then comparing the XOR output for two tables in two different databases that should be identical, without having to do some giant order-by comparison.
Basically, Datafold's data-diff, but in a way that could plausibly be home-rolled for on-premise applications without being a total maintenance nightmare.
https://github.com/datafold/data-diff
I don't have anything working yet, but it seems like one could at least XOR a bunch of integers and get something useful... somehow.
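A minimal sketch of the idea in that comment, in plain Python with no database access (the helper names here are hypothetical): hash each consistently serialized row into a fixed-width integer and XOR the hashes together, so the aggregate checksum does not depend on row order or chunking.

```python
import hashlib

def row_fingerprint(row):
    """Hash one row's consistently serialized values into a 128-bit integer."""
    # Assumption: both databases serialize values identically
    # (e.g. ISO timestamps, canonical decimal formatting, NULL -> "\0").
    serialized = "\x1f".join("\0" if v is None else str(v) for v in row)
    return int.from_bytes(hashlib.md5(serialized.encode("utf-8")).digest(), "big")

def table_xor_checksum(rows):
    """XOR of all row fingerprints: independent of row order."""
    acc = 0
    for row in rows:
        acc ^= row_fingerprint(row)
    return acc

# Two tables with the same rows in a different order give the same checksum.
a = [(1, "alice", None), (2, "bob", "2023-01-01")]
b = [(2, "bob", "2023-01-01"), (1, "alice", None)]
assert table_xor_checksum(a) == table_xor_checksum(b)
```

One caveat with the XOR approach: a row that appears an even number of times cancels itself out, so it detects differing rows but not duplicated ones.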
-
How to Check 2 SQL Tables Are the Same
If the issue happens a lot, there is also: https://github.com/datafold/data-diff
It is a nice tool for doing this across databases as well.
I think it's based on a checksum method.
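For context on that "checksum method": data-diff's documentation describes a hash-based approach for cross-database comparison, checksumming matching key ranges on both sides and recursively bisecting only the ranges whose checksums disagree. The sketch below is a rough in-memory illustration of that idea (hypothetical helpers, dicts standing in for tables), not data-diff's actual implementation.

```python
import hashlib

def checksum(rows_by_key, lo, hi):
    """Combined hash of all rows whose key falls in [lo, hi)."""
    h = hashlib.md5()
    for key in sorted(k for k in rows_by_key if lo <= k < hi):
        h.update(f"{key}:{rows_by_key[key]}".encode("utf-8"))
    return h.hexdigest()

def diff_keyrange(a, b, lo, hi, min_size=4):
    """Yield keys whose rows differ, bisecting only mismatched ranges."""
    if checksum(a, lo, hi) == checksum(b, lo, hi):
        return  # whole range matches; no row-level work needed
    if hi - lo <= min_size:
        for key in range(lo, hi):  # small enough: compare rows directly
            if a.get(key) != b.get(key):
                yield key
        return
    mid = (lo + hi) // 2
    yield from diff_keyrange(a, b, lo, mid, min_size)
    yield from diff_keyrange(a, b, mid, hi, min_size)

src = {i: f"row-{i}" for i in range(100)}
dst = dict(src)
dst[42] = "row-42-changed"
del dst[7]
print(sorted(diff_keyrange(src, dst, 0, 100)))  # -> [7, 42]
```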
-
Oops, I wrote yet another SQLAlchemy alternative (looking for contributors!)
First, let me introduce myself. My name is Erez. You may know some of the Python libraries I wrote in the past: Lark, Preql and Data-diff.
-
Looking for Unit Testing framework in Database Migration Process
https://github.com/datafold/data-diff might be worth a look
-
Ask HN: How do you test SQL?
I did data engineering for 6 years and am building a company to automate SQL validation for dbt users.
First, by “testing SQL pipelines”, I assume you mean testing changes to SQL code as part of the development workflow? (vs. monitoring pipelines in production for failures / anomalies).
If so:
1 – assertions. dbt comes with a solid built-in testing framework [1] for expressing assertions such as "this column should have values in the list [A,B,C]", as well as checking referential integrity, uniqueness, nulls, etc. There are more advanced packages on top of dbt tests [2]. The problem with assertion testing in general, though, is that for a moderately complex data pipeline it's infeasible to achieve test coverage of most possible failure scenarios.
2 – data diff: for every change to SQL, know exactly how the code change affects the output data by comparing the data in dev/staging (built off the dev branch code) with the data in production (built off the main branch). We built an open-source tool for that: https://github.com/datafold/data-diff, and we are adding an integration with dbt soon which will make diffing as part of the dbt development workflow one command away [2] (see the usage sketch below).
We make money by selling a Cloud solution for teams that integrates data diff into GitHub/GitLab CI and automatically diffs every pull request to tell you how a change to SQL affects the target table you changed, downstream tables, and dependent BI tools (video demo: [3]).
I've also written about why reliable change management is so important for data engineering and what key best practices to implement [4].
[1] https://docs.getdbt.com/docs/build/tests
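As a rough illustration of the data diff workflow described in point 2: at the time, data-diff's README documented a small Python API along these lines. The exact names and signatures may differ between versions, and the connection strings, table name, and key column below are placeholders.

```python
from data_diff import connect_to_table, diff_tables

# Placeholder URIs, table name, and key column; any supported database works.
prod = connect_to_table("postgresql://user:pass@prod-host/analytics", "orders", "id")
dev = connect_to_table("postgresql://user:pass@dev-host/analytics", "orders", "id")

# diff_tables yields ('-', row) for rows present only in the first table
# and ('+', row) for rows present only in the second.
for sign, row in diff_tables(prod, dev):
    print(sign, row)
```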
-
Data-diff v0.3: DuckDB, efficient in-database diffing and more
Hi HN:
We at Datafold are excited to announce a new release of data-diff (https://github.com/datafold/data-diff), an open-source tool that efficiently compares tables within or across a wide range of SQL databases. This release includes a lot of new features, improvements and bugfixes.
We released the first version 6 months ago because we believe that diffing data is as fundamental a capability in data engineering workflows as diffing code. Over the past few months, we have seen data-diff adopted for a variety of use cases, such as validating migration and replication of data between databases (diffing source and target) and tracking the effects of code changes on data (diffing staging/dev and production environments).
With this new release, data-diff is significantly faster at comparing tables within the same database, especially when there are a lot of differences between them. We've also added the ability to materialize the diff results into a database table, in addition to (or instead of) outputting them to stdout. We've added support for DuckDB and for diffing schemas, improved support for alphanumerics and threading, and generally improved the API, the command-line interface, and the stability of the tool.
We believe that data-diff is a valuable addition to the open source community, and we are committed to continue growing it and the community around it. We encourage you to try it out and let us know what you think!
You can read more about data-diff on our GitHub page at the following link: https://github.com/datafold/data-diff/
To see the list of changes for the 0.3.0 release, go here: https://github.com/datafold/data-diff/releases/tag/v0.3.0
-
data-diff VS cuallee - a user suggested alternative
2 projects | 30 Nov 2022
- Compare identical tables across databases to identify data differences (Oracle 19c)
-
How to test Data Ingestion Pipeline
For data mismatches, check out data-diff https://github.com/datafold/data-diff
-
Data migration - easier way to compare legacy with new environment?
What are some alternatives?
Lark - Lark is a parsing toolkit for Python, built with a focus on ergonomics, performance and modularity.
datacompy - Pandas, Polars, Spark, and Snowpark DataFrame comparison for humans and more!
Preql - An interpreted relational query language that compiles to SQL.
cuallee - Possibly the fastest DataFrame-agnostic quality check library in town.
prism - Prism is the easiest way to develop, orchestrate, and execute data pipelines in Python.
soda-core - :zap: Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io