| | data-diff | prism |
|---|---|---|
| Mentions | 21 | 7 |
| Stars | 2,899 | 85 |
| Growth | - | - |
| Activity | 9.4 | 5.4 |
| Latest commit | 11 months ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data-diff
-
What XOR is and why it's useful
As a data engineer, I'm regularly fighting:
- "these two databases have different SQL dialects"
- "did we miss a few rows due to poor transaction-isolation when trying to query recently changed rows on the upstream database"
- "is there some checksum of a region of cells that accepts any arrangement of rows and columns that doesn't require me to think about ordering?"
...I've been toying with finding a way to serialize everything consistently into something that can be XOR'd, then comparing the XOR output for two tables in two different databases that should be identical, without having to do some giant order-by comparison.
Basically, Datafold's data-diff, but in a way that could plausibly be home-rolled for on-premise applications without becoming a total maintenance nightmare.
https://github.com/datafold/data-diff
Don't have anything working yet, but it just seems like one could at least xor a bunch of integers and get something useful... Somehow.
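The "XOR a bunch of integers" idea does work, because XOR is commutative and associative: the fingerprint of a set of rows is independent of row order. A minimal sketch, assuming both databases serialize rows identically (`repr()` stands in for a real canonical serialization, which is the hard part in practice):

```python
import hashlib

def table_fingerprint(rows):
    """XOR the hashes of individually serialized rows.

    XOR is commutative and associative, so the result does not
    depend on row order -- no ORDER BY needed on either side.
    """
    acc = 0
    for row in rows:
        # Serialize each row the same way on both databases;
        # repr() is a placeholder for a real canonical serialization.
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

# Same rows in a different order produce the same fingerprint.
a = [(1, "alice"), (2, "bob")]
b = [(2, "bob"), (1, "alice")]
assert table_fingerprint(a) == table_fingerprint(b)
assert table_fingerprint(a) != table_fingerprint([(1, "alice")])
```

One caveat: XOR cancels pairs, so a row that is duplicated an even number of times in one table is invisible to this fingerprint; a sum of hashes modulo 2^256 avoids that at slightly higher cost.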
-
How to Check 2 SQL Tables Are the Same
If this comes up often, there is also https://github.com/datafold/data-diff.
It's a nice tool for doing this across databases as well.
I think it's based on a checksum method.
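The checksum approach is a divide-and-conquer: checksum whole key ranges on both sides, and only recurse into ranges whose checksums disagree, so identical regions are never fetched row by row. A rough illustration of that idea (plain Python lists stand in for SQL queries; the function names here are mine, not data-diff's API):

```python
import hashlib

def checksum(rows):
    """Order-dependent checksum of a sorted run of rows."""
    h = hashlib.md5()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

def diff_segments(a, b, lo, hi, min_size=2):
    """Recursively narrow down differing key ranges.

    Compares checksums over the key range [lo, hi); only ranges whose
    checksums differ get split and examined further.
    """
    seg_a = [r for r in a if lo <= r[0] < hi]
    seg_b = [r for r in b if lo <= r[0] < hi]
    if checksum(seg_a) == checksum(seg_b):
        return []                                  # identical segment: skip it
    if hi - lo <= min_size:
        return sorted(set(seg_a) ^ set(seg_b))     # small enough: diff row by row
    mid = (lo + hi) // 2
    return diff_segments(a, b, lo, mid) + diff_segments(a, b, mid, hi)

a = [(i, f"name-{i}") for i in range(8)]
b = [(i, f"name-{i}") for i in range(8)]
b[5] = (5, "changed")
print(diff_segments(a, b, 0, 8))   # -> [(5, 'changed'), (5, 'name-5')]
```

In a real cross-database diff, each `checksum` call becomes one aggregate query pushed down to the database, which is what keeps the network traffic proportional to the number of differences rather than the table size.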
-
Oops, I wrote yet another SQLAlchemy alternative (looking for contributors!)
First, let me introduce myself. My name is Erez. You may know some of the Python libraries I wrote in the past: Lark, Preql and Data-diff.
-
Looking for Unit Testing framework in Database Migration Process
https://github.com/datafold/data-diff might be worth a look
-
Ask HN: How do you test SQL?
I did data engineering for 6 years and am building a company to automate SQL validation for dbt users.
First, by “testing SQL pipelines”, I assume you mean testing changes to SQL code as part of the development workflow? (vs. monitoring pipelines in production for failures / anomalies).
If so:
1 – assertions. dbt comes with a solid built-in testing framework [1] for expressing assertions such as “this column should have values in the list [A,B,C]”, as well as checking referential integrity, uniqueness, nulls, etc. There are more advanced packages built on top of dbt tests [2]. The problem with assertion testing in general, though, is that for a moderately complex data pipeline, it’s infeasible to achieve test coverage that would handle most possible failure scenarios.
2 – data diff: for every change to SQL, know exactly how the code change affects the output data by comparing the data in dev/staging (built off the dev branch code) with the data in production (built off the main branch). We built an open-source tool for that: https://github.com/datafold/data-diff, and we are adding an integration with dbt soon which will make diffing as part of dbt development workflow one command away [2]
We make money by selling a Cloud solution for teams that integrates data diff into Github/Gitlab CI and automatically diffs every pull request to tell you how a change to SQL affects the target table you changed, downstream tables and dependent BI tools (video demo: [3])
I’ve also written about why reliable change management is so important for data engineering and what the key best practices to implement are [4]
[1] https://docs.getdbt.com/docs/build/tests
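For reference, the dbt assertions mentioned in point 1 are declared in YAML alongside the model. A minimal sketch (the model and column names here are made up for illustration):

```yaml
# models/schema.yml
version: 2

models:
  - name: orders
    columns:
      - name: status
        tests:
          # fail if status ever takes a value outside this list
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
      - name: customer_id
        tests:
          - not_null
          - unique
          # referential integrity: every customer_id must exist in customers.id
          - relationships:
              to: ref('customers')
              field: id
```

Running `dbt test` compiles each of these into a SQL query that returns the offending rows, and the test fails if any are returned.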
-
Data-diff v0.3: DuckDB, efficient in-database diffing and more
Hi HN:
We at Datafold are excited to announce a new release of data-diff (https://github.com/datafold/data-diff), an open-source tool that efficiently compares tables within or across a wide range of SQL databases. This release includes a lot of new features, improvements and bugfixes.
We released the first version 6 months ago because we believe that diffing data is as fundamental a capability in data engineering workflows as diffing code. Over the past few months, we have seen data-diff adopted for a variety of use-cases, such as validating migration and replication of data between databases (diffing source and target) and tracking the effects of code changes on data (diffing staging/dev and production environments).
With this new release, data-diff is significantly faster at comparing tables within the same database, especially when there are a lot of differences between the tables. We've also added the ability to materialize the diff results into a database table, in addition to (or instead of) outputting them to stdout. We've added support for DuckDB and for diffing schemas, improved support for alphanumeric types and threading, and generally improved the API, the command-line interface, and the stability of the tool.
We believe that data-diff is a valuable addition to the open source community, and we are committed to continue growing it and the community around it. We encourage you to try it out and let us know what you think!
You can read more about data-diff on our GitHub page at the following link: https://github.com/datafold/data-diff/
To see the list of changes for the 0.3.0 release, go here: https://github.com/datafold/data-diff/releases/tag/v0.3.0
-
data-diff VS cuallee - a user suggested alternative
2 projects | 30 Nov 2022
- Compare identical tables across databases to identify data differences (Oracle 19c)
-
How to test Data Ingestion Pipeline
For data mismatches, check out data-diff https://github.com/datafold/data-diff
- Data migration - easier way to compare legacy with new environment?
prism
- Prism: the easiest way to create robust data workflows. Accessible via CLI
- Show HN: Prism – a framework for creating robust data science workflows
- Show HN: Prism – Data Orchestration in Python
-
Introducing Prism: A Novel, Open-Source Data Orchestration Software. Feedback needed!
🔗 Website: https://runprism.com/
By joining our Alpha testing phase, you have the unique opportunity to be among the first users to experience Prism in action. Your invaluable feedback will directly impact the development of this platform, helping us make it even better, more stable, and tailored to your needs.
- Visit our website https://runprism.com to learn more about the platform and its features.
- Check out our documentation at https://docs.runprism.com to get started right away!
- Access the GitHub repository https://github.com/runprism/prism to view the source code, report issues, and contribute to the project.
Try out Prism in your own workflow environment and let us know what you think! We highly encourage you to share your thoughts, suggestions, and bug reports with us. Feel free to post your feedback directly in this thread, or if you prefer, you can raise issues on GitHub. Your input is invaluable to us, and together, we can shape Prism into the go-to tool for data workflow orchestration.
- Prism - a lightweight, yet powerful data orchestration platform in Python. Accessible via CLI
What are some alternatives?
datacompy - Pandas, Polars, Spark, and Snowpark DataFrame comparison for humans and more!
multiwoven - 🔥🔥🔥 Open source composable CDP - alternative to hightouch and census.
cuallee - Possibly the fastest DataFrame-agnostic quality check library in town.
paradedb - ParadeDB is a modern Elasticsearch alternative built on Postgres. Built for real-time, update-heavy workloads.
soda-core - :zap: Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io
workshop-realtime-data-pipelines - You will inspect and run a sample architecture making use of Apache Pulsar™ and Pulsar Functions for real-time, event-streaming-based data ingestion, cleaning and processing.