datacompy VS data-diff

Compare datacompy vs data-diff and see what their differences are.

             datacompy           data-diff
Mentions     4                   20
Stars        382                 2,842
Growth       8.9%                3.0%
Activity     7.4                 9.4
Last commit  5 days ago          11 days ago
Language     Python              Python
License      Apache License 2.0  MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

datacompy

Posts with mentions or reviews of datacompy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-26.
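
For context on what the library itself does (the stats above don't say): datacompy joins two pandas or Spark DataFrames on key columns and reports the rows and values that don't match. Below is a minimal sketch, assuming datacompy's documented Compare class and report() method; the DataFrames and join column are made-up examples.

    # Minimal datacompy sketch: compare two DataFrames on a join key.
    # The data and column names are made-up examples.
    import pandas as pd
    import datacompy

    df1 = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
    df2 = pd.DataFrame({"id": [1, 2, 4], "amount": [10.0, 25.0, 40.0]})

    compare = datacompy.Compare(df1, df2, join_columns="id")

    # report() returns a human-readable summary: matched rows, rows present
    # in only one DataFrame, and columns whose values differ.
    print(compare.report())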

data-diff

Posts with mentions or reviews of data-diff. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-26.
  • How to Check 2 SQL Tables Are the Same
    8 projects | news.ycombinator.com | 26 Jul 2023
    If the issue happens a lot, there is also: https://github.com/datafold/data-diff

    That is a nice tool to do it cross-database as well.

    I think it's based on a checksum method.

  • Oops, I wrote yet another SQLAlchemy alternative (looking for contributors!)
    4 projects | /r/pythoncoding | 8 May 2023
    First, let me introduce myself. My name is Erez. You may know some of the Python libraries I wrote in the past: Lark, Preql and Data-diff.
  • Looking for Unit Testing framework in Database Migration Process
    3 projects | /r/dataengineering | 23 Mar 2023
    https://github.com/datafold/data-diff might be worth a look
  • Ask HN: How do you test SQL?
    18 projects | news.ycombinator.com | 31 Jan 2023
    I did data engineering for 6 years and am building a company to automate SQL validation for dbt users.

    First, by “testing SQL pipelines”, I assume you mean testing changes to SQL code as part of the development workflow? (vs. monitoring pipelines in production for failures / anomalies).

    If so:

    1 – assertions. dbt comes with a solid built-in testing framework [1] for expressing assertions such as “this column should have values in the list [A,B,C]”, as well as checking referential integrity, uniqueness, nulls, etc. There are more advanced packages on top of dbt tests [2]. The problem with assertion testing in general, though, is that for a moderately complex data pipeline it’s infeasible to achieve test coverage of most possible failure scenarios.

    2 – data diff: for every change to SQL, know exactly how the code change affects the output data by comparing the data in dev/staging (built off the dev branch code) with the data in production (built off the main branch). We built an open-source tool for that: https://github.com/datafold/data-diff, and we are adding an integration with dbt soon, which will make diffing as part of the dbt development workflow one command away [2].

    We make money by selling a Cloud solution for teams that integrates data diff into GitHub/GitLab CI and automatically diffs every pull request to tell you how a change to SQL affects the target table you changed, downstream tables, and dependent BI tools (video demo: [3]).

    I’ve also written about why reliable change management is so important for data engineering and what key best practices to implement [4].

    [1] https://docs.getdbt.com/docs/build/tests

  • Data-diff v0.3: DuckDB, efficient in-database diffing and more
    1 project | news.ycombinator.com | 15 Dec 2022
    Hi HN:

    We at Datafold are excited to announce a new release of data-diff (https://github.com/datafold/data-diff), an open-source tool that efficiently compares tables within or across a wide range of SQL databases. This release includes a lot of new features, improvements and bugfixes.

    We released the first version 6 months ago because we believe that diffing data is as fundamental a capability as diffing code in data engineering workflows. Over the past few months, we have seen data-diff being adopted for a variety of use cases, such as validating migration and replication of data between databases (diffing source and target) and tracking the effects of code changes on data (diffing staging/dev and production environments).

    With this new release, data-diff is significantly faster at comparing tables within the same database, especially when there are a lot of differences between the tables. We've also added the ability to materialize the diff results into a database table, in addition to (or instead of) outputting them to stdout. We've added support for DuckDB and for diffing schemas, improved support for alphanumerics and threading, and generally improved the API, the command-line interface, and the stability of the tool.

    We believe that data-diff is a valuable addition to the open-source community, and we are committed to continuing to grow it and the community around it. We encourage you to try it out and let us know what you think!

    You can read more about data-diff on our GitHub page at the following link: https://github.com/datafold/data-diff/

    To see the list of changes for the 0.3.0 release, go here: https://github.com/datafold/data-diff/releases/tag/v0.3.0

  • data-diff VS cuallee - a user-suggested alternative
    2 projects | 30 Nov 2022
  • Compare identical tables across databases to identify data differences (Oracle 19c)
    1 project | /r/SQL | 26 Oct 2022
  • How to test Data Ingestion Pipeline
    1 project | /r/dataengineering | 26 Sep 2022
    For data mismatches, check out data-diff https://github.com/datafold/data-diff
  • Data migration - easier way to compare legacy with new environment?
    1 project | /r/dataengineering | 6 Sep 2022
  • Show HN: Open-source infra for building embedded data pipelines
    2 projects | news.ycombinator.com | 1 Sep 2022
    Looks useful! Do you have a way to validate that the data was copied correctly and entirely? If not, you might want to consider integrating data-diff for that - https://github.com/datafold/data-diff
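
The mentions above describe data-diff's two main uses: validating that data was copied or replicated correctly between databases, and checking how a code change affects data between dev/staging and production. Below is a minimal sketch of a cross-database diff, assuming the connect_to_table and diff_tables entry points shown in the project's README; the connection URIs, table name, and key column are placeholders.

    # Minimal sketch of a cross-database row-level diff with data-diff.
    # The connection URIs, table name, and key column are placeholders.
    from data_diff import connect_to_table, diff_tables

    source = connect_to_table("postgresql://user:pass@host/db", "orders", "id")
    target = connect_to_table("snowflake://user:pass@account/db/schema", "orders", "id")

    # diff_tables yields one tuple per differing row: a "+"/"-" sign indicating
    # which side the row appears on, followed by the row's values.
    for sign, row in diff_tables(source, target):
        print(sign, row)

The command-line interface most of the posts refer to follows the same shape, taking two database URIs and two table names plus options.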

What are some alternatives?

When comparing datacompy and data-diff you can also consider the following projects:

koalas - Koalas: pandas API on Apache Spark

cuallee - Possibly the fastest DataFrame-agnostic quality check library in town.

data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

dbt-unit-testing - This dbt package contains macros to support unit testing that can be (re)used across dbt projects.

dbt-audit-helper - Useful macros when performing data audits

sqeleton

visualiza - A general-purpose dynamic data visualizer.

great_expectations - Always know what to expect from your data.

popmon - Monitor the stability of a Pandas or Spark dataframe ⚙︎

soda-core - Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io

diffable-sql

Preql - An interpreted relational query language that compiles to SQL.