data-diff VS orchest

Compare data-diff vs orchest and see what their differences are.

Metric          data-diff      orchest
Mentions        20             44
Stars           2,842          4,020
Growth          3.0%           0.2%
Activity        9.4            4.5
Latest commit   14 days ago    11 months ago
Language        Python         TypeScript
License         MIT License    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

data-diff

Posts with mentions or reviews of data-diff. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-26.
  • How to Check 2 SQL Tables Are the Same
    8 projects | news.ycombinator.com | 26 Jul 2023
    If the issue happens a lot, there is also: https://github.com/datafold/data-diff

    That is a nice tool for doing it cross-database as well.

    I think it's based on a checksum method.
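The checksum idea mentioned in that comment can be sketched with stdlib tools. The snippet below uses two in-memory sqlite3 databases as stand-ins for real engines, and the table, columns, and data are invented for illustration; real data-diff uses per-segment checksums and bisection rather than hashing every row client-side.

```python
# Minimal sketch of checksum-based table comparison: fold every row, ordered
# by primary key, into one digest per table; equal digests mean the tables
# match (with overwhelming probability), without shipping rows across the wire.
import hashlib
import sqlite3

def table_checksum(conn, table):
    """Combine an MD5 over all rows, ordered by primary key, into one digest."""
    digest = hashlib.md5()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY id"):
        digest.update(repr(row).encode())
    return digest.hexdigest()

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "a@x.com"), (2, "b@x.com")])

# Introduce a single divergent row on the destination side.
dst.execute("UPDATE users SET email = 'changed@x.com' WHERE id = 2")

print(table_checksum(src, "users") == table_checksum(dst, "users"))  # False
```

In practice the two connections would point at different databases (say, Oracle and Postgres), which is why a hash comparison beats row-by-row transfer.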

  • Oops, I wrote yet another SQLAlchemy alternative (looking for contributors!)
    4 projects | /r/pythoncoding | 8 May 2023
    First, let me introduce myself. My name is Erez. You may know some of the Python libraries I wrote in the past: Lark, Preql and Data-diff.
  • Looking for Unit Testing framework in Database Migration Process
    3 projects | /r/dataengineering | 23 Mar 2023
    https://github.com/datafold/data-diff might be worth a look
  • Ask HN: How do you test SQL?
    18 projects | news.ycombinator.com | 31 Jan 2023
    I did data engineering for 6 years and am building a company to automate SQL validation for dbt users.

    First, by “testing SQL pipelines”, I assume you mean testing changes to SQL code as part of the development workflow? (vs. monitoring pipelines in production for failures / anomalies).

    If so:

    1 – assertions. dbt comes with a solid built-in testing framework [1] for expressing assertions such as “this column should have values in the list [A,B,C]”, as well as checking referential integrity, uniqueness, nulls, etc. There are more advanced packages built on top of dbt tests [2]. The problem with assertion testing in general, though, is that for a moderately complex data pipeline, it’s infeasible to achieve test coverage that would cover most possible failure scenarios.

    2 – data diff: for every change to SQL, know exactly how the code change affects the output data by comparing the data in dev/staging (built off the dev branch code) with the data in production (built off the main branch). We built an open-source tool for that: https://github.com/datafold/data-diff, and we are adding an integration with dbt soon which will make diffing as part of the dbt development workflow one command away [2].

    We make money by selling a Cloud solution for teams that integrates data diff into GitHub/GitLab CI and automatically diffs every pull request to tell you how a change to SQL affects the target table you changed, downstream tables, and dependent BI tools (video demo: [3]).

    I’ve also written about why reliable change management is so important for data engineering, and about the key best practices to implement [4].

    [1] https://docs.getdbt.com/docs/build/tests
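The dev-vs-prod diffing workflow described above can be sketched with the stdlib. The table names and data below are invented; real data-diff does this efficiently inside the database (or via checksums across databases) rather than pulling rows into Python.

```python
# Sketch of the "data diff" idea: compare the table built from a dev branch
# against the production table and classify every primary key as removed,
# added, or changed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prod_orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE dev_orders  (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO prod_orders VALUES (?, ?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])
conn.executemany("INSERT INTO dev_orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.0), (4, 40.0)])

prod = dict(conn.execute("SELECT id, total FROM prod_orders"))
dev = dict(conn.execute("SELECT id, total FROM dev_orders"))

removed = sorted(prod.keys() - dev.keys())  # keys only in production
added = sorted(dev.keys() - prod.keys())    # keys only in dev
changed = sorted(k for k in prod.keys() & dev.keys() if prod[k] != dev[k])

print(removed, added, changed)  # [3] [4] [2]
```

A report like this, surfaced on every pull request, is essentially what the CI integration described above automates.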

  • Data-diff v0.3: DuckDB, efficient in-database diffing and more
    1 project | news.ycombinator.com | 15 Dec 2022
    Hi HN:

    We at Datafold are excited to announce a new release of data-diff (https://github.com/datafold/data-diff), an open-source tool that efficiently compares tables within or across a wide range of SQL databases. This release includes a lot of new features, improvements and bugfixes.

    We released the first version 6 months ago because we believe that diffing data is as fundamental a capability as diffing code in data engineering workflows. Over the past few months, we have seen data-diff being adopted for a variety of use-cases, such as validating migration and replication of data between databases (diffing source and target) and tracking the effects of code changes on data (diffing staging/dev and production environments).

    With this new release, data-diff is significantly faster at comparing tables within the same database, especially when there are a lot of differences between the tables. We've also added the ability to materialize the diff results into a database table, in addition to (or instead of) outputting them to stdout. We've added support for DuckDB and for diffing schemas, improved support for alphanumerics and threading, and generally improved the API, the command-line interface, and the stability of the tool.

    We believe that data-diff is a valuable addition to the open source community, and we are committed to continuing to grow it and the community around it. We encourage you to try it out and let us know what you think!

    You can read more about data-diff on our GitHub page at the following link: https://github.com/datafold/data-diff/

    To see the list of changes for the 0.3.0 release, go here: https://github.com/datafold/data-diff/releases/tag/v0.3.0
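The "materialize the diff results into a database table" feature from the release notes can be illustrated in pure SQL via sqlite3. The schema and the +/- sign convention below are assumptions for illustration, not data-diff's actual output format.

```python
# Sketch of materializing a diff into a results table instead of printing it:
# '-' marks keys present only in table a, '+' keys present only in table b.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY);
CREATE TABLE b (id INTEGER PRIMARY KEY);
INSERT INTO a VALUES (1), (2), (3);
INSERT INTO b VALUES (2), (3), (4);
CREATE TABLE diff_results (sign TEXT, id INTEGER);
""")
conn.execute("""
INSERT INTO diff_results
SELECT '-', id FROM a WHERE id NOT IN (SELECT id FROM b)
UNION ALL
SELECT '+', id FROM b WHERE id NOT IN (SELECT id FROM a)
""")

print(conn.execute("SELECT sign, id FROM diff_results ORDER BY id").fetchall())
# [('-', 1), ('+', 4)]
```

Persisting the diff this way lets downstream jobs or dashboards query the discrepancies instead of scraping stdout.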

  • data-diff VS cuallee - a user suggested alternative
    2 projects | 30 Nov 2022
  • Compare identical tables across databases to identify data differences (Oracle 19c)
    1 project | /r/SQL | 26 Oct 2022
  • How to test Data Ingestion Pipeline
    1 project | /r/dataengineering | 26 Sep 2022
    For data mismatches, check out data-diff https://github.com/datafold/data-diff
  • Data migration - easier way to compare legacy with new environment?
    1 project | /r/dataengineering | 6 Sep 2022
  • Show HN: Open-source infra for building embedded data pipelines
    2 projects | news.ycombinator.com | 1 Sep 2022
    Looks useful! Do you have a way to validate that the data was copied correctly and entirely? If not, you might want to consider integrating data-diff for that - https://github.com/datafold/data-diff

orchest

Posts with mentions or reviews of orchest. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-06.
  • Decent low code options for orchestration and building data flows?
    1 project | /r/dataengineering | 23 Dec 2022
    You can check out our OSS https://github.com/orchest/orchest
  • Build ML workflows with Jupyter notebooks
    1 project | /r/programming | 23 Dec 2022
  • Building container images in Kubernetes, how would you approach it?
    2 projects | /r/kubernetes | 6 Dec 2022
    The code example is part of our ELT/data pipeline tool called Orchest: https://github.com/orchest/orchest/
  • Launch HN: Patterns (YC S21) – A much faster way to build and deploy data apps
    6 projects | news.ycombinator.com | 30 Nov 2022
    First want to say congrats to the Patterns team for creating a gorgeous looking tool. Very minimal and approachable. Massive kudos!

    Disclaimer: we're building something very similar and I'm curious about a couple of things.

    One of the questions our users have asked us often is how to minimize the dependence on "product specific" components/nodes/steps. For example, if you write CI for GitHub Actions you may use a bunch of GitHub Action references.

    Looking at the `graph.yml` in some of the examples you shared, you use a similar approach (e.g. patterns/openai-completion@v4). That means that whenever you depend on such components your automation/data pipeline becomes more tied to the specific tool (GitHub Actions/Patterns), effectively locking in users.

    How are you helping users feel comfortable with that problem (I don't want to invest in something that's not portable)? It's something we've struggled with ourselves as we're expanding the "out of the box" capabilities you get.

    Furthermore, would have loved to see this as an open source project. But I guess the second best thing to open source is some open source contributions and `dcp` and `common-model` look quite interesting!

    For those who are curious, I'm one of the authors of https://github.com/orchest/orchest

  • Argo became a graduated CNCF project
    3 projects | /r/kubernetes | 27 Nov 2022
    Haven't tried it. In its favor, Argo is vendor neutral and is really easy to set up in a local k8s environment like Docker Desktop or minikube. If you already use k8s for configuration, service discovery, secret management, etc, it's dead simple to set up and use (avoiding having to learn a whole new workflow configuration language in addition to k8s). The big downside is that it doesn't have a visual DAG editor (although that might be a positive for engineers having to fix workflows written by non-programmers), but the relatively bare-metal nature of Argo means that it's fairly easy to use it as an underlying engine for a more opinionated or lower-code framework (orchest is a notable one out now).
  • Ideas for infrastructure and tooling to use for frequent model retraining?
    1 project | /r/mlops | 9 Sep 2022
  • Looking for a mentor in MLOps. I am a lead developer.
    1 project | /r/mlops | 25 Aug 2022
    If you’d like to try something for you data workflows that’s vendor agnostic (k8s based) and open source you can check out our project: https://github.com/orchest/orchest
  • Is there a good way to trigger data pipelines by event instead of cron?
    1 project | /r/dataengineering | 23 Aug 2022
    You can find it here: https://github.com/orchest/orchest Convenience install script: https://github.com/orchest/orchest#installation
  • How do you deal with parallelising parts of an ML pipeline especially on Python?
    5 projects | /r/mlops | 12 Aug 2022
    We automatically provide container level parallelism in Orchest: https://github.com/orchest/orchest
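The container-level parallelism mentioned above, where pipeline steps with no data dependency between them run concurrently, can be illustrated with a thread pool. This is an analogy only, not Orchest's API: Orchest schedules each step in its own container, while this sketch uses threads and an invented `preprocess` step.

```python
# Rough analogy for step-level parallelism: independent pipeline steps
# fan out concurrently instead of running one after another.
from concurrent.futures import ThreadPoolExecutor

def preprocess(shard: int) -> int:
    # stand-in for one containerized step processing one data shard
    return shard * shard

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(preprocess, range(4)))

print(results)  # [0, 1, 4, 9]
```

In a real orchestrator the fan-out is derived from the pipeline DAG: any steps whose inputs are ready are launched in parallel.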
  • Launch HN: Sematic (YC S22) – Open-source framework to build ML pipelines faster
    1 project | news.ycombinator.com | 10 Aug 2022
    For people in this thread interested in what this tool is an alternative to: Airflow, Luigi, Kubeflow, Kedro, Flyte, Metaflow, Sagemaker Pipelines, GCP Vertex Workbench, Azure Data Factory, Azure ML, Dagster, DVC, ClearML, Prefect, Pachyderm, and Orchest.

    Disclaimer: author of Orchest https://github.com/orchest/orchest

What are some alternatives?

When comparing data-diff and orchest you can also consider the following projects:

datacompy - Pandas and Spark DataFrame comparison for humans and more!

docker-airflow - Docker Apache Airflow

cuallee - Possibly the fastest DataFrame-agnostic quality check library in town.

hookdeck-cli - Manage your Hookdeck workspaces, connections, transformations, filters, and more with the Hookdeck CLI

dbt-unit-testing - This dbt package contains macros to support unit testing that can be (re)used across dbt projects.

ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️

sqeleton

n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.

great_expectations - Always know what to expect from your data.

label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format

soda-core - :zap: Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io

Node RED - Low-code programming for event-driven applications