Kedro vs grouparoo

Compare Kedro and grouparoo to see how they differ.

Kedro

Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular. (by kedro-org)
                  Kedro               grouparoo
Mentions          29                  27
Stars             9,341               607
Growth            1.3%                -
Activity          9.7                 9.9
Latest commit     2 days ago          about 2 years ago
Language          Python              JavaScript
License           Apache License 2.0  MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Kedro

Posts with mentions or reviews of Kedro. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-10.
  • Nextflow: Data-Driven Computational Pipelines
    9 projects | news.ycombinator.com | 10 Aug 2023
    Interesting, thanks for sharing. I'll definitely take a look, although at this point I am so comfortable with Snakemake that it is a bit hard to imagine what would convince me to move to another tool. But I like the idea of composable pipelines: I am building a tool (too early to share) that would allow laying Snakemake pipelines on top of each other using semi-automatic data annotations, similar to how it is done in Kedro (https://github.com/kedro-org/kedro).
  • A Polars exploration into Kedro
    6 projects | dev.to | 17 May 2023
    ```toml
    # pyproject.toml
    [project]
    dependencies = [
        "kedro @ git+https://github.com/kedro-org/kedro@3ea7231",
        "kedro-datasets[pandas.CSVDataSet,polars.CSVDataSet] @ git+https://github.com/kedro-org/kedro-plugins@3b42fae#subdirectory=kedro-datasets",
    ]
    ```
  • What are some open-source ML pipeline managers that are easy to use?
    7 projects | /r/mlops | 3 May 2023
    So there are two sides to pipeline management: the actual definition of the pipelines (in code) and how/when/where you run them. Some tools like Prefect or Airflow do both at once, but for the actual pipeline definition I'm a fan of https://kedro.org (a minimal Kedro pipeline is sketched at the end of this list). You can then use most available orchestrators to run those pipelines on whatever schedule and architecture you want.
  • Futuristic documentation systems in Python, part 1: aiming for more
    3 projects | dev.to | 14 Mar 2023
    Recently I started a position as Developer Advocate for Kedro, an opinionated data science framework, and one of the things we're doing is exploring what are the best open source tools we can use to create our documentation.
  • Python projects with best practices on Github?
    23 projects | /r/Python | 14 Feb 2023
    You can also check out Kedro, it’s like the Flask for data science projects and helps apply clean code principles to data science code.
  • What are examples of well-organized data science project that I can see on Github?
    6 projects | /r/datascience | 5 Nov 2022
  • Dabbling with Dagster vs. Airflow
    7 projects | news.ycombinator.com | 14 Sep 2022
    An often overlooked framework used by NASA among others is Kedro https://github.com/kedro-org/kedro. Kedro is probably the simplest set of abstractions for building pipelines but it doesn't attempt to kill Airflow. It even has an Airflow plugin that allows it to be used as a DSL for building Airflow pipelines or plug into whichever production orchestration system is needed.
  • What are some good DS/ML repos where I can learn about structuring a DS/ML project?
    3 projects | /r/datascience | 27 Feb 2022
    For the lazy ones out there, here's the link to their github repo.
  • Kedro – Creating reproducible, maintainable and modular data science code
    4 projects | news.ycombinator.com | 22 Jan 2022
  • [Discussion] Applied machine learning implementation debate. Is OOP approach towards data preprocessing in python an overkill?
    3 projects | /r/MachineLearning | 3 Nov 2021
    I'd focus more on understanding the issues in depth before jumping to a solution. Otherwise, you would be adding hassle with some - bluntly speaking - opinionated and inflexible boilerplate code which not many people will like using. You mention some issues: code that is non-obvious to understand, and hard to execute and replicate. Bad code that does not follow engineering best practices (ideas from SOLID etc.) does not get better if you force the author to introduce certain classes. You can suggest some basics (e.g. a common code formatter, meaningful variable names, short functions, no hard-coded values, ...), but I'm afraid you cannot educate non-engineers in a single-day workshop. I would not focus on that at first. However, there is no excuse for writing bad code and then expecting others to fix it. As you say, data engineering is part of data science skills; you are "junior" if you cannot write reproducible code.

    Being hard to execute and replicate is theoretically easy to fix. Force everyone to (at least hypothetically) submit their code into a testing environment where it will be automatically executed on a fresh machine. This means that, first, they have to specify exactly which libraries need to be installed. Second, they need to externalize all configuration - in particular data input and data output paths. Not a single value should be hard-coded! And finally, they need a *single* command which can be run to execute the whole(!) pipeline. If they fail on any of these parts, they should try again. Work that does not pass this test is considered unfinished by the author. Basically you are introducing an automated, infallible test.

    Regarding your code, I'd really not try that direction. In particular, even these few lines already look unclear and over-engineered. The csv format is hard-coded into the code; if it changes to parquet, you'd have to touch the code. The processing object has fixed data paths, for which there is no reason in a job that should take care of pure processing. Exporting data is also not something that a processing job should handle. And what if you have multiple inputs and outputs? You would not have any of these issues if you had kept to the simplest solution: a function `process(data1, data2, ...) -> result_data` where dataframes are passed in and out (sketched after this list). It would also mean zero additional libraries or boilerplate. I highly doubt that a function `main_pipe(...)` will fix the malpractices some people may commit.

    There are two small features which are useful beyond a plain function, though: automatically generating a visual DAG from the code, and quickly checking whether input requirements are satisfied before heavy code is run. You can still put any mature DAG library on top, which probably already includes experience from a lot of developers; no need to rewrite that. I'm not sure which one is best (metaflow, luigi, airflow, ... https://github.com/pditommaso/awesome-pipeline - no idea), but many come with a lot of features. If you want a bit more scaffolding to make foreign projects easier to understand, you could look at https://github.com/quantumblacklabs/kedro, but maybe that's already too much. Fix the "single command replication-from-scratch" requirement first.
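
A minimal sketch of the plain-function pattern the last comment recommends; the file, column, and function names here are illustrative, not from any real project:

```python
import pandas as pd

def process(orders: pd.DataFrame, customers: pd.DataFrame) -> pd.DataFrame:
    """Pure transformation: dataframes in, a dataframe out.

    No file formats or paths are hard-coded inside the processing step.
    """
    merged = orders.merge(customers, on="customer_id")
    return merged[merged["amount"] > 0]

if __name__ == "__main__":
    # All I/O stays at the edges: switching from CSV to Parquet later means
    # touching only these lines, never process() itself.
    orders = pd.read_csv("orders.csv")
    customers = pd.read_csv("customers.csv")
    process(orders, customers).to_csv("result.csv", index=False)
```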
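For comparison with the /r/mlops post above, here is a hedged sketch of how Kedro wraps such plain functions into a pipeline. It assumes kedro and pandas are installed; the dataset names ("raw_orders", "clean_orders", "order_summary") are illustrative and would map to entries in a project's Data Catalog:

```python
import pandas as pd
from kedro.pipeline import node, pipeline

def clean(raw: pd.DataFrame) -> pd.DataFrame:
    # Still a plain function - Kedro only wraps it.
    return raw.dropna()

def summarize(clean_df: pd.DataFrame) -> pd.DataFrame:
    return clean_df.groupby("customer_id", as_index=False)["amount"].sum()

# Each node maps function inputs/outputs onto named datasets; Kedro resolves
# the execution order (a DAG) from these names, and an orchestrator of your
# choice can then run the result.
order_pipeline = pipeline([
    node(clean, inputs="raw_orders", outputs="clean_orders"),
    node(summarize, inputs="clean_orders", outputs="order_summary"),
])
```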

grouparoo

Posts with mentions or reviews of grouparoo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-03-03.
  • Reference Data Stack for Data-Driven Startups
    8 projects | dev.to | 3 Mar 2022
    There are other tools that we will have to adopt in the future but haven't yet due to lack of necessity. Specifically, one category that is popular in modern data stacks is Reverse ETL (Hightouch, Census, or Grouparoo). We currently don't have a use case for piping data back into 3rd-party tools, but it will definitely come up in the future.
  • Data pipeline suggestions
    13 projects | /r/dataengineering | 4 Feb 2022
    Reverse ETL: Grouparoo, Castled
  • Where can I find free data engineering ( big data) projects online?
    14 projects | /r/dataengineering | 27 Jan 2022
    Ingestion / ETL: Airbyte, Singer, Jitsu
    Transformation: dbt
    Orchestration: Airflow, Dagster
    Testing: GreatExpectations
    Observability: Monosi
    Reverse ETL: Grouparoo, Castled
    Visualization: Lightdash, Superset
  • Ask HN: Who is hiring? (December 2021)
    37 projects | news.ycombinator.com | 1 Dec 2021
    Grouparoo | Remote (US) | Remote-OK | https://www.grouparoo.com

    Grouparoo is a venture-backed software company building open source data tools that make data reliable, accessible, and actionable. We’re empowering teams to make great customer experiences, driven by data. While engineering teams have gotten good at storing and generating data about their customers, it’s rare that this data is used to its full potential in external applications. Grouparoo makes these integrations easy by providing a framework for defining your customer data and reliably syncing it to external tools.

    To learn more about who we are, our engineering culture, and whether this is the right place for you, read our Key Values profile: https://www.keyvalues.com/grouparoo

    Here are our open roles:

    - Senior Backend / Lead Engineer: https://jobs.lever.co/grouparoo/6ba485d1-a5a4-41f0-9fa5-920a...

    - Developer Advocate: https://jobs.lever.co/grouparoo/5e1531b4-7ec8-4c10-8e52-fc23...

    Tech Stack: TypeScript / Javascript / Node.js, ActionHero, React + Next.js, Postgres & Redis, and a whole lot of third-party APIs!

  • Launch HN: Hightouch (YC S19) – Sync data from data warehouses to SaaS tools
    2 projects | news.ycombinator.com | 11 Nov 2021
    Congrats on the launch! Hightouch looks great and this need is real. Things seem to be going well, so I don't think I'm taking too much away by mentioning that we have been working on Grouparoo, an open source alternative that solves similar pain points.

    A few differences: a git-centric developer workflow (branches, CI, PRs, etc.), the ability to self-host, and segmentation in destinations (tagging people in Mailchimp based on rules, for example).

    https://www.grouparoo.com

  • Ask HN: Who is hiring? (August 2021)
    14 projects | news.ycombinator.com | 2 Aug 2021
    Grouparoo | Remote (US) | Remote-OK | https://www.grouparoo.com

    Grouparoo is a venture-backed software company building the open-source reverse-ETL framework that makes it easy to have meaningful, data-driven conversations with customers. Do you want to keep product data in sync with tools like Hubspot, Marketo or Zendesk? Do you want to be able to build, test, and deploy data sync code just like the rest of your tech stack? That's the kind of thing Grouparoo does.

    We started Grouparoo because we are done saying “no” to marketing teams asking for data and want to make it easy (and safe!) for everyone to use the data available at work. We are looking for a seasoned back-end engineer to join our US-based, fully remote team. The main components of our stack are Typescript/Javascript, Actionhero, Next.js, and React. Learn more about the position @ https://www.grouparoo.com/jobs and https://www.keyvalues.com/grouparoo. Check out our open-source framework (and see what you will be working on) @ https://github.com/grouparoo/grouparoo

  • Ask HN: Who is hiring? (July 2021)
    33 projects | news.ycombinator.com | 1 Jul 2021
    Grouparoo | Remote (US) | Remote-OK | https://www.grouparoo.com

    Grouparoo is a venture-backed software company building the open-source reverse-ETL framework that makes it easy to have meaningful, data-driven conversations with customers. Do you want to keep product data in sync with tools like Hubspot, Marketo or Zendesk? Do you want to be able to build, test, and deploy data sync code just like the rest of your stack? That's the kind of thing Grouparoo does.

    We started Grouparoo because we are done saying “no” to marketing teams asking for data and want to make it easy (and safe!) for everyone to use the data available at work. We are looking for two seasoned engineers to join our US-based, fully remote team. The main components of our stack are Typescript/Javascript, Actionhero, Next.js, and React. Learn more about the positions @ https://www.grouparoo.com/jobs and https://www.keyvalues.com/grouparoo. Check out our open-source framework (and see what you will be working on) @ https://github.com/grouparoo/grouparoo

    Here are our open roles:

    * Senior Backend / Founding Engineer: https://jobs.lever.co/grouparoo/6ba485d1-a5a4-41f0-9fa5-920a...

    * Senior Full Stack / Lead Engineer: https://jobs.lever.co/grouparoo/946e3407-6101-45f1-84a8-135d...

    * Founding Community Manager / Developer Advocate: https://jobs.lever.co/grouparoo/19ef1a6b-6ad9-49f6-8512-90e3...

    Tech Stack: TypeScript / Javascript / Node.js, ActionHero, React + Next.js, Postgres & Redis, and a whole lot of third-party APIs!

  • Bundling and Distributing Next.js Sites via NPM
    2 projects | dev.to | 4 Jun 2021
    The final thing we learned is that while the contents of the .next directory are needed for your visitors, not everything is needed. We saw that we were shipping 300 MB packages to NPM for our Next.js UIs. We dug into the .next folder and learned that if you opt into Webpack v5 for your Next.js site, large .next/cache/*.pack files will be created to speed up how Webpack works. This is normal behavior, but we were inadvertently publishing these large files to NPM! We added the .next/cache/* directory to our .npmignore and our build sizes went down to a more reasonable 20 MB.
  • Using Typescript to create a Robust API between your frontend and backend
    3 projects | dev.to | 19 May 2021
    The Grouparoo application is stored in a monorepo, which means that the frontend and backend code always exist side by side. We can therefore reference the API code from our frontend code and make a helper to check our response types. We don't need our API code at run time, but we can import the types from it as we develop and compile the app to Javascript. (A rough Python analogue of this dev-time-only type import is sketched at the end of this list.)
  • Deferring Side-Effects in Node.js until the End of a Transaction
    5 projects | dev.to | 17 May 2021
    Looking deeper into how cls-hooked works, we can see that it is possible to tell whether you are currently in a namespace, and to set and get values from it. Think of this like a session... but for the callback or promise your code is within! With this in mind, we can write our run method to be transaction-aware: a pattern that runs a function in-line if we aren't within a transaction but, if we are, defers it until the end. We've wrapped utilities to do this within Grouparoo's CLS module. (The same deferral pattern is sketched in Python after this list.)
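
A rough Python analogue of the dev-time-only type import described in the monorepo post above: the import below is seen by type checkers during development but never executed at run time. `backend.api` and `UserResponse` are hypothetical names, not Grouparoo's actual modules:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Visible to type checkers only; never imported at run time.
    from backend.api import UserResponse  # hypothetical backend module

def render_user(resp: UserResponse) -> str:
    # With `from __future__ import annotations`, the annotation above stays
    # a string at run time, so the backend package need not be installed.
    return f"{resp.name} <{resp.email}>"
```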
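And a sketch of the transaction-deferral pattern from the cls-hooked post, transposed to Python with `contextvars` standing in for cls-hooked; the names are illustrative, not Grouparoo's actual API:

```python
import contextvars
from contextlib import contextmanager
from typing import Callable, List, Optional

# The queue of deferred side effects for the current context,
# or None when no transaction is open.
_pending: contextvars.ContextVar[Optional[List[Callable[[], None]]]] = (
    contextvars.ContextVar("pending_side_effects", default=None)
)

def run(side_effect: Callable[[], None]) -> None:
    """Run in-line outside a transaction; defer until commit inside one."""
    queue = _pending.get()
    if queue is None:
        side_effect()
    else:
        queue.append(side_effect)

@contextmanager
def transaction():
    token = _pending.set([])
    try:
        yield
        for effect in _pending.get():  # committed: flush deferred effects
            effect()
    finally:
        _pending.reset(token)  # on failure, deferred effects are dropped

# Usage: the print fires only once the transaction block completes.
with transaction():
    run(lambda: print("sent after commit"))
```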

What are some alternatives?

When comparing Kedro and grouparoo you can also consider the following projects:

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

Dask - Parallel computing with task scheduling

cookiecutter-pytorch - A Cookiecutter template for PyTorch Deep Learning projects.

ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️

BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!

lightning-bolts - Toolbox of models, callbacks, and datasets for AI/ML researchers.

Pinball

cookiecutter-data-science - A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.

bcolz - A columnar data container that can be compressed.

label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format

flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.