ploomber VS Kedro

Compare ploomber vs Kedro and see what their differences are.

Kedro

Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular. (by kedro-org)
                ploomber            Kedro
Mentions        121                 33
Stars           3,557               10,276
Growth          0.2%                1.0%
Latest commit   7 months ago        8 days ago
Activity        6.4                 9.4
Language        Python              Python
License         Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ploomber

Posts with mentions or reviews of ploomber. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.
  • Show HN: JupySQL – a SQL client for Jupyter (ipython-SQL successor)
    2 projects | news.ycombinator.com | 6 Dec 2023
    - One-click sharing powered by Ploomber Cloud: https://ploomber.io

    Documentation: https://jupysql.ploomber.io

    Note that JupySQL is a fork of ipython-sql, which is no longer actively developed. Catherine, ipython-sql's creator, was kind enough to pass the project to us (check out ipython-sql's README).

    We'd love to learn what you think and what features we can ship for JupySQL to be the best SQL client! Please let us know in the comments!
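
    As a rough, minimal sketch (not taken from the post above), connecting and querying with JupySQL in a notebook might look like this; the DuckDB connection string and the query are illustrative assumptions and require the duckdb and duckdb-engine packages:

    # run in a Jupyter cell after installing jupysql (connection and query are illustrative)
    %load_ext sql
    %sql duckdb://
    %sql SELECT 42 AS answer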

  • Runme – Interactive Runbooks Built with Markdown
    7 projects | news.ycombinator.com | 24 Aug 2023
    For those who don't know, Jupyter has a bash kernel: https://github.com/takluyver/bash_kernel

    And you can run Jupyter notebooks from the CLI with Ploomber: https://github.com/ploomber/ploomber
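
    As a rough sketch (not from the post above), running a notebook pipeline through Ploomber's Python API might look like this; it assumes a pipeline.yaml in the working directory that declares the notebooks as tasks, and the same pipeline can also be run from the command line with "ploomber build":

    # a minimal sketch, assuming a pipeline.yaml that lists notebooks as tasks
    from ploomber.spec import DAGSpec

    dag = DAGSpec("pipeline.yaml").to_dag()  # load the spec into a DAG object
    dag.build()  # execute each notebook and write its declared products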

  • Rant: Jupyter notebooks are trash.
    6 projects | /r/datascience | 24 Jan 2023
    Develop notebook-based pipelines
  • Who needs MLflow when you have SQLite?
    5 projects | news.ycombinator.com | 16 Nov 2022
    Fair point. MLflow has a lot of features to cover the end-to-end dev cycle. This SQLite tracker only covers the experiment tracking part.

    We have another project to cover the orchestration/pipelines aspect: https://github.com/ploomber/ploomber and we have plans to work on the rest of the features. For now, we're focusing on those two.

  • New to large SW projects in Python, best practices to organize code
    1 project | /r/Python | 11 Nov 2022
    I recommend taking a look at the Ploomber open-source project. It helps you structure your code and parameterize it in a way that's easier to maintain and test. Our blog has lots of resources about it, from testing your code to building a data science platform on AWS.
  • A three-part series on deploying a Data Science Platform on AWS
    1 project | /r/dataengineering | 4 Nov 2022
    Developing end-to-end data science infrastructure can get complex. For example, many of us have struggled trying to integrate AWS services and deal with configuration, permissions, etc. At Ploomber, we've worked with many companies in a wide range of industries, such as energy, entertainment, computational chemistry, and genomics, so we are constantly looking for simple solutions to get them started with Data Science in the cloud.
  • Ploomber Cloud - Parametrizing and running notebooks in the cloud in parallel
    3 projects | /r/IPython | 3 Nov 2022
  • Is Colab still the place to go?
    1 project | /r/deeplearning | 2 Nov 2022
    If you like working locally with notebooks, you can run them via Ploomber's free tier, which will give you the RAM/compute you need for the bigger models. It also keeps a history of executions, so you don't need to remember what you ran an hour later!
  • Alternatives to nextflow?
    6 projects | /r/bioinformatics | 26 Oct 2022
    It really depends on your use case. I've seen a lot of tools that lock you into a certain syntax, framework, or weird language (for instance Groovy). If you'd like to use core Python or Jupyter notebooks, I'd recommend Ploomber: the community support is really strong, there's an emphasis on observability, and you can deploy it on any executor, such as Slurm, AWS Batch, or Airflow. In addition, there's free managed compute (the cloud edition) where you can run certain bioinformatics workflows like AlphaFold or CRISPResso2.
  • Saving log files
    1 project | /r/docker | 26 Oct 2022
    That's what we do for lineage with https://ploomber.io/

Kedro

Posts with mentions or reviews of Kedro. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-11-13.
  • 20 Open Source Tools I Recommend to Build, Share, and Run AI Projects
    11 projects | dev.to | 13 Nov 2024
    Kedro is an ML development framework that takes data science projects from pilot development to production by creating reproducible, maintainable, and modular data science code. To do this effectively, Kedro provides a data catalog for data handling, support for pipeline building, and a standardized project template for code maintainability and consistency. Its data catalog uses lightweight data connectors to manage and track datasets, which allows you to reuse the same pipelines across production-level code in different parts of your system.
  • Kedro – An open-source framework for data science code
    1 project | news.ycombinator.com | 17 Aug 2024
  • 10 Open Source MLOps Projects You Didn’t Know About
    12 projects | dev.to | 1 Aug 2024
    Kedro: A serious problem with machine learning projects is the complex process involved in taking models from development to production. Kedro is an open-source tool that solves this problem by employing software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
  • 25 Open Source AI Tools to Cut Your Development Time in Half
    8 projects | dev.to | 11 Jul 2024
    Kedro is an ML development framework for creating reproducible, maintainable, modular data science code. Kedro improves the AI project development experience via data abstraction and code organization. Using lightweight data connectors, it provides a centralized data catalog to manage and track datasets throughout a project. This enables data scientists to focus on building production-level code through Kedro's data pipelines, while other stakeholders can use the same pipelines in different parts of the system.
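
    To make the pipeline idea concrete, here is a minimal sketch of a Kedro node and pipeline (an editorial illustration, not from the post above); the function and the dataset names "raw_data" and "clean_data" are hypothetical and would normally resolve through entries in the project's data catalog:

    # a minimal sketch of a Kedro pipeline; dataset names are hypothetical
    from kedro.pipeline import Pipeline, node

    def preprocess(raw_df):
        # illustrative step: drop rows with missing values
        return raw_df.dropna()

    def create_pipeline(**kwargs) -> Pipeline:
        return Pipeline(
            [
                node(
                    func=preprocess,
                    inputs="raw_data",     # read via the data catalog
                    outputs="clean_data",  # written via the data catalog
                    name="preprocess_node",
                ),
            ]
        )
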
  • Nextflow: Data-Driven Computational Pipelines
    9 projects | news.ycombinator.com | 10 Aug 2023
    Interesting, thanks for sharing. I'll definitely take a look, although at this point I am so comfortable with Snakemake that it is a bit hard to imagine what would convince me to move to another tool. But I like the idea of composable pipelines: I am building a tool (too early to share) that would allow laying Snakemake pipelines on top of each other using semi-automatic data annotations, similar to how it is done in Kedro (https://github.com/kedro-org/kedro).
  • A Polars exploration into Kedro
    6 projects | dev.to | 17 May 2023
    # pyproject.toml
    [project]
    dependencies = [
        "kedro @ git+https://github.com/kedro-org/kedro@3ea7231",
        "kedro-datasets[pandas.CSVDataSet,polars.CSVDataSet] @ git+https://github.com/kedro-org/kedro-plugins@3b42fae#subdirectory=kedro-datasets",
    ]
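
    As a follow-up to the dependency pin above, loading a CSV through the Polars dataset from kedro-datasets might look roughly like this; the import path reflects the kedro-datasets API at the time of the post, and the file path is hypothetical:

    # a minimal sketch; the file path is an illustrative assumption
    from kedro_datasets.polars import CSVDataSet

    dataset = CSVDataSet(filepath="data/01_raw/example.csv")
    df = dataset.load()  # returns a polars.DataFrame
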
  • What are some open-source ML pipeline managers that are easy to use?
    7 projects | /r/mlops | 3 May 2023
    So there are two sides to pipeline management: the actual definition of the pipelines (in code) and how/when/where you run them. Some tools like Prefect or Airflow do both at once, but for the actual pipeline definition I'm a fan of https://kedro.org. You can then use most available orchestrators to run those pipelines on whatever schedule and architecture you want.
  • How do data scientists combine Kedro and Databricks?
    1 project | dev.to | 19 Apr 2023
    We have set up a milestone on GitHub so you can check in on our progress and contribute if you want to. To suggest features to us, report bugs, or just see what we're working on right now, visit the Kedro projects on GitHub.
  • How do you organize yourself during projects?
    1 project | /r/learnmachinelearning | 28 Mar 2023
    You could use a project framework like Kedro to force you to be more disciplined about how you structure your projects. I'd also recommend checking out this book: Enda Ridge - Guerrilla Analytics: A Practical Approach to Working with Data.
  • Futuristic documentation systems in Python, part 1: aiming for more
    3 projects | dev.to | 14 Mar 2023
    Recently I started a position as Developer Advocate for Kedro, an opinionated data science framework, and one of the things we're doing is exploring the best open-source tools we can use to create our documentation.

What are some alternatives?

When comparing ploomber and Kedro you can also consider the following projects:

orchest - Build data pipelines, the easy way 🛠️

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

papermill - 📚 Parameterize, execute, and analyze notebooks

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

nbdev - Create delightful software with Jupyter Notebooks

Dask - Parallel computing with task scheduling
