Kedro vs BentoML

Compare Kedro vs BentoML and see how they differ.

Kedro

Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular. (by kedro-org)
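To give a flavour of the "reproducible, maintainable, and modular" claim, here is a minimal sketch of a Kedro pipeline built from plain Python functions. The function and dataset names are invented for illustration and are not taken from the Kedro docs.

    # Minimal Kedro pipeline sketch: pure functions become "nodes", and the
    # pipeline wires them together by dataset name (all names are made up).
    from kedro.pipeline import node, pipeline

    def clean(raw_df):
        # Drop incomplete rows before feature engineering.
        return raw_df.dropna()

    def featurize(clean_df):
        # Add a simple derived column.
        return clean_df.assign(total=clean_df.sum(axis=1))

    data_pipeline = pipeline([
        node(clean, inputs="raw_data", outputs="clean_data"),
        node(featurize, inputs="clean_data", outputs="features"),
    ])

Because nodes only declare named inputs and outputs, Kedro can resolve the execution order itself, which is what makes the resulting pipelines modular and reproducible.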
                Kedro               BentoML
Mentions        29                  16
Stars           9,288               6,416
Stars growth    1.4%                3.5%
Activity        9.7                 9.8
Latest commit   6 days ago          8 days ago
Language        Python              Python
License         Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Kedro

Posts with mentions or reviews of Kedro. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-10.
  • Nextflow: Data-Driven Computational Pipelines
    9 projects | news.ycombinator.com | 10 Aug 2023
    Interesting, thanks for sharing. I'll definitely take a look, although at this point I am so comfortable with Snakemake, it is a bit hard to imagine what would convince me to move to another tool. But I like the idea of composable pipelines: I am building a tool (too early to share) that would allow laying Snakemake pipelines on top of each other using semi-automatic data annotations, similar to how it is done in kedro (https://github.com/kedro-org/kedro).
  • A Polars exploration into Kedro
    6 projects | dev.to | 17 May 2023
    # pyproject.toml
    [project]
    dependencies = [
        "kedro @ git+https://github.com/kedro-org/kedro@3ea7231",
        "kedro-datasets[pandas.CSVDataSet,polars.CSVDataSet] @ git+https://github.com/kedro-org/kedro-plugins@3b42fae#subdirectory=kedro-datasets",
    ]
    (A sketch of loading data with these datasets follows after this list.)
  • What are some open-source ML pipeline managers that are easy to use?
    7 projects | /r/mlops | 3 May 2023
    So there are two sides to pipeline management: the actual definition of the pipelines (in code) and how/when/where you run them. Some tools like prefect or airflow do both of them at once, but for the actual pipeline definition I'm a fan of https://kedro.org. You can then use most available orchestrators to run those pipelines on whatever schedule and architecture you want (see the runner sketch after this list).
  • Futuristic documentation systems in Python, part 1: aiming for more
    3 projects | dev.to | 14 Mar 2023
    Recently I started a position as Developer Advocate for Kedro, an opinionated data science framework, and one of the things we're doing is exploring which open source tools are best for creating our documentation.
  • Python projects with best practices on Github?
    23 projects | /r/Python | 14 Feb 2023
    You can also check out Kedro, it’s like the Flask for data science projects and helps apply clean code principles to data science code.
  • What are examples of well-organized data science project that I can see on Github?
    6 projects | /r/datascience | 5 Nov 2022
  • Dabbling with Dagster vs. Airflow
    7 projects | news.ycombinator.com | 14 Sep 2022
    An often overlooked framework used by NASA among others is Kedro https://github.com/kedro-org/kedro. Kedro is probably the simplest set of abstractions for building pipelines but it doesn't attempt to kill Airflow. It even has an Airflow plugin that allows it to be used as a DSL for building Airflow pipelines or plug into whichever production orchestration system is needed.
  • What are some good DS/ML repos where I can learn about structuring a DS/ML project?
    3 projects | /r/datascience | 27 Feb 2022
    For the lazy ones out there, here's the link to their github repo.
  • Kedro – Creating reproducible, maintainable and modular data science code
    4 projects | news.ycombinator.com | 22 Jan 2022
  • [Discussion] Applied machine learning implementation debate. Is OOP approach towards data preprocessing in python an overkill?
    3 projects | /r/MachineLearning | 3 Nov 2021
    I'd focus more on understanding the issues in depth before jumping to a solution. Otherwise, you would be adding hassle with some - bluntly speaking - opinionated and inflexible boilerplate code which not many people will like using.

    You mention some issues: code that is non-obvious to understand, and hard to execute and replicate. Bad code which does not follow engineering best practices (ideas from SOLID etc.) does not get better if you force the author to introduce certain classes. You can suggest some basics (e.g. a common code formatter, meaningful variable names, short functions, no hard-coded values, ...), but I'm afraid you cannot educate non-engineers in a single-day workshop. I would not focus on that at first. However, there is no excuse for writing bad code and then expecting others to fix it. As you say, data engineering is part of data science skills; you are "junior" if you cannot write reproducible code.

    Being hard to execute and replicate is theoretically easy to fix. Force everyone to (at least hypothetically) submit their code into a testing environment where it will be automatically executed on a fresh machine. This means that, first, they have to specify exactly which libraries need to be installed. Second, they need to externalize all configuration - in particular data input and data output paths. Not a single value should be hard-coded in code! And finally they need a *single* command which can be run to execute the whole(!) pipeline. If they fail on any of these parts... they should try again. Work that does not pass this test is considered unfinished by the author. Basically you are introducing an automated, infallible test.

    Regarding your code, I'd really not try that direction. In particular, even these few lines already look unclear and over-engineered. The csv format is hard-coded into the code; if it changes to parquet, you'd have to touch the code. The processing object has data paths fixed, for which there is no reason in a job that should take care of pure processing. Exporting data is also not something a processing job should handle. And what if you have multiple input and output datasets? You would not have any of these issues if you had kept to the most simple solution: a function `process(data1, data2, ...) -> result_data` where dataframes are passed in and out (a minimal example of this style appears after this list). It would also mean zero additional libraries or boilerplate. I highly doubt that a function `main_pipe(...)` will fix the malpractices some people may commit.

    There are two small features which are useful beyond a plain function, though: automatically generating a visual DAG from the code, and quickly checking whether input requirements are satisfied before heavy code is run. You can still put any mature DAG library on top, which probably already includes the experience of a lot of developers. No need to rewrite that. I'm not sure which one is best (metaflow, luigi, airflow, ... https://github.com/pditommaso/awesome-pipeline - no idea), but many come with a lot of features. If you want a bit more scaffolding to make foreign projects easier to understand, you could look at https://github.com/quantumblacklabs/kedro, but maybe that's already too much. Fix the "single command replication-from-scratch" requirement first.
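Following up on the pyproject.toml snippet above, here is a hedged sketch of what the polars extra enables: reading a CSV straight into a Polars dataframe through kedro-datasets. The file path is hypothetical.

    # Sketch: loading a CSV as a polars.DataFrame via kedro-datasets
    # (assumes the polars.CSVDataSet extra is installed; the path is made up).
    from kedro_datasets.polars import CSVDataSet

    dataset = CSVDataSet(filepath="data/01_raw/example.csv")
    df = dataset.load()  # returns a polars.DataFrame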
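The definition/execution split described in the mlops post is visible in Kedro's own API: the pipeline is plain Python, and a runner (or an external orchestrator) executes it. The node and dataset names below are invented for illustration.

    # Sketch of the definition vs. execution split: the pipeline is a plain
    # object, and a runner executes it (all names here are illustrative).
    from kedro.io import DataCatalog, MemoryDataSet
    from kedro.pipeline import node, pipeline
    from kedro.runner import SequentialRunner

    def double(xs):
        return [x * 2 for x in xs]

    demo = pipeline([node(double, inputs="numbers", outputs="doubled")])
    catalog = DataCatalog({"numbers": MemoryDataSet([1, 2, 3])})
    result = SequentialRunner().run(demo, catalog)  # {"doubled": [2, 4, 6]}

Swapping SequentialRunner for another runner, or exporting the pipeline to Airflow via the kedro-airflow plugin mentioned above, changes where and how the work runs without touching the pipeline definition.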
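Finally, the function-first style advocated in the last post above can be made concrete in a few lines. The column names and the cfg mapping are hypothetical.

    # Minimal sketch of the "plain function" approach: dataframes in,
    # dataframe out, with all I/O and paths kept outside the function.
    import pandas as pd

    def process(sales: pd.DataFrame, customers: pd.DataFrame) -> pd.DataFrame:
        merged = sales.merge(customers, on="customer_id")
        return merged.groupby("region", as_index=False)["amount"].sum()

    # I/O stays at the edges, driven by external configuration, e.g.:
    # result = process(pd.read_csv(cfg["sales"]), pd.read_csv(cfg["customers"]))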

BentoML

Posts with mentions or reviews of BentoML. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-04.

What are some alternatives?

When comparing Kedro and BentoML you can also consider the following projects:

Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models

Dask - Parallel computing with task scheduling

cookiecutter-pytorch - A Cookiecutter template for PyTorch Deep Learning projects.

haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.

clearml - ClearML - Auto-Magical CI/CD to streamline your ML workflow. Experiment Manager, MLOps and Data-Management

ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️

lightning-bolts - Toolbox of models, callbacks, and datasets for AI/ML researchers.

Pinball

cookiecutter-data-science - A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.