Kedro
projects
| | Kedro | projects |
|---|---|---|
| Mentions | 29 | 19 |
| Stars | 9,341 | 77 |
| Growth | 1.3% | - |
| Latest commit | 5 days ago | 3 months ago |
| Activity | 9.7 | 4.7 |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Kedro
-
Nextflow: Data-Driven Computational Pipelines
Interesting, thanks for sharing. I'll definitely take a look, although at this point I am so comfortable with Snakemake, it is a bit hard to imagine what would convince me to move to another tool. But I like the idea of composable pipelines: I am building a tool (too early to share) that would allow laying Snakemake pipelines on top of each other using semi-automatic data annotations, similar to how it is done in Kedro (https://github.com/kedro-org/kedro).
-
A Polars exploration into Kedro
```toml
# pyproject.toml
[project]
dependencies = [
    "kedro @ git+https://github.com/kedro-org/kedro@3ea7231",
    "kedro-datasets[pandas.CSVDataSet,polars.CSVDataSet] @ git+https://github.com/kedro-org/kedro-plugins@3b42fae#subdirectory=kedro-datasets",
]
```
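With those pins in place, the Polars dataset works like any other Kedro dataset. A minimal sketch, assuming a local CSV at a hypothetical path (the class name follows the kedro-datasets version pinned above; later releases rename it to CSVDataset):

```python
# Hedged sketch: the file path is hypothetical.
from kedro_datasets.polars import CSVDataSet

cars = CSVDataSet(filepath="data/01_raw/cars.csv")
df = cars.load()  # returns a polars.DataFrame
print(df.head())
```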
-
What are some open-source ML pipeline managers that are easy to use?
So there are two sides to pipeline management: the actual definition of the pipelines (in code) and how/when/where you run them. Some tools like Prefect or Airflow do both at once, but for the actual pipeline definition I'm a fan of https://kedro.org. You can then use most available orchestrators to run those pipelines on whatever schedule and architecture you want.
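To make the "pipeline definition in code" side concrete, here is a minimal sketch of a Kedro pipeline; the function and dataset names are hypothetical, not from the thread:

```python
# Minimal Kedro pipeline sketch. Inputs/outputs are names of entries in the
# project's Data Catalog, so the definition stays orchestrator-agnostic.
import pandas as pd
from kedro.pipeline import node, pipeline

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    # Drop incomplete rows before any downstream step.
    return raw.dropna()

def summarize(clean: pd.DataFrame) -> pd.DataFrame:
    # Compute per-column summary statistics.
    return clean.describe()

data_pipeline = pipeline(
    [
        node(preprocess, inputs="raw_data", outputs="clean_data"),
        node(summarize, inputs="clean_data", outputs="summary"),
    ]
)
```

Because nodes only name their inputs and outputs, the same definition can be handed to whichever orchestrator actually runs it.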
-
How do data scientists combine Kedro and Databricks?
We have set up a milestone on GitHub so you can check in on our progress and contribute if you want to. To suggest features to us, report bugs, or just see what we're working on right now, visit the Kedro projects on GitHub.
-
How do you organize yourself during projects?
You could use a project framework like Kedro to force yourself to be more disciplined about how you structure your projects. I'd also recommend checking out this book: Enda Ridge - Guerrilla Analytics: A Practical Approach to Working with Data
-
Futuristic documentation systems in Python, part 1: aiming for more
Recently I started a position as Developer Advocate for Kedro, an opinionated data science framework, and one of the things we're doing is exploring the best open-source tools we can use to create our documentation.
-
Python projects with best practices on GitHub?
You can also check out Kedro; it's like Flask for data science projects and helps apply clean-code principles to data science code.
-
Data Science / Analyst certificates for the job market?
-
What are examples of well-organized data science projects that I can see on GitHub?
-
Dabbling with Dagster vs. Airflow
An often-overlooked framework, used by NASA among others, is Kedro: https://github.com/kedro-org/kedro. Kedro is probably the simplest set of abstractions for building pipelines, but it doesn't attempt to kill Airflow. It even has an Airflow plugin that allows it to be used as a DSL for building Airflow pipelines, or to plug into whichever production orchestration system is needed.
projects
-
Analyze and plot 5.5M records in 20s with BigQuery and Ploomber
You can look at the files in detail here. For this tutorial, I'll quickly mention a few crucial details.
-
Three Tools for Executing Jupyter Notebooks
Ploomber is a complete solution for notebook execution. It builds on top of papermill and extends it to support multi-stage workflows where each task is a notebook, and it manages orchestration automatically, so you can run notebooks in parallel without writing extra code.
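As a rough sketch of what a multi-stage notebook workflow looks like in Ploomber's Python API (the notebook names here are hypothetical; pipelines are more commonly declared in a pipeline.yaml):

```python
# Sketch of a two-stage notebook workflow. Each source notebook needs a cell
# tagged "parameters", where Ploomber injects the upstream and product paths.
from pathlib import Path

from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import NotebookRunner

dag = DAG()

clean = NotebookRunner(
    Path("clean.ipynb"),
    File("output/clean.ipynb"),  # the executed copy of the notebook
    dag=dag,
    name="clean",
)
report = NotebookRunner(
    Path("report.ipynb"),
    File("output/report.ipynb"),
    dag=dag,
    name="report",
)

clean >> report  # report runs only after clean finishes
dag.build()
```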
-
OOP in Python ETL?
The answer is YES, you can take advantage of OOP best practices to write good ETLs. For instance, in this Ploomber sample ETL you can see a mix of .sql and .py files organized into modular components, so it's easier to test, deploy, and execute. It's way easier than Airflow since there's no infra work involved; you only have to set up your pipeline.yaml file. This also allows you to make the code WAY more maintainable and scalable, avoid redundant code, and deploy faster :)
-
What are some good DS/ML repos where I can learn about structuring a DS/ML project?
We have tons of examples that follow a standard layout; here's one: https://github.com/ploomber/projects/tree/master/templates/ml-intermediate
-
Anyone's org using Airflow as a generalized job orchestrator, not just for data engineering/ETL?
I can talk about the open-source project I'm working on, Ploomber (https://github.com/ploomber/ploomber); it focuses on seamless integration with Jupyter and IDEs. It provides an easy mechanism to orchestrate work (for instance, here's an example SQL ETL), and you can then deploy it anywhere; if you're working with Airflow, it'll deploy there too, but without the complexity. You wouldn't have to maintain Docker images, etc.
-
ETL with python
I recommend using Ploomber, which can help you build once and automate a lot of the work, and it works with Python natively. It's open source, so you can start with one of the examples, like the ML-basic example or the ETL one. It'll allow you to define the pipeline and then easily explain the flow with the DAG plot. Feel free to ask questions, I'm happy to help (I've built hundreds of data pipelines over the years).
-
What tools do you use for data quality?
I'm not sure which pipeline frameworks support this kind of testing, but after successfully implementing this workflow, I added the feature to Ploomber, the project I'm working on. Here's what a pipeline looks like, and here's a tutorial.
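The feature referred to is Ploomber's task hooks. Here's a sketch of attaching a data-quality check via on_finish; the check and file names are hypothetical:

```python
# Sketch: a failing assertion in an on_finish hook aborts downstream tasks.
import pandas as pd

from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import PythonCallable

def _clean(product):
    # Hypothetical cleaning step that writes its output to `product`.
    df = pd.DataFrame({"age": [25, 32, 41]})
    df.to_csv(str(product), index=False)

def check_no_nulls(product):
    # Runs immediately after the task succeeds.
    df = pd.read_csv(str(product))
    assert not df.isna().any().any(), "nulls found in clean data"

dag = DAG()
clean = PythonCallable(_clean, File("clean.csv"), dag=dag, name="clean")
clean.on_finish = check_no_nulls

dag.build()
```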
-
Data pipeline suggestions
Check out Ploomber (disclaimer: I'm the author); it has a simple API, and you can export to Airflow, AWS, and Kubernetes. It supports all databases that work with Python, and you can seamlessly move from a SQL step to a Python step. Here's an example.
-
ETL Tools
It's hard to give specific advice without more details about your use case, but check out Ploomber (disclaimer: I'm the creator) - here's an example ETL pipeline. I've used it in past projects to develop Oracle ETL pipelines. Modularizing the analysis into many parts helps a lot with maintenance.
-
What's something hot rn, or what's going to be the next thing we should focus on in data engineering?
Yes! (tell your friend). You can write shell scripts so you can execute that 2002 code :) You can test it locally and then run it in AWS Batch/Argo. Here's an example.
What are some alternatives?
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
cookiecutter-data-science - A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
Dask - Parallel computing with task scheduling
jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set up a real-time data pipeline in minutes, not days
cookiecutter-pytorch - A Cookiecutter template for PyTorch Deep Learning projects.
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
Python Packages Project Generator - 🚀 Your next Python package needs a bleeding-edge project structure.
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution