make-booster VS Airflow

| | make-booster | Airflow |
| --- | --- | --- |
| Mentions | 3 | 170 |
| Stars | 8 | 34,705 |
| Growth | - | 1.7% |
| Activity | 10.0 | 10.0 |
| Last Commit | almost 2 years ago | 5 days ago |
| Language | Makefile | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
make-booster
-
Snakemake – A framework for reproducible data analysis
For a very different approach, check out make-booster:
https://github.com/david-a-wheeler/make-booster
Make-booster provides utility routines intended to greatly simplify data processing (particularly a data pipeline) using GNU make. It includes some mechanisms specifically to help Python, as well as general-purpose mechanisms that can be useful in any system. In particular, it helps reliably reproduce results, and it automatically determines what needs to run and runs only that (producing a significant speedup in most cases). Released as open source software.
-
A Love Letter to Make
https://github.com/david-a-wheeler/make-booster
I think a lot of the hate on make is due to poor use. If your makefile is complex, refactor it. Auto-generate dependencies - it only takes a few lines in GNU make (see the sketch below). And don't use recursive make; that way lies madness. I also think GNU make is the wiser tool; POSIX make lacks too much in many cases.
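As a minimal sketch of that auto-generation idiom, assuming a C project (file names and flags are illustrative, not from the original comment):

```make
# Classic GNU make auto-dependency idiom: the compiler emits a .d makefile
# fragment per object (-MMD), and -MP adds phony targets so a deleted
# header doesn't break the build.
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(SRCS:.c=.d)

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in whatever dependency fragments exist; silently skip missing ones.
-include $(DEPS)
```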
-
The Unreasonable Effectiveness of Makefiles
https://github.com/david-a-wheeler/make-booster
From its readme:
"This project (contained in this directory and below) provides utility routines intended to greatly simplify data processing (particularly a data pipeline) using GNU make. It includes some mechanisms specifically to help Python, as well as general-purpose mechanisms that can be useful in any system. In particular, it helps reliably reproduce results, and it automatically determines what needs to run and runs only that (producing a significant speedup in most cases)."
"For example, imagine that Python file BBB.py says include CC, and file CC.py reads from file F.txt (and CC.py declares its INPUTS= as described below). Now if you modify file F.txt or CC.py, any rule that runs BBB.py will automatically be re-run in the correct order when you use make, even if you didn't directly edit BBB.py."
This is NOT functionality directly provided by Python, and the overhead with >1000 files was 0.07 seconds, which we could live with :-).
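Spelled out by hand, the dependency chain in that example would look like the rule below; make-booster's value is deriving these prerequisites automatically by scanning imports and INPUTS= declarations. The output file name here is hypothetical.

```make
# What make-booster infers automatically: a rule that runs BBB.py also
# depends on CC.py (imported by BBB.py) and F.txt (listed in CC.py's
# INPUTS=), so editing either one re-runs the rule.
result.txt: BBB.py CC.py F.txt
	python3 BBB.py > $@
```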
Airflow
-
Building in Public: Leveraging Tublian's AI Copilot for My Open Source Contributions
Contributing to the Apache Airflow open-source project immersed me in collaborative coding. Experienced maintainers rigorously reviewed my contributions, providing constructive feedback. This ongoing dialogue refined the codebase and honed my understanding of best practices.
-
Navigating Week Two: Insights and Experiences from My Tublian Internship Journey
In week two, I contributed to the Apache Airflow repository.
-
Airflow VS quix-streams - a user-suggested alternative
2 projects | 7 Dec 2023
-
Best ETL Tools And Why To Choose
Apache Airflow is an open-source platform to programmatically author, schedule, and monitor workflows. The platform features a web-based user interface and a command-line interface for managing and triggering workflows.
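A minimal sketch of what "programmatically author" means in practice, assuming Airflow 2.x (the DAG id, schedule, and task are illustrative):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder task body; a real pipeline would pull from a source system.
    print("pulling source data...")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```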
-
Simplifying Data Transformation in Redshift: An Approach with DBT and Airflow
Airflow is the most widely used and well-known tool for orchestrating data workflows. It allows for efficient pipeline construction, scheduling, and monitoring.
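One common shape for the DBT-plus-Airflow setup the post describes is a pair of BashOperator tasks, run then test; this is a hedged sketch, and the dbt project path is a placeholder:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_redshift",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test",
    )
    # Only test the models after they build successfully.
    dbt_run >> dbt_test
```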
-
Share Your favorite python related software!
Airflow - This is more of a library in my opinion, but Airflow has become an essential tool for scheduling in my work. All our ML training pipelines are ordered and scheduled with Airflow, and it works seamlessly. The dashboard provided is also fantastic!
-
Ask HN: What is the correct way to deal with pipelines?
I agree there are many options in this space. Two others to consider:
- https://airflow.apache.org/
- https://github.com/spotify/luigi
There are also many Kubernetes-based options out there. For the specific use case you specified, you might even consider a plain old Makefile and incrond if you expect these all to run on a single host and be triggered by a new file showing up in a directory…
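As a rough sketch of that Makefile-plus-incrond suggestion (the paths, target, and file names are hypothetical), an incrontab entry can invoke make whenever a new file lands in a watched directory:

```
# incrontab entry (edit with "incrontab -e"): when a file finishes being
# written into /data/incoming, run a make target with the new file's path.
# incron expands $@ to the watched directory and $# to the file name.
/data/incoming IN_CLOSE_WRITE make -C /srv/pipeline ingest FILE=$@/$#
```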
- "Você veio protestar para ter acesso ao código fonte da urnas. O que é o código fonte?" "Não sei" 🤡
- How to build your own data platform. From zero to hero.
-
Is it impossible to contribute to open source as a data engineer?
You can try contributing new connectors/operators for workflow managers like Airflow or Airbyte.
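For a sense of scale, a custom operator is a small, self-contained class; this follows the pattern Airflow's own docs show, with illustrative names:

```python
from airflow.models.baseoperator import BaseOperator

class HelloOperator(BaseOperator):
    """Toy operator: logs and returns a greeting (the return value
    is pushed to XCom for downstream tasks)."""

    def __init__(self, name: str, **kwargs) -> None:
        super().__init__(**kwargs)
        self.name = name

    def execute(self, context):
        message = f"Hello, {self.name}!"
        self.log.info(message)
        return message
```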
What are some alternatives?
tclmake - Partial make clone in pure Tcl
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
checkexec - CLI tool to conditionally execute commands only when files in a dependency list have been updated. Like `make`, but standalone.
dagster - An orchestration platform for the development, production, and observation of data assets.
snakemake-wrappers - The development home of the Snakemake wrapper repository
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
mandala - A powerful and easy to use Python framework for experiment tracking and incremental computing
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
dagger - Application Delivery as Code that Runs Anywhere
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
just - 🤖 Just a command runner
Dask - Parallel computing with task scheduling