cgpipe vs redun

| | cgpipe | redun |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 3 | 489 |
| Growth | - | 1.6% |
| Activity | 5.2 | 7.5 |
| Latest commit | 4 months ago | 3 months ago |
| Language | Java | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cgpipe
- Nextflow: Data-Driven Computational Pipelines
I do too, and have similar opinions. I wrote my own tool for pipelines years back because existing options were always frustrating (I started roughly around the same time as Nextflow).
Marking files as transient (temp) and re-running from arbitrary points are definitely things I support, as is conditional logic within the pipeline for job definitions and resource usage. For me, though, one of the biggest things is composable pipelines: each part of a larger workflow can be developed independently. The parts can interact with each other (as a DAG) and use existing dependencies, but they don't have to live in the same document/script. I work on large WGS datasets, so thousands of jobs per patient isn't uncommon.
Happy to talk more if you're interested.
https://github.com/compgen-io/cgpipe
(And yes, you can dry run the entire thing. It will write out a bash script if you want to see exactly what is going to run without submitting jobs.)
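The dry-run idea described above can be illustrated with a short sketch: build the job DAG, then emit a bash script in dependency order instead of submitting anything. This is a toy in plain Python; the `Job` class, function names, and example commands are hypothetical illustrations, not cgpipe's actual API.

```python
class Job:
    """Hypothetical job node: a name, a shell command, and dependencies."""
    def __init__(self, name, cmd, deps=()):
        self.name, self.cmd, self.deps = name, cmd, list(deps)

def topo_order(jobs):
    """Depth-first topological sort over job dependencies."""
    seen, order = set(), []
    def visit(job):
        if job.name in seen:
            return
        seen.add(job.name)
        for dep in job.deps:
            visit(dep)
        order.append(job)
    for job in jobs:
        visit(job)
    return order

def dry_run(jobs):
    """Emit a bash script of the commands that would run, in order."""
    lines = ["#!/bin/bash", "set -e"]
    for job in topo_order(jobs):
        lines.append(f"# job: {job.name}")
        lines.append(job.cmd)
    return "\n".join(lines)

# Example DAG: sort depends on align, so align's command is emitted first.
align = Job("align", "bwa mem ref.fa s1.fq > s1.bam")
sort_bam = Job("sort", "samtools sort s1.bam -o s1.sorted.bam", deps=[align])
script = dry_run([sort_bam])
print(script)
```

Inspecting the generated script before submission is exactly what makes this kind of dry run useful for debugging a pipeline of thousands of jobs.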
redun
- Redun: Yet another redundant workflow engine
- Nextflow: Data-Driven Computational Pipelines
I'm personally a huge fan of redun¹ for running computational pipelines. It's pure python, it's easy to learn/debug, it has automatic caching, retry, provenance logging, and a great integration with AWS Batch for running large jobs. I've been really impressed with how easy it is to run a job to completion that fans out to thousands of AWS spot instances at once.
I've used Nextflow in the past and found it much harder to use. Learning another DSL is annoying, the documentation was sparse, I constantly ran into bugs, and it was hard to debug in general. I don't know how much it's changed over the past 3 years, though.
¹https://github.com/insitro/redun
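The automatic caching praised above can be sketched in a few lines of plain Python: fingerprint each task call by its name and arguments, and skip re-execution on a cache hit. This is a toy illustration of the idea, not redun's actual API or its persistent, provenance-tracking cache.

```python
import functools
import hashlib
import json

CACHE = {}   # maps call fingerprint -> cached result
CALLS = []   # records which task bodies actually executed

def task(fn):
    """Toy decorator: hash the task name plus its arguments; if that exact
    call has been seen before, return the cached result without re-running."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = hashlib.sha256(
            json.dumps([fn.__name__, list(args)]).encode()
        ).hexdigest()
        if key not in CACHE:
            CACHE[key] = fn(*args)
        return CACHE[key]
    return wrapper

@task
def align(sample):
    CALLS.append(("align", sample))
    return f"{sample}.bam"

@task
def merge(*bams):
    CALLS.append(("merge", bams))
    return "merged:" + ",".join(bams)

# Tasks compose into a small DAG; each unique call runs once.
result = merge(align("s1"), align("s2"))
align("s1")  # cache hit: the align body does not execute again
```

A real engine like redun hashes code versions and file contents as well, which is what lets it safely resume a large fan-out job partway through.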
- Insitro's redun: Yet another redundant workflow engine
- Insitro's new open source software uses DAGs.
What are some alternatives?
nextflow - A DSL for data-driven computational pipelines
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
huey - a little task queue for python
common-workflow-language - Repository for the CWL standards. Use https://cwl.discourse.group/ for support 😊
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.