| | common-workflow-language | cgpipe |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | - | 3 |
| Growth | - | - |
| Activity | - | 5.2 |
| Last commit | - | 4 months ago |
| Language | - | Java |
| License | - | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
common-workflow-language
Repository: https://github.com/common-workflow-language/common-workflow-...
Website: https://www.commonwl.org/
Mentioned in: Nextflow: Data-Driven Computational Pipelines
cgpipe
Mentioned in: Nextflow: Data-Driven Computational Pipelines
I do too, and I have similar opinions. I wrote my own pipeline tool years back because the existing options were always frustrating (I started at roughly the same time as Nextflow).

Allowing files to be marked as transient (temp) and re-running from arbitrary points are definitely among the features I support, as is conditional logic within the pipeline for job definitions and resource usage. For me, though, one of the biggest things is composable pipelines: each part of the larger workflow can be developed independently. The parts can interact with each other (as a DAG) and share existing dependencies, but they don't have to live in the same document/script. I work on large WGS datasets, so thousands of jobs per patient isn't uncommon.
Happy to talk more if you're interested.
https://github.com/compgen-io/cgpipe
(And yes, you can dry run the entire thing. It will write out a bash script if you want to see exactly what is going to run without submitting jobs.)
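To make the ideas in the comment concrete, here is a minimal sketch of the general pattern being described: independently defined tasks composed into a DAG, outputs marked transient, and a "dry run" that emits a bash script rather than submitting jobs. This is an illustrative model only, not cgpipe's actual syntax or API; the `Task`, `topo_order`, and `dry_run` names are all hypothetical.

```python
# Hypothetical sketch (NOT cgpipe's real DSL): a composable pipeline model
# with transient outputs and a dry-run mode that writes a bash script.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    cmd: str
    deps: list = field(default_factory=list)  # upstream Task objects (DAG edges)
    transient: bool = False                   # output is temp; removable after use

def topo_order(tasks):
    """Resolve the DAG into a runnable order, dependencies first."""
    seen, order = set(), []
    def visit(t):
        if t.name in seen:
            return
        seen.add(t.name)
        for d in t.deps:
            visit(d)
        order.append(t)
    for t in tasks:
        visit(t)
    return order

def dry_run(tasks):
    """Emit a bash script showing exactly what would run, without submitting."""
    lines = ["#!/bin/bash", "set -e"]
    ordered = topo_order(tasks)
    for t in ordered:
        lines.append(f"# task: {t.name}")
        lines.append(t.cmd)
    # transient/temp outputs are cleaned up once nothing downstream needs them
    for t in ordered:
        if t.transient:
            lines.append(f"rm -f {t.name}.out  # transient output")
    return "\n".join(lines)

# Two fragments defined independently, then composed into one DAG:
align = Task("align", "bwa mem ref.fa reads.fq > align.out", transient=True)
call = Task("call", "call_variants align.out > call.out", deps=[align])
print(dry_run([call]))
```

The key design point the comment is making: because `align` and `call` are separate objects linked only by a dependency edge, each fragment can live in its own file and be developed and tested on its own.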
What are some alternatives?
- redun - Yet another redundant workflow engine
- nextflow - A DSL for data-driven computational pipelines
- common-workflow-language - Repository for the CWL standards. Use https://cwl.discourse.group/ for support 😊
- huey - a little task queue for python
- Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.