reflow vs dvc
| | reflow | dvc |
|---|---|---|
| Mentions | 7 | 108 |
| Stars | 952 | 13,093 |
| Stars growth (monthly) | -0.1% | 1.3% |
| Activity | 6.2 | 9.7 |
| Latest commit | 6 months ago | 4 days ago |
| Language | Go | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
reflow

- reflow - A language and runtime for distributed, incremental data processing in the cloud

- Reflow, a language for distributed, incremental data processing in the cloud

- Jolie, the service-oriented programming language
  > Reflow [1] is a similar attempt in a slightly different domain: bioinformatics and ETL pipelines. Reflow exposes a data model and programming model that reclaims programmability in these systems and, by leaning on these abstractions, gives the runtime much more leeway to do interesting things. It unties the hands of the implementer.

- Data as a build system?
  > https://github.com/grailbio/reflow is the closest that I know of, as its design resembles the Bazel build system.

- Why isn't differential dataflow more popular?
  > It seems Reflow falls in this category: https://github.com/grailbio/reflow
  > Reflow thus allows scientists and engineers to write straightforward programs and then have them transparently executed in a cloud environment. Programs are automatically parallelized and distributed across multiple machines, and redundant computations (even across runs and users) are eliminated by its memoization cache. Reflow evaluates its programs incrementally: whenever the input data or program changes, only those outputs that depend on the changed data or code are recomputed.
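The memoization-and-incrementality model described in that excerpt can be sketched in a few lines. This is a minimal illustration in Python, not Reflow syntax; all names here (`step`, `digest`, `CACHE`) are hypothetical:

```python
import hashlib
import json

# Cache keyed by a digest of (step name, code version, input digests).
# Reflow keys its memoization cache on the computation's identity rather
# than on run order, which is what lets results be shared across runs.
CACHE = {}

def digest(*parts):
    return hashlib.sha256(json.dumps(parts, sort_keys=True).encode()).hexdigest()

def step(name, version, fn, *input_digests):
    """Run fn only if this (name, version, inputs) combination is new."""
    key = digest(name, version, *input_digests)
    if key not in CACHE:
        CACHE[key] = fn()
    return key, CACHE[key]

# First run: both steps execute.
k1, raw = step("fetch", "v1", lambda: [3, 1, 2])
k2, out = step("sort", "v1", lambda: sorted(raw), k1)

# Re-running with unchanged code and inputs hits the cache; bumping only
# the "sort" step's version would recompute just that step.
k3, out2 = step("sort", "v1", lambda: sorted(raw), k1)
assert out2 == [1, 2, 3] and k3 == k2
```

Because the cache key covers both code version and input digests, a change to either invalidates exactly the downstream results that depend on it, which is the incremental-evaluation behavior the quote describes.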
dvc

- Why bad scientific code beats code following "best practices"
  > What you're describing sounds like DVC (at a higher level; roughly an 80% solution).
  > See Pachyderm too.

- First 15 Open Source Advent projects
  > 10. DVC by Iterative | GitHub | tutorial

- Exploring Open-Source Alternatives to Landing AI for Robust MLOps
  > Platforms such as MLflow monitor the development stages of machine learning models. In parallel, Data Version Control (DVC) brings version-control-system-like functions to the realm of data sets and models.

- ML Experiments Management with Git

- Git Version Controlled Datasets in S3
  > I was using DVC (https://dvc.org/) for some time to help solve this, but it was getting hard to manage the storage connections and I ran into cache issues a lot; this solves it using git-lfs itself.

- Ask HN: How do your ML teams version datasets and models?

- Exploring MLOps Tools and Frameworks: Enhancing Machine Learning Operations
  > DVC (Data Version Control):

- Evaluate and Track Your LLM Experiments: Introducing TruLens for LLMs

- [D] Is there a tool to keep track of my ML experiments?
  > I have been using DVC and MLflow since back when DVC had only data tracking and MLflow only model tracking. Both are awesome now; the only factor I would mention is that, IMO, MLflow is a bit harder to learn, while DVC is practically just git.

- Where do I best store my test data when using github for code?
  > I use DVC, which works decently well and can be hooked into Git.
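Several of these threads revolve around the same mechanic: hash the data, park the content in a cache, and commit only a small pointer file to git. A rough Python sketch of that idea (hypothetical code illustrating the concept, not DVC's actual implementation):

```python
import hashlib
import os
import shutil
import tempfile

def add(path, cache_dir):
    """Store path's content in cache_dir under its hash; return the tiny
    pointer dict that a tool like DVC would serialize into a .dvc file."""
    with open(path, "rb") as f:
        md5 = hashlib.md5(f.read()).hexdigest()
    os.makedirs(cache_dir, exist_ok=True)
    shutil.copy(path, os.path.join(cache_dir, md5))
    return {"path": os.path.basename(path), "md5": md5}

def checkout(pointer, cache_dir, dest_dir):
    """Restore the file a pointer describes from the content cache."""
    src = os.path.join(cache_dir, pointer["md5"])
    dest = os.path.join(dest_dir, pointer["path"])
    shutil.copy(src, dest)
    return dest

work = tempfile.mkdtemp()
cache = os.path.join(work, "cache")
data = os.path.join(work, "data.csv")
with open(data, "w") as f:
    f.write("a,b\n1,2\n")

ptr = add(data, cache)   # the pointer is what git would track
os.remove(data)          # the large file itself stays out of git
restored = checkout(ptr, cache, work)
with open(restored) as f:
    assert f.read() == "a,b\n1,2\n"
```

Pointing the cache at remote storage (e.g. an S3 bucket) instead of a local directory is essentially what the "Git Version Controlled Datasets in S3" thread above is weighing DVC against git-lfs for.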
What are some alternatives?
differential-dataflow - An implementation of differential dataflow using timely dataflow on Rust.
MLflow - Open source platform for the machine learning lifecycle
rslint - A (WIP) Extremely fast JavaScript and TypeScript linter and Rust crate
lakeFS - Data version control for your data lake; Git for data
ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs
timely-dataflow - A modular implementation of timely dataflow in Rust
odict - A blazingly-fast, offline-first format and toolchain for lexical data 📖
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.