| | repo.macintoshgarden.org-fileset | pachyderm |
|---|---|---|
| Mentions | 2 | 8 |
| Stars | 0 | 6,089 |
| Growth | - | 0.2% |
| Activity | 0.0 | 9.8 |
| Latest commit | 9 months ago | 1 day ago |
| Language | Shell | Go |
| License | - | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
repo.macintoshgarden.org-fileset
- Show HN: We scaled Git to support 1 TB repos
-
SnowFS – a fast, scalable version control file storage for graphic files
Sure, how would you like to get in touch? You have a Discord, right? I was actually looking at your project and was thinking of opening a simple PR and issue (same username).
I have some more example git-annex repos:
This is an annex repo I made of this popular abandonware website:
https://github.com/unqueued/repo.macintoshgarden.org-fileset
https://github.com/unqueued/ratholeradio-archive
What's cool is that people can use standard pull requests to add files to the repo. And the repo itself is small, but it can represent huge filesets. Datalad has some really fascinating medical imaging data repos that are massive (https://www.datalad.org/datasets.html).
If you wanna see a really good example of a repo with versioned binary files, check out the git-annex repo of previous git-annex binary releases:
https://downloads.kitenet.net/.git/
You can just use standard git workflows to see previous revisions of a file (well, previous hashes), and it is really easy to hook into.
I actually have thought about making a special remote specifically to diff images. But yeah, git-annex does a really amazing job at bridging the gap between binary files and git.
pachyderm
-
Open Source Advent Fun Wraps Up!
20. Pachyderm | Github | tutorial
-
Exploring Open-Source Alternatives to Landing AI for Robust MLOps
Pachyderm specializes in creating compliance-focused pipelines that integrate with enterprise-level storage solutions.
-
Show HN: We scaled Git to support 1 TB repos
There are a couple of other contenders in this space. DVC (https://dvc.org/) seems most similar.
If you're interested in something you can self-host... I work on Pachyderm (https://github.com/pachyderm/pachyderm), which doesn't have a Git-like interface, but also implements data versioning. Our approach de-duplicates between files (even very small files), and our storage algorithm doesn't create a number of objects proportional to directory nesting depth, as Xet appears to. (Xet is very much like Git in that respect.)
The data versioning system enables us to run pipelines based on changes to your data; the pipelines declare what files they read, and that allows us to schedule processing jobs that only reprocess new or changed data, while still giving you a full view of what "would" have happened if all the data had been reprocessed. This, to me, is the key advantage of data versioning; you can save hundreds of thousands of dollars on compute. Being able to undo an oopsie is just icing on the cake.
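The "pipelines declare what files they read" part is expressed in a JSON pipeline spec. A minimal sketch of what one looks like (the pipeline name, image, command, and repo name here are illustrative placeholders, not from the comment; the `glob` pattern is what carves the input repo into independently processable datums):

```json
{
  "pipeline": { "name": "edges" },
  "transform": {
    "image": "my-registry/edges:latest",
    "cmd": ["python3", "/edges.py"]
  },
  "input": {
    "pfs": {
      "repo": "images",
      "glob": "/*"
    }
  }
}
```

With `"glob": "/*"`, each top-level file is a separate datum, so a new commit touching one file only schedules work for that file.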
Xet's system for mounting a remote repo as a filesystem is a good idea. We do that too :)
- pachyderm: Data-Centric Pipelines and Data Versioning
-
Awesome list of VCs investing in commercial open-source startups
Pachyderm - License prevents competition.
-
Airflow's Problem
I was at Airbnb when we open-sourced Airflow, and it was a great solution to the problems we had at the time. It's amazing how many more use cases people have found for it since then. Back then it was pretty focused on solving our problem of orchestrating a largely static DAG of SQL jobs. It could do other stuff even then, but that was mostly what we were using it for. Airflow has become a victim of its own success as it's expanded to meet every problem which could ever be considered a data workflow. The flaws and horror stories in the post and comments here definitely resonate with me.

Around the time Airflow was open-sourced I started working on a data-centric approach to workflow management called Pachyderm [0]. By data-centric I mean that it's focused around the data itself, and its storage, versioning, orchestration and lineage. This leads to a system that feels radically different from a job-focused system like Airflow. In a data-centric system your spaghetti nest of DAGs is greatly simplified, as the data itself is used to describe most of the complexity. The benefit is that data is a lot simpler to reason about; it's not a living thing that needs to run in a certain way, it just exists, and because it's versioned you have strong guarantees about how it can change.
[0] https://github.com/pachyderm/pachyderm
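The incremental, data-driven processing both comments describe can be sketched generically. This is a minimal illustration of the idea (content-addressed datums plus a result cache), not Pachyderm's actual implementation; the function and variable names are made up for the example:

```python
import hashlib

def datum_id(data: bytes) -> str:
    # Content address: identical bytes hash to the same id, so results can be reused.
    return hashlib.sha256(data).hexdigest()

def run_pipeline(commit: dict, cache: dict, process) -> dict:
    """Apply `process` to every file in `commit`, computing only datums whose
    content hash is not already cached. Unchanged data reuses prior results,
    yet the return value is a full view, as if everything had been reprocessed."""
    out = {}
    for path, data in commit.items():
        key = datum_id(data)
        if key not in cache:
            cache[key] = process(data)  # runs only for new or changed data
        out[path] = cache[key]
    return out

cache = {}
v1 = {"a.txt": b"hello", "b.txt": b"world"}
run_pipeline(v1, cache, lambda d: d.upper())
v2 = {"a.txt": b"hello", "b.txt": b"world!"}   # only b.txt changed
run_pipeline(v2, cache, lambda d: d.upper())   # recomputes just one datum
```

The second run produces outputs for both files but only invokes `process` for the changed one, which is where the compute savings come from.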
-
One secret tip for first-time OSS contributors. Shh! 🤫 don't tell anyone else
Here is a demo run of lgtm on pachyderm.
- Dud: a tool for versioning data alongside source code, written in Go
What are some alternatives?
dvc - 🦉 ML Experiments and Data Management with Git
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
snowfs - SnowFS - a fast, scalable version control file storage for graphic files :art:
trivy - Find vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more
sso-wall-of-shame - A list of vendors that treat single sign-on as a luxury feature, not a core security requirement.
dud - A lightweight CLI tool for versioning data alongside source code and building data pipelines.
beneath - Beneath is a serverless real-time data platform ⚡️
typhoon-orchestrator - Create elegant data pipelines and deploy to AWS Lambda or Airflow
tsuru - Open source and extensible Platform as a Service (PaaS).
kestra - Infinitely scalable, event-driven, language-agnostic orchestration and scheduling platform to manage millions of workflows declaratively in code.
rexray - REX-Ray is a container storage orchestration engine enabling persistence for cloud native workloads
nebula - A distributed block-based data storage and compute engine