| | pachyderm | typhoon-orchestrator |
|---|---|---|
| Mentions | 8 | 14 |
| Stars | 6,074 | 29 |
| Growth | 0.1% | - |
| Activity | 9.8 | 0.0 |
| Latest commit | 7 days ago | over 1 year ago |
| Language | Go | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pachyderm
- Open Source Advent Fun Wraps Up!
20. Pachyderm | Github | tutorial
- Exploring Open-Source Alternatives to Landing AI for Robust MLOps
Pachyderm specializes in creating compliance-focused pipelines that integrate with enterprise-level storage solutions.
- Show HN: We scaled Git to support 1 TB repos
There are a couple of other contenders in this space. DVC (https://dvc.org/) seems most similar.
If you're interested in something you can self-host... I work on Pachyderm (https://github.com/pachyderm/pachyderm), which doesn't have a Git-like interface, but also implements data versioning. Our approach de-duplicates between files (even very small files), and our storage algorithm doesn't create objects in proportion to directory nesting depth, as Xet appears to. (Xet is very much like Git in that respect.)
The data versioning system enables us to run pipelines based on changes to your data; the pipelines declare what files they read, and that allows us to schedule processing jobs that only reprocess new or changed data, while still giving you a full view of what "would" have happened if all the data had been reprocessed. This, to me, is the key advantage of data versioning; you can save hundreds of thousands of dollars on compute. Being able to undo an oopsie is just icing on the cake.
Xet's system for mounting a remote repo as a filesystem is a good idea. We do that too :)
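To make that declaration concrete: a Pachyderm pipeline spec names a versioned input repo and a glob pattern that splits it into datums, and only new or changed datums are reprocessed. A minimal sketch, with placeholder repo, image, and script names:

```yaml
pipeline:
  name: word-count
transform:
  image: python:3.11-slim              # placeholder image
  cmd: ["python3", "/app/count.py"]    # reads /pfs/raw-data, writes results to /pfs/out
input:
  pfs:
    repo: raw-data
    glob: "/*"   # each top-level path is one datum; unchanged datums are skipped on re-runs
```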
- pachyderm: Data-Centric Pipelines and Data Versioning
- Awesome list of VCs investing in commercial open-source startups
Pachyderm - License prevents competition.
- Airflow's Problem
I was at Airbnb when we open-sourced Airflow; it was a great solution to the problems we had at the time. It's amazing how many more use cases people have found for it since then. At the time it was pretty focused on solving our problem of orchestrating a largely static DAG of SQL jobs. It could do other stuff even then, but that was mostly what we were using it for. Airflow has become a victim of its success as it's expanded to meet every problem that could ever be considered a data workflow. The flaws and horror stories in the post and comments here definitely resonate with me.
Around the time Airflow was open-sourced I started working on a data-centric approach to workflow management called Pachyderm[0]. By data-centric I mean that it's focused on the data itself: its storage, versioning, orchestration and lineage. This leads to a system that feels radically different from a job-focused system like Airflow. In a data-centric system your spaghetti nest of DAGs is greatly simplified, as the data itself is used to describe most of the complexity. The benefit is that data is a lot simpler to reason about; it's not a living thing that needs to run in a certain way, it just exists, and because it's versioned you have strong guarantees about how it can change.
[0] https://github.com/pachyderm/pachyderm
- One secret tip for first-time OSS contributors. Shh! 🤫 don't tell anyone else
Here is a demo run of lgtm on pachyderm
- Dud: a tool for versioning data alongside source code, written in Go
typhoon-orchestrator
- After Airflow. Where next for DE?
- New OSS Orchestrator - Where should we go next?
- Airflow's Problem
I have my own opinion on Airflow's pain points and created Typhoon Orchestrator (https://github.com/typhoon-data-org/typhoon-orchestrator) to solve them. It doesn't have many stars yet, but I've used it to create pipelines for medium-sized companies in a few days, and they've been running for over a year without issues.
In particular, I transpile to Airflow code (it can also deploy to Lambda) because I think Airflow is still the most robust and well-supported "runtime"; I just don't think the developer experience is that good.
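For a sense of what YAML that transpiles to Airflow can look like, here is a purely illustrative sketch; the field and function names below are assumptions for illustration, not necessarily Typhoon's exact schema (see the repo docs for the real format):

```yaml
name: daily_exchange_rates         # hypothetical DAG
schedule_interval: rate(1 day)

tasks:
  fetch_rates:
    function: functions.http.get   # assumed helper, not a confirmed Typhoon built-in
    args:
      url: https://example.com/rates.csv

  load_to_db:
    input: fetch_rates             # runs on the output of fetch_rates
    function: functions.db.insert
    args:
      table: exchange_rates
```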
- Data Engineering for very small businesses. Any experiences?
Typhoon Orchestrator: a framework that I designed to help fix some of the pain points of Airflow so that I could build, test and deploy pipelines faster. You could skip this step, but if you want more info check here.
- CSV data library to database
I am also collaborating on an open-source tool called Typhoon Orchestrator (repo). It aims to make composing Airflow data pipelines simple and quick, putting pipeline steps together like Lego.
- Recommendations for simple ETL (Postgres to Snowflake)
The project (https://github.com/typhoon-data-org/typhoon-orchestrator) doesn't have many stars yet, but I have deployed it at a medium-sized hotel chain for several data sources with a similar use case to yours, and it's been working for over a year with no intervention. If you decide to pursue this option I'd be willing to provide some support free of charge (feel free to PM me).
- Impress your friends! Make a serverless bot that sends daily jokes to a Telegram Group
Typhoon Orchestrator is a great way to deploy ETL workflows on AWS Lambda. In this tutorial we show how easy to use and versatile it is by deploying code to Lambda that gets a random joke from https://jokeapi.dev once a day and sends it to your Telegram group.
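The tutorial's core loop can be sketched in plain Python with only the standard library. BOT_TOKEN and CHAT_ID are placeholders you must supply, and the helper names here are ours, not part of Typhoon's API:

```python
import json
import urllib.request

JOKE_URL = "https://v2.jokeapi.dev/joke/Any?safe-mode"

def format_joke(joke: dict) -> str:
    """Render a JokeAPI response (single or two-part joke) as one message."""
    if joke.get("type") == "single":
        return joke["joke"]
    return f"{joke['setup']}\n\n{joke['delivery']}"

def fetch_joke(url: str = JOKE_URL) -> dict:
    """Fetch a random joke from JokeAPI as a dict."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def send_to_telegram(bot_token: str, chat_id: str, text: str) -> None:
    """Post a message via the Telegram Bot API's sendMessage method."""
    api = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = json.dumps({"chat_id": chat_id, "text": text}).encode()
    req = urllib.request.Request(
        api, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

# Scheduled entry point (e.g. triggered once a day by a Lambda schedule):
# send_to_telegram(BOT_TOKEN, CHAT_ID, format_joke(fetch_joke()))
```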
- My Thirty Years of Dodging Repetitive Work with Automation Tools
I think there's space for an open-source library that can help with what you described. We originally created https://github.com/typhoon-data-org/typhoon-orchestrator to orchestrate ETL workflows, which would be a superset of the use cases you described. Our next goal is to allow deployment to AWS Lambda, which can be a good compromise between getting locked in with SaaS and hosting your own infrastructure.
Also check out Zappa's scheduled tasks that have a similar goal and inspired our library.
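For reference, Zappa's scheduled tasks are declared in zappa_settings.json as a function path plus a CloudWatch rate expression; a minimal sketch, with placeholder module and function names:

```json
{
  "production": {
    "app_function": "app.handler",
    "aws_region": "eu-west-1",
    "events": [
      {
        "function": "tasks.send_daily_joke",
        "expression": "rate(1 day)"
      }
    ]
  }
}
```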
- Airflow, you complete me! Compose YAML DAGs for Airflow with auto-complete with Typhoon (Open Source).
- Use Airflow? Composable elegant YAML DAGS that transpile to Airflow. Zero risk and no migration.
What are some alternatives?
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
JokeAPI - REST API that serves uniformly and well formatted jokes in JSON, XML, YAML or plain text format that also offers a great variety of filtering methods
trivy - Find vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
dud - A lightweight CLI tool for versioning data alongside source code and building data pipelines.
astro - Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow. [Moved to: https://github.com/astronomer/astro-sdk]
beneath - Beneath is a serverless real-time data platform ⚡️
astro-sdk - Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow.
tsuru - Open source and extensible Platform as a Service (PaaS).
getting-started - This repository is a getting started guide to Singer.
kestra - Infinitely scalable, event-driven, language-agnostic orchestration and scheduling platform to manage millions of workflows declaratively in code.
jmespath.py - JMESPath is a query language for JSON.