kestra
orchest
| | kestra | orchest |
|---|---|---|
| Mentions | 32 | 44 |
| Stars | 6,340 | 4,020 |
| Growth | 14.7% | 0.2% |
| Activity | 9.9 | 4.5 |
| Latest Commit | 5 days ago | 11 months ago |
| Language | Java | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kestra
- A High-Performance, Java-Based Orchestration Platform
Kestra's communication is asynchronous and based on a queuing mechanism. It leverages the Micronaut framework and offers two runners: one that uses a database (JDBC) for both the message queue and resource storage, and another that uses Kafka as the message queue and Elasticsearch as the resource storage. The platform is fully extensible and plugin-based, providing a rich set of plugins for various workflow tasks, triggers, and data storage options. For those interested, the GitHub repository is available here: https://github.com/kestra-io/kestra
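For context, a Kestra workflow is declared in YAML, with each task delegating to a plugin. Below is a minimal sketch of a flow; the namespace and ids are placeholders, and the exact task type path may differ between Kestra versions:

```yaml
id: hello-world          # unique flow identifier (placeholder)
namespace: demo.team     # logical grouping (placeholder)
tasks:
  - id: say-hello
    type: io.kestra.core.tasks.log.Log   # core log task; type path may vary by version
    message: Hello from Kestra
```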
- Kestra is an open-source data orchestration platform for complex workflows
- YAML-based data orchestrator
- Kestra
- Introduction to Kestra, the open source data orchestration and scheduling platform
For everyone wondering: https://github.com/kestra-io/kestra/discussions/468
- Snowflake data pipeline with Kestra
If you need any guidance with your Snowflake deployment, our experts at Kestra would love to hear from you. Let us know if you would like us to add more plugins to the list. Or start building your custom Kestra plugin today and send it our way. We always welcome contributions!
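As a rough illustration of what such a pipeline could look like, here is a hedged sketch using the Snowflake JDBC plugin; the connection values, credentials, and SQL statement are all placeholders, and property names should be checked against the plugin documentation:

```yaml
id: snowflake-load
namespace: demo.team
tasks:
  - id: run-query
    type: io.kestra.plugin.jdbc.snowflake.Query   # Snowflake JDBC plugin task
    url: jdbc:snowflake://myaccount.snowflakecomputing.com   # placeholder account URL
    username: loader        # placeholder; use a real user
    password: "********"    # placeholder; inject a secret in practice
    sql: COPY INTO analytics.orders FROM @orders_stage   # placeholder statement
```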
- Airflow's Problem
But I totally agree that a large static DAG is not appropriate in today's data world, with data mesh and domain-level responsibility.
[0] https://github.com/kestra-io/kestra
- Ask HN: Open-source with Kafka as dependencies, is this an instant turn-off?
- We have plans to add another option that will replace both dependencies with JDBC (https://github.com/kestra-io/kestra/pull/368). Would these dependencies be more comfortable for you?
- ELT vs ETL: Why not both?
With Kestra's innate flexibility and many integrations, you are not locked into one ingestion method or the other. Complex workflows can be developed, whether in parallel or sequentially, to deliver both ELT and ETL processes. Simple, descriptive YAML is used to connect plugins and create flows. Because workflows created in Kestra are represented visually, and issues can be seen in relation to individual tasks, there is no need to fear complexity: trouble can be traced to its source in an instant, letting you try new things and iterate toward a solution. Give it a try, and let us know what you come up with!
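To make the "parallel or sequential" point concrete, here is a hedged sketch of a flow that runs an ELT branch and an ETL branch side by side with the core `Parallel` flowable task; the branch bodies are stand-in log tasks, not real ingestion steps:

```yaml
id: elt-and-etl
namespace: demo.team
tasks:
  - id: both-styles
    type: io.kestra.core.tasks.flows.Parallel   # flowable task: runs child tasks concurrently
    tasks:
      - id: elt-branch
        type: io.kestra.core.tasks.log.Log
        message: load raw data first, transform inside the warehouse   # stand-in for real ELT tasks
      - id: etl-branch
        type: io.kestra.core.tasks.log.Log
        message: transform in flight, then load the result   # stand-in for real ETL tasks
```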
- Debezium Change Data Capture without Kafka Connect
Kestra is an orchestration and scheduling platform designed to simplify building, running, scheduling, and monitoring complex data pipelines. Pipelines can process data in real time, no matter how complex the workflow, and can connect to multiple resources as needed (including Debezium).
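As a rough sketch of that idea, a flow can capture MySQL change events directly through the Debezium plugin's embedded engine, with no Kafka Connect cluster in between; the host, credentials, and property names here are illustrative and should be verified against the plugin docs:

```yaml
id: mysql-cdc
namespace: demo.team
tasks:
  - id: capture-changes
    type: io.kestra.plugin.debezium.mysql.Capture   # embedded Debezium engine, no Kafka Connect
    hostname: mysql.internal   # placeholder host
    port: "3306"
    username: cdc_user         # placeholder credentials
    password: "********"
    maxRecords: 100            # stop the capture after a batch of change events
```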
orchest
- Decent low code options for orchestration and building data flows?
You can check out our OSS https://github.com/orchest/orchest
- Build ML workflows with Jupyter notebooks
- Building container images in Kubernetes, how would you approach it?
The code example is part of our ELT/data pipeline tool called Orchest: https://github.com/orchest/orchest/
- Launch HN: Patterns (YC S21) – A much faster way to build and deploy data apps
First want to say congrats to the Patterns team for creating a gorgeous looking tool. Very minimal and approachable. Massive kudos!
Disclaimer: we're building something very similar and I'm curious about a couple of things.
One of the questions our users have asked us often is how to minimize the dependence on "product specific" components/nodes/steps. For example, if you write CI for GitHub Actions you may use a bunch of GitHub Action references.
Looking at the `graph.yml` in some of the examples you shared, you use a similar approach (e.g. patterns/openai-completion@v4). That means that whenever you depend on such components, your automation/data pipeline becomes more tied to the specific tool (GitHub Actions/Patterns), effectively locking in users.
How are you helping users feel comfortable with that problem (I don't want to invest in something that's not portable)? It's something we've struggled with ourselves as we're expanding the "out of the box" capabilities you get.
Furthermore, would have loved to see this as an open source project. But I guess the second best thing to open source is some open source contributions and `dcp` and `common-model` look quite interesting!
For those who are curious, I'm one of the authors of https://github.com/orchest/orchest
- Argo became a graduated CNCF project
Haven't tried it. In its favor, Argo is vendor neutral and is really easy to set up in a local k8s environment like Docker Desktop or minikube. If you already use k8s for configuration, service discovery, secret management, etc., it's dead simple to set up and use (avoiding having to learn a whole new workflow configuration language in addition to k8s). The big downside is that it doesn't have a visual DAG editor (although that might be a positive for engineers having to fix workflows written by non-programmers), but the relatively bare-metal nature of Argo means that it's fairly easy to use it as an underlying engine for a more opinionated or lower-code framework (orchest is a notable one out now).
- Ideas for infrastructure and tooling to use for frequent model retraining?
- Looking for a mentor in MLOps. I am a lead developer.
If you’d like to try something for your data workflows that’s vendor agnostic (k8s based) and open source, you can check out our project: https://github.com/orchest/orchest
- Is there a good way to trigger data pipelines by event instead of cron?
You can find it here: https://github.com/orchest/orchest Convenience install script: https://github.com/orchest/orchest#installation
- How do you deal with parallelising parts of an ML pipeline especially on Python?
We automatically provide container level parallelism in Orchest: https://github.com/orchest/orchest
- Launch HN: Sematic (YC S22) – Open-source framework to build ML pipelines faster
For people in this thread interested in what this tool is an alternative to: Airflow, Luigi, Kubeflow, Kedro, Flyte, Metaflow, Sagemaker Pipelines, GCP Vertex Workbench, Azure Data Factory, Azure ML, Dagster, DVC, ClearML, Prefect, Pachyderm, and Orchest.
Disclaimer: author of Orchest https://github.com/orchest/orchest
What are some alternatives?
conductor - Conductor is a microservices orchestration engine.
docker-airflow - Docker Apache Airflow
zeebe - Distributed Workflow Engine for Microservices Orchestration
hookdeck-cli - Manage your Hookdeck workspaces, connections, transformations, filters, and more with the Hookdeck CLI
kogito-runtimes - This repository is a fork of apache/incubator-kie-kogito-runtimes. Please use upstream repository for development.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
akhq - Kafka GUI for Apache Kafka to manage topics, topics data, consumers group, schema registry, connect and more...
label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
Node RED - Low-code programming for event-driven applications