seatunnel vs kestra

| | seatunnel | kestra |
|---|---|---|
| Mentions | 31 | 32 |
| Stars | 7,388 | 6,428 |
| Stars growth (monthly) | 1.0% | 7.4% |
| Activity | 9.8 | 9.9 |
| Last commit | about 16 hours ago | about 11 hours ago |
| Language | Java | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seatunnel
- SeaTunnel – super high-performance, distributed data integration tool
- Apache SeaTunnel: Next-generation high-performance, distributed integration tool
- FLaNK Weekly 31 December 2023
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
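The three pillars map one-to-one onto sections of a SeaTunnel job config file. Below is a minimal batch-job sketch in SeaTunnel's HOCON config style; the `FakeSource`, `Sql`, and `Console` connector names follow the documentation, but treat the exact option keys as illustrative rather than verified:

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"   # the same config can run on Zeta, Spark, or Flink
}

source {
  FakeSource {
    result_table_name = "src"
    row.num = 10
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

transform {
  Sql {
    source_table_name = "src"
    result_table_name = "adults"
    query = "select name, age from src where age >= 18"
  }
}

sink {
  Console {
    source_table_name = "adults"
  }
}
```

The engine choice lives in deployment flags rather than the job file, which is what makes the per-engine feature differences easy to miss.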
- SymmetricDS: open-source, cross-platform database replication software
Looks that way. There is another project that does similar things, Apache SeaTunnel: https://seatunnel.apache.org/
- Breakthrough in the book search field! Use Apache SeaTunnel to improve the efficiency of book title similarity search
- Questions regarding DW design
https://seatunnel.apache.org/ might be overkill, though...
- SeaTunnel Zeta engine, the first choice for massive data synchronization, is officially released!
See the full changelog: https://github.com/apache/incubator-seatunnel/releases/tag/2.3.0
- The Ultimate Beginner’s Guide to Open Source Contribution
Apache SeaTunnel (Incubating) is a very easy-to-use, ultra-high-performance distributed data integration platform that supports real-time synchronization of massive data. It can synchronize tens of billions of records stably and efficiently every day, and is used in production at nearly 100 companies. Official website: https://seatunnel.apache.org/ GitHub project: https://github.com/apache/incubator-seatunnel
- Major Release! SeaTunnel 2.3.0-beta supports the self-developed SeaTunnel Engine and more connectors!
kestra
- A High-Performance, Java-Based Orchestration Platform
Kestra's communication is asynchronous and based on a queuing mechanism. It leverages the Micronaut framework and offers two runners: one that uses a database (JDBC) for both the message queue and resource storage, and another that uses Kafka as the message queue and Elasticsearch as the resource storage. The platform is fully extensible and plugin-based, providing a rich set of plugins for various workflow tasks, triggers, and data storage options. For those interested, the GitHub repository is available here: https://github.com/kestra-io/kestra
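Regardless of which runner backs the queue, flows themselves are declared in YAML. A minimal flow sketch with one task and a cron trigger follows; note that the fully qualified type identifiers have moved between Kestra releases (newer versions use the `io.kestra.plugin.core.*` namespace), so the ones below are indicative:

```yaml
id: hello-orchestrator
namespace: demo

tasks:
  - id: log-message
    # Core log task; in recent releases the type moved to
    # io.kestra.plugin.core.log.Log.
    type: io.kestra.core.tasks.log.Log
    message: "Hello from Kestra"

triggers:
  - id: nightly
    # Cron-based schedule trigger, runs at 02:00 every day.
    type: io.kestra.core.models.triggers.types.Schedule
    cron: "0 2 * * *"
```

Because every task is just an `id` plus a plugin `type` and its properties, the plugin ecosystem extends the platform without changes to the flow syntax.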
- Kestra is an open-source data orchestration platform for complex workflows
- YAML-based data orchestrator
- Kestra
- Introduction to Kestra, the open source data orchestration and scheduling platform
For everyone wondering: https://github.com/kestra-io/kestra/discussions/468
- Snowflake data pipeline with Kestra
If you need any guidance with your Snowflake deployment, our experts at Kestra would love to hear from you. Let us know if you would like us to add more plugins to the list. Or start building your custom Kestra plugin today and send it our way. We always welcome contributions!
- Airflow's Problem
But I totally agree that a large static DAG is not appropriate in the actual data world with data mesh and domain responsibility.
[0] https://github.com/kestra-io/kestra
- Ask HN: Open-source with Kafka as dependencies, is this an instant turn-off?
- We have plans to add another option that will replace both dependencies with JDBC (https://github.com/kestra-io/kestra/pull/368); would these dependencies be more comfortable for you?
- ELT vs ETL: Why not both?
With Kestra's innate flexibility and many integrations, you are not locked into one ingestion method or the other. Complex workflows can be developed, in parallel or sequentially, to deliver both ELT and ETL processes. Simple, descriptive YAML is used to connect plugins and to create flows. Because workflows created in Kestra are represented visually, and issues can be seen in relation to individual tasks, there is no need to fear complexity. Trouble can be traced to its source in an instant, letting you try new things and iterate toward a solution without fear. Give it a try, and let us know what you come up with!
- Debezium Change Data Capture without Kafka Connect
Kestra is an orchestration and scheduling platform that is designed to simplify the building, running, scheduling, and monitoring of complex data pipelines. Data pipelines can be built in real-time, no matter how complex the workflow, and can connect to multiple resources as needed (including Debezium).
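As a sketch of what running Debezium from a Kestra flow might look like, here is a minimal capture task; the task type and property names below are assumptions based on the Debezium plugin's naming conventions, so check the kestra-io plugin-debezium documentation before relying on them:

```yaml
id: mysql-cdc
namespace: demo

tasks:
  - id: capture
    # Hypothetical task type and properties; verify against the
    # plugin-debezium docs for your Kestra version.
    type: io.kestra.plugin.debezium.mysql.Capture
    hostname: mysql.example.internal
    port: "3306"
    username: debezium
    password: "{{ secret('MYSQL_PASSWORD') }}"
    # Stop the capture after a bounded number of change events,
    # so the task completes instead of streaming forever.
    maxRecords: 100
```

Downstream tasks in the same flow can then consume the captured change records, which is how the pipeline stays Kafka Connect-free.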
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
conductor - Conductor is a microservices orchestration engine.
Leetcode - Solutions to LeetCode problems; updated daily. Subscribe to my YouTube channel for more.
zeebe - Distributed Workflow Engine for Microservices Orchestration
hudi - Upserts, Deletes And Incremental Processing on Big Data.
kogito-runtimes - This repository is a fork of apache/incubator-kie-kogito-runtimes. Please use upstream repository for development.
com.openai.unity - A Non-Official OpenAI Rest Client for Unity (UPM)
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
Apache Hive - Apache Hive
akhq - Kafka GUI for Apache Kafka to manage topics, topics data, consumers group, schema registry, connect and more...
apisix-helm-chart - Apache APISIX Helm Chart
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.