| | seatunnel | Apache Spark |
|---|---|---|
| Mentions | 31 | 101 |
| Stars | 7,388 | 38,378 |
| Growth | 1.0% | 0.6% |
| Activity | 9.8 | 10.0 |
| Latest commit | about 11 hours ago | 5 days ago |
| Language | Java | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seatunnel
- SeaTunnel – super high-performance, distributed data integration tool
- Apache SeaTunnel: Next-generation high-performance, distributed integration tool
- FLaNK Weekly 31 December 2023
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features. (A minimal job-config sketch follows this list of mentions.)
- SymmetricDS: Open-Source, cross platform database replication software
Looks that way. There is another project that does similar things, Apache SeaTunnel: https://seatunnel.apache.org/
- Breakthrough in the book search field! Use Apache SeaTunnel to improve the efficiency of book title similarity search
- Questions Regarding design DW
https://seatunnel.apache.org/ Might be overkill though...
- SeaTunnel Zeta engine, the first choice for massive data synchronization, is officially released!
See the detailed changelog: https://github.com/apache/incubator-seatunnel/releases/tag/2.3.0
- The Ultimate Beginner’s Guide to Open Source Contribution
Apache SeaTunnel (Incubating): SeaTunnel is a very easy-to-use, ultra-high-performance distributed data integration platform that supports real-time synchronization of massive data. It can synchronize tens of billions of records stably and efficiently every day and is used in production at nearly 100 companies. Official website: https://seatunnel.apache.org/ GitHub: https://github.com/apache/incubator-seatunnel
- Major Release! SeaTunnel 2.3.0-beta supports the self-developed SeaTunnel Engine and more connectors!
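To make the source → transform → sink split from the "Five Apache projects" excerpt above concrete, here is a rough sketch of a SeaTunnel job config. The overall shape (env/source/transform/sink blocks in a HOCON file handed to whichever engine runs the job) follows the project docs, but the FakeSource, Sql, and Console connectors and their option names are illustrative and can differ between releases, so treat this as an outline rather than something to copy verbatim.

```hocon
# Minimal SeaTunnel job outline: one source, one optional transform, one sink.
# The same file can be submitted to the Zeta, Spark, or Flink engine.
env {
  job.mode = "BATCH"        # or "STREAMING"
}

source {
  FakeSource {              # built-in test source; swap for JDBC, Kafka, etc.
    result_table_name = "fake"
    schema = {
      fields {
        name = "string"
        age  = "int"
      }
    }
  }
}

transform {
  Sql {                     # optional step between source and sink
    source_table_name = "fake"
    result_table_name = "adults"
    query = "SELECT name, age FROM fake WHERE age >= 18"
  }
}

sink {
  Console {                 # prints rows; swap for Elasticsearch, Hive, etc.
    source_table_name = "adults"
  }
}
```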
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that runs on the JVM. I previously used Scala, a JVM language, to work with Apache Spark for data engineering workloads, but that is for another post 😉.
- 🦿🛴Smarcity garbage reporting automation w/ ollama
Consume data into third-party software (then let OpenSearch, Apache Spark, or Apache Pinot handle it) for analysis/data science, GIS systems (so you can put reports on a map), or any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, as that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
- Spark – A micro framework for creating web applications in Kotlin and Java
A JVM-based framework named "Spark", when https://spark.apache.org exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request merged into the apache/spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
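The "Integrate Pyspark Structured Streaming with confluent-kafka" and "PySpark SparkSession Builder with Kubernetes Master" mentions above fit together in a few lines of PySpark. A minimal sketch, assuming a reachable Kubernetes API server, a published Spark container image, and a Kafka topic; every URL, image name, and topic name below is a placeholder, and the job needs the spark-sql-kafka-0-10 package on its classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Build a session against a Kubernetes master. The k8s:// URL, container
# image, and namespace are placeholders for a real cluster.
spark = (
    SparkSession.builder
    .appName("kafka-stream-on-k8s")
    .master("k8s://https://kubernetes.example.com:6443")
    .config("spark.kubernetes.container.image", "registry.example.com/spark-py:3.5.0")
    .config("spark.kubernetes.namespace", "spark-jobs")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)

# Structured Streaming source reading from a Kafka topic
# (requires the spark-sql-kafka-0-10 package).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1.example.com:9092")
    .option("subscribe", "reports")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka hands over key/value as binary, so cast the payload to a string
# before any further parsing or aggregation.
payloads = events.select(col("value").cast("string").alias("raw"))

# Write micro-batches to the console; swap in a real sink for production use.
query = (
    payloads.writeStream
    .format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```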
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
kestra - Infinitely scalable, event-driven, language-agnostic orchestration and scheduling platform to manage millions of workflows declaratively in code.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Leetcode - Solutions to LeetCode problems; updated daily. Subscribe to my YouTube channel for more.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
hudi - Upserts, Deletes And Incremental Processing on Big Data.
Scalding - A Scala API for Cascading
com.openai.unity - A Non-Official OpenAI Rest Client for Unity (UPM)
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
Apache Hive - Apache Hive
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.