| | flink-kubernetes-operator | Apache Spark |
|---|---|---|
| Mentions | 8 | 101 |
| Stars | 725 | 38,469 |
| Growth | 4.3% | 0.8% |
| Activity | 9.2 | 10.0 |
| Latest commit | 8 days ago | 6 days ago |
| Language | Java | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
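As a worked example of the Growth figure, a minimal sketch of the month-over-month calculation (the prior-month star count here is made up, chosen only so the output matches the table above):

```python
# Minimal sketch of the "Growth" metric described above:
# month-over-month change in GitHub stars, as a percentage.
def mom_growth(stars_now: int, stars_month_ago: int) -> float:
    return (stars_now - stars_month_ago) / stars_month_ago * 100

# Illustrative figures only: 725 is the current star count from the
# table; 695 is a hypothetical prior-month count yielding ~4.3%.
print(f"{mom_growth(725, 695):.1f}%")  # -> 4.3%
```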
flink-kubernetes-operator
- Top 10 Common Data Engineers and Scientists Pain Points in 2024
Data scientists often prefer Python for its simplicity and powerful libraries like Pandas or SciPy. However, many real-time data processing tools, such as Kafka, Flink, and Spark Streaming, are JVM-based. While these tools offer Python API/wrapper libraries, the wrappers add latency, and data scientists must manage dependencies for both the Python and JVM environments. For example, implementing a real-time anomaly detection model in Kafka Streams would require translating Python code into Java, slowing down the pipeline and requiring a complex initial setup.
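To make that boundary cost concrete, here is a minimal PySpark sketch (the anomaly rule and threshold are hypothetical) in which a Python UDF forces every row across the JVM-to-Python boundary, which is exactly the serialization overhead described above:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

spark = SparkSession.builder.appName("anomaly-demo").getOrCreate()

# Hypothetical anomaly rule; a real pipeline might wrap a Pandas or
# SciPy model here. Each call serializes rows JVM -> Python -> JVM,
# which is the overhead the paragraph above describes.
@udf(returnType=BooleanType())
def is_anomaly(value: float) -> bool:
    return value is not None and abs(value) > 3.0

df = spark.createDataFrame([(0.4,), (5.2,), (-3.7,)], ["value"])
df.withColumn("anomaly", is_anomaly("value")).show()
```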
- Choosing Between a Streaming Database and a Stream Processing Framework in Python
Other stream processing engines (such as Flink and Spark Streaming) provide SQL interfaces too, but the key difference is that a streaming database has its own storage. Stream processing engines require a dedicated database to store input and output data. Streaming databases, on the other hand, use cloud-native storage to maintain materialized views and state, allowing data replication and independent storage scaling.
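To illustrate the stream-processing side of that distinction, here is a minimal PyFlink sketch (the connector choices are illustrative): the engine computes over the stream, but the results have to land in some external sink rather than in storage the engine itself owns:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# A stream processing engine computes over the stream but owns no
# storage of its own; results must land in an external system.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# 'datagen' fakes an input stream; 'print' stands in for the
# dedicated database a real job would write to.
t_env.execute_sql("""
    CREATE TABLE readings (
        sensor_id INT,
        reading DOUBLE
    ) WITH ('connector' = 'datagen', 'rows-per-second' = '5')
""")
t_env.execute_sql("""
    CREATE TABLE sink (
        sensor_id INT,
        avg_reading DOUBLE
    ) WITH ('connector' = 'print')
""")
t_env.execute_sql("""
    INSERT INTO sink
    SELECT sensor_id, AVG(reading) FROM readings GROUP BY sensor_id
""").wait()
```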
- FLaNK Stack Weekly 22 January 2024
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge carries over to data engineering, since that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, and Kafka.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform built around the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark and Apache Flink. Be careful, as each engine comes with its own set of features.
- Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
As part of a technology transformation we are planning, we recently started investigating Apache Iceberg. Since our in-house data processing engine is Apache Flink, it was natural to look for an experimental environment that integrates Flink and Iceberg.
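For anyone setting up a similar experimental environment, a catalog registration along these lines is a plausible starting point (a sketch only: the catalog-impl and io-impl class names come from Iceberg's AWS module, and the warehouse path is a placeholder):

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register an Iceberg catalog whose metadata lives in DynamoDB.
# Requires the iceberg-flink-runtime and iceberg-aws jars on the
# classpath; the warehouse path is a placeholder.
t_env.execute_sql("""
    CREATE CATALOG iceberg_catalog WITH (
        'type' = 'iceberg',
        'catalog-impl' = 'org.apache.iceberg.aws.dynamodb.DynamoDbCatalog',
        'io-impl' = 'org.apache.iceberg.aws.s3.S3FileIO',
        'warehouse' = 's3://my-bucket/warehouse'
    )
""")
t_env.execute_sql("USE CATALOG iceberg_catalog")
```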
- FLaNK Stack Weekly for 07August2023
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that runs on the JVM. I previously used Scala, another JVM language, with Apache Spark for data engineering workloads, but that's for another post 😉.
- 🦿🛴Smarcity garbage reporting automation w/ ollama
Consume the data into third-party software for analysis/data science (OpenSearch, Apache Spark, or Apache Pinot), GIS systems (so you can put reports on a map), or any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge carries over to data engineering, since that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, and Kafka.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform built around the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark and Apache Flink. Be careful, as each engine comes with its own set of features.
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
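For context, the Spark side of that integration usually takes roughly this shape (a minimal sketch: the broker address and topic are placeholders, and the job needs the spark-sql-kafka connector package available):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Placeholder broker and topic; Confluent Cloud would additionally
# need SASL/SSL settings passed as extra 'kafka.*' options.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before use.
query = (
    stream.select(col("key").cast("string"), col("value").cast("string"))
    .writeStream.format("console")
    .start()
)
query.awaitTermination()
```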
- Spark – A micro framework for creating web applications in Kotlin and Java
A JVM-based framework named "Spark", when https://spark.apache.org already exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request merged into the apache/spark repository that apparently adds initial Python bindings for Spark on Kubernetes. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
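For reference, the pattern that work was moving toward looks roughly like this in current PySpark (a sketch: the API-server URL and container image are placeholders):

```python
from pyspark.sql import SparkSession

# The 'k8s://' master prefix tells Spark to schedule executors on
# Kubernetes; the API-server URL and image below are placeholders.
spark = (
    SparkSession.builder
    .master("k8s://https://my-apiserver:6443")
    .appName("pyspark-on-k8s")
    .config("spark.kubernetes.container.image", "my-registry/spark-py:latest")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)

print(spark.range(100).count())
spark.stop()
```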
What are some alternatives?
hugging-chat-api - HuggingChat Python API🤗
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
ToolBench - [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language model for tool learning.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
CallCMLModel - An example on calling models deployed in CML
Scalding - A Scala API for Cascading
Qwen-7B - The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud. [Moved to: https://github.com/QwenLM/Qwen]
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
cdf-workshop
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.