beam
codec-jvm
| | beam | codec-jvm |
|---|---|---|
| Mentions | 30 | 1 |
| Stars | 7,508 | 32 |
| Growth | 1.5% | - |
| Activity | 10.0 | 0.0 |
| Latest commit | 5 days ago | about 5 years ago |
| Language | Java | Haskell |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
beam
- Ask HN: Does (or why does) anyone use MapReduce anymore?
The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/97814.... It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.).
As for the framework called MapReduce, it isn't used much, but its descendant https://beam.apache.org very much is. Nowadays people often use "map reduce" as a shorthand for whatever batch processing system they're building on top of.
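To make that lineage concrete, here is a minimal sketch of the classic map/reduce word-count pattern expressed as a Beam pipeline with the Java SDK; the input and output file names are placeholders, and the class name is made up for the example:

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadLines", TextIO.read().from("input.txt"))          // placeholder input file
        // "Map" phase: split each line into words.
        .apply("SplitWords", FlatMapElements
            .into(TypeDescriptors.strings())
            .via((String line) -> Arrays.asList(line.split("\\W+"))))
        .apply("DropEmpty", Filter.by((String word) -> !word.isEmpty()))
        // "Reduce" phase: group identical words and count them.
        .apply("CountWords", Count.perElement())
        .apply("Format", MapElements
            .into(TypeDescriptors.strings())
            .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
        .apply("WriteCounts", TextIO.write().to("word-counts"));   // placeholder output prefix

    p.run().waitUntilFinish();
  }
}
```

The same transforms can run on a batch or streaming runner without change, which is the main way Beam generalizes the original MapReduce model.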
- beam VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- How do Streaming Aggregation Pipelines work?
Apache Beam is one of many tools that you can use
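As a rough illustration of what a streaming aggregation looks like in Beam's Java SDK, the sketch below puts a keyed stream into one-minute fixed windows and sums the values per key; the element type and the method name are assumptions made for the example, not anything from the linked post:

```java
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class StreamingAggregationSketch {
  // "events" is assumed to be an unbounded PCollection of (key, amount) pairs,
  // e.g. read from Kafka or Pub/Sub earlier in the pipeline.
  static PCollection<KV<String, Long>> sumPerKey(PCollection<KV<String, Long>> events) {
    return events
        // Assign each element to a one-minute event-time window.
        .apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1))))
        // Group by key within each window and sum the values; with the default
        // trigger, results are emitted once the watermark passes the end of the window.
        .apply(Sum.longsPerKey());
  }
}
```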
- Releasing Temporian, a Python library for processing temporal data, built together with Google
Flexible runtime ☁️: Temporian programs can run seamlessly in-process in Python, or on large datasets using Apache Beam.
- Kafka cluster loses or duplicates messages
To perform the tests I'm using a Kafka cluster on Kubernetes from the Beam repo (here).
- Apache Beam
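For context, consuming from Kafka in a Beam pipeline usually goes through the KafkaIO connector (the `beam-sdks-java-io-kafka` module); the broker address and topic below are placeholders, not values from the linked setup:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Values;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaReadSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadFromKafka", KafkaIO.<Long, String>read()
            .withBootstrapServers("kafka:9092")              // placeholder broker address
            .withTopic("test-topic")                         // placeholder topic
            .withKeyDeserializer(LongDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withoutMetadata())                              // yields KV<Long, String> records
        .apply("PayloadsOnly", Values.<String>create());     // keep only the message values

    p.run().waitUntilFinish();
  }
}
```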
- Real Time Data Infra Stack
Apache Beam: Streaming framework which can run on several runners, such as Apache Flink and GCP Dataflow
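A detail worth spelling out: the runner is selected through pipeline options rather than code, so the same program can target the local Direct runner, Flink, or Dataflow as long as the matching runner dependency is on the classpath. A minimal sketch:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RunnerSelectionSketch {
  public static void main(String[] args) {
    // The runner comes from the command line, e.g.
    //   --runner=DirectRunner   (local testing)
    //   --runner=FlinkRunner    (Apache Flink cluster)
    //   --runner=DataflowRunner (GCP Dataflow, plus --project/--region/--tempLocation)
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline pipeline = Pipeline.create(options);

    // ... the same transforms run unchanged on any of the runners ...

    pipeline.run().waitUntilFinish();
  }
}
```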
- Google Cloud Reference
Apache Beam: Batch/streaming data processing 🔗Link
- Composer out of resources - "INFO Task exited with return code Negsignal.SIGKILL"
What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB to exabyte-scale systems; it can do it all.
- Pub/Sub parallel processing best practices
That being said, there is a learning curve in understanding how Apache Beam works. Take a look at the Beam website for more information.
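As a starting point, a Pub/Sub-fed Beam pipeline typically reads through the PubsubIO connector from the GCP IO module, and the runner handles parallelizing the reads and acknowledging messages once they are safely checkpointed. A minimal sketch with placeholder project and subscription names:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.StreamingOptions;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class PubSubReadSketch {
  public static void main(String[] args) {
    StreamingOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(StreamingOptions.class);
    options.setStreaming(true);
    Pipeline p = Pipeline.create(options);

    p.apply("ReadFromPubSub", PubsubIO.readStrings()
            .fromSubscription("projects/my-project/subscriptions/my-subscription"))
        // Stand-in for the real per-message processing logic.
        .apply("Process", MapElements
            .into(TypeDescriptors.strings())
            .via((String msg) -> msg.toUpperCase()));

    p.run().waitUntilFinish();
  }
}
```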
codec-jvm
- Ecosystem: Haskell vs JVM (Eta, Frege)
The core of eta-lang was codec-jvm (https://github.com/rahulmutt/codec-jvm), and I think the work done there and in eta-lang itself could be put to good use.
What are some alternatives?
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
inline-java - Haskell/Java interop via inline Java code in Haskell modules.
Apache Hadoop - A framework for distributed storage and processing of large datasets across clusters of machines
Scio - A Scala API for Apache Beam and Google Cloud Dataflow.
Apache Spark - A unified analytics engine for large-scale data processing
Airflow - A platform to programmatically author, schedule, and monitor workflows
Apache Hive - A data warehouse system for querying and managing large datasets, built on top of Apache Hadoop
Apache Accumulo - A sorted, distributed key/value store based on the design of Google's Bigtable
Apache HBase - A distributed, scalable big data store modeled after Google's Bigtable
Ruby on Rails - A full-stack web application framework written in Ruby
data-engineer-roadmap - Roadmap to becoming a data engineer in 2021