beam vs data-engineer-roadmap

| | beam | data-engineer-roadmap |
|---|---|---|
| Mentions | 30 | 68 |
| Stars | 7,477 | 11,789 |
| Growth | 1.0% | 0.0% |
| Activity | 10.0 | 0.0 |
| Latest commit | 5 days ago | about 2 years ago |
| Language | Java | - |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
beam
-
Ask HN: Does (or why does) anyone use MapReduce anymore?
The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/97814.... It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.).
As for the framework called MapReduce, it isn't used much, but its descendant https://beam.apache.org very much is. Nowadays people often use "map reduce" as a shorthand for whatever batch processing system they're building on top of.
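The map/reduce model the thread refers to can be sketched in a few lines of plain Python — a toy word count, not tied to MapReduce or Beam themselves, just illustrating the map, shuffle (group-by-key), and reduce phases:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(docs):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in docs:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle: sort and group pairs by key; Reduce: sum the counts per word.
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

counts = dict(reduce_phase(map_phase(["big data", "big pipelines"])))
print(counts)  # {'big': 2, 'data': 1, 'pipelines': 1}
```

In a real framework the map and reduce phases run in parallel across many workers, and the shuffle moves data between them; the single-process version above only shows the dataflow shape.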
-
beam VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
-
How do Streaming Aggregation Pipelines work?
Apache Beam is one of many tools that you can use
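As a rough illustration of what a streaming aggregation pipeline computes, here is a toy tumbling-window sum in plain Python over hypothetical (timestamp, value) events. This is only a sketch of the windowing idea — real systems like Beam add watermarks, triggers, and late-data handling on top of it:

```python
from collections import defaultdict

def tumbling_window_sum(events, window_size):
    """Aggregate (timestamp, value) events into fixed, non-overlapping windows."""
    windows = defaultdict(int)
    for ts, value in events:
        window_start = ts - (ts % window_size)  # bucket the event into its window
        windows[window_start] += value
    return dict(windows)

# Hypothetical events: (epoch seconds, amount), aggregated into 60-second windows.
events = [(0, 5), (30, 2), (65, 7), (119, 1), (120, 4)]
print(tumbling_window_sum(events, 60))  # {0: 7, 60: 8, 120: 4}
```

A streaming engine does the same bucketing incrementally as events arrive, and must decide when a window is "done" — which is exactly where watermarks and triggers come in.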
-
Releasing Temporian, a Python library for processing temporal data, built together with Google
Flexible runtime ☁️: Temporian programs can run seamlessly in-process in Python, or on large datasets using Apache Beam.
-
Kafka cluster loses or duplicates messages
To perform the tests I'm using a Kafka cluster on Kubernetes from the Beam repo (here).
- Apache Beam
-
Real Time Data Infra Stack
Apache Beam: Streaming framework which can run on several runners such as Apache Flink and GCP Dataflow
-
Google Cloud Reference
Apache Beam: Batch/streaming data processing
-
Composer out of resources - "INFO Task exited with return code Negsignal.SIGKILL"
What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB, to Exabyte scale systems -- it can do it all.
-
Pub/Sub parallel processing best practices
That being said, there is a learning curve in understanding how Apache Beam works. Take a look at the beam website for more information.
data-engineer-roadmap
- Question about data engineering?
-
How should I start learning/implementing DevOps in data engineering projects?
In DevOps tools I've worked with GitHub + Jenkins, GitLab + k8s, and I'm now primarily working in the Argo Stack. Depending on where you're at technically, you might use something different. IaC is a must as well, maybe some config management. Generally I've found that as a Data Engineer with a lot of infra/CICD knowledge, I get pigeonholed into those positions on a team, so be prepared for that. I really like this roadmap for DevOps, so you can see where your tech skills are at currently, and what you may need to learn. On top of that, you'll need to learn some data tools. Airflow + dbt is hot right now, Argo is sometimes used in MLOps, Azure Data Stack (I'm not familiar with it) seems common, and probably Spark in almost all cases. You can also check out visualization tools further down the line; I generally stick to something free when learning on my own, Superset or Google Data Studio (might be Looker Studio now? Not sure, it's been a while). Here's a roadmap for DE too. I love these roadmaps for getting started, but don't let them distract you from exploring a path more appropriate to what you want to achieve.
- What is roadmap to enter into data engineering?
- Need help on Data Engineering Roadmap
-
Woman interested in data engineering with Python background
Anyways, sorry, bit of a rant - I land somewhere in the middle. I would say take formal classes and resources when you can. If you have access to a free course a semester, that's incredible in my opinion. If I were in your shoes, I would follow a roadmap and see if there are courses that check off a box in that roadmap. So for example, you know you need to learn CS fundamentals - see if you can take a DSA class or something. Or take a class on databases, or an OOP class. I would take those classes if I had the opportunity just because I didn't when I was in college. No one course will check every box for sure.
- 1 Year Development Plan
- How to utilise SQL/Data engineering skills
-
Got my first DE role as a JR
I don't remember all of the names of the courses, but I think this roadmap can put you in the right direction: https://github.com/datastacktv/data-engineer-roadmap
- What things must I master as a data engineer?
-
What do you do professionally and how much do you earn?
You can follow this roadmap: https://github.com/datastacktv/data-engineer-roadmap I have already replied to some redditors with suggestions, you can read them.
What are some alternatives?
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
golang-developer-roadmap - Roadmap to becoming a Go developer in 2020
Apache Hadoop - Apache Hadoop
developer-roadmap - Interactive roadmaps, guides and other educational content to help developers grow in their careers.
Scio - A Scala API for Apache Beam and Google Cloud Dataflow.
Data-Science-Roadmap - Data Science Roadmap from A to Z
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
adventofcode - :christmas_tree: Advent of Code (2015-2023) in C#
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
materialize - The data warehouse for operational workloads.
Apache Hive - Apache Hive
Apache HBase - Apache HBase