data-engineer-roadmap vs Apache Hadoop

|  | data-engineer-roadmap | Apache Hadoop |
| --- | --- | --- |
| Mentions | 68 | 26 |
| Stars | 11,939 | 14,316 |
| Growth | 1.3% | 0.8% |
| Activity | 0.0 | 9.9 |
| Latest commit | about 2 years ago | about 7 hours ago |
| Language | - | Java |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data-engineer-roadmap
- Question about data engineering?
- How should I start learning/implementing DevOps in data engineering projects?
In DevOps tools I've worked with GitHub + Jenkins, GitLab + k8s, and I'm now primarily working in the Argo stack. Depending on where you're at technically, you might use something different. IaC is a must as well, maybe some config management. Generally I've found that as a Data Engineer with a lot of infra/CI/CD knowledge, I get pigeonholed into those positions on a team, so be prepared for that. I really like this roadmap for DevOps, so you can see where your tech skills are at currently and what you may need to learn. On top of that, you'll need to learn some data tools. Airflow + dbt is hot right now, Argo is sometimes used in MLOps, the Azure data stack (I'm not familiar with it) seems common, and probably Spark in almost all cases. You can also check out visualization tools further down the line; I generally stick to something free when learning on my own, like Superset or Google Data Studio (might be Looker Studio now? Not sure, it's been a while). Here's a roadmap for DE too. I love these roadmaps for getting started, but don't let them distract you from exploring a path more appropriate to what you want to achieve.
- What is roadmap to enter into data engineering?
- Need help on Data Engineering Roadmap
- Woman interested in data engineering with Python background
Anyways, sorry, bit of a rant - I land somewhere in the middle. I would say take formal classes and resources when you can. If you have access to a free course each semester, that's incredible in my opinion. If I were in your shoes, I would follow a roadmap and see if there are courses that check off a box in that roadmap. For example, you know you need to learn CS fundamentals - see if you can take a DSA class, or a class on databases or OOP. I would take those classes if I had the opportunity, just because I didn't when I was in college. No one course will check every box, for sure.
- 1 Year Development Plan
- How to utilise SQL/Data engineering skills
- Got my first DE role as a JR
I don't remember all of the names of the courses, but I think this roadmap can put you in the right direction: https://github.com/datastacktv/data-engineer-roadmap
- What things must I master as a data engineer?
- What do you do professionally and how much do you earn?
You can follow this roadmap: https://github.com/datastacktv/data-engineer-roadmap I have already replied to some redditors with suggestions; you can read them.
Apache Hadoop
- Getting thousands of files of output back from a container
Did you check out tools like https://hadoop.apache.org/?
- Trying to run hadoop using docker
Check out the various Dockerfiles bundled with Hadoop on GitHub; you can point to them from within docker-compose. They haven't been updated in a couple of years, though.
- Unveiling the Analytics Industry in Bangalore
- 5 Best Practices For Data Integration To Boost ROI And Efficiency
There are different ways to implement parallel dataflows, such as using parallel data processing frameworks like Apache Hadoop, Apache Spark, and Apache Flink, or using cloud-based services like Amazon EMR and Google Cloud Dataflow. It is also possible to handle big data and distributed computing with dataflow tools like Apache NiFi and Apache Kafka.
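The common thread in all of these frameworks is the same: partition the input, apply the same transformation to each partition in parallel, and gather the results. A toy sketch of that idea using only Python's standard library (no Spark, Flink, or Hadoop involved; `transform` is a hypothetical per-record function standing in for a real map stage):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    # Per-record work; in Spark or Flink this would be a map stage.
    return record * record

def run_parallel(records, workers=4):
    # Fan the input out across workers, mirroring how a dataflow
    # framework partitions work, then gather results in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transform, records))

print(run_parallel(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The real frameworks add what this sketch lacks: distribution across machines, fault tolerance, and shuffles between stages.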
- Hadoop or Spark?
- Data Engineering and DataOps: A Beginner's Guide to Building Data Solutions and Solving Real-World Challenges
There are several frameworks available for batch processing, such as Hadoop, Apache Storm, and DataTorrent RTS.
- Effortlessly Set Up a Hadoop Multi-Node Cluster on Windows Machines with Our Step-by-Step Guide
A copy of Hadoop installed on each of these machines. You can download Hadoop from the Apache website, or you can use a distribution like Cloudera or Hortonworks.
- In One Minute : Hadoop
The Apache™ Hadoop™ project develops open-source software for reliable, scalable, distributed computing.
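The programming model at Hadoop's core is MapReduce: map each input record to key/value pairs, shuffle pairs so that each key's values are grouped together, then reduce each group. A minimal single-process Python sketch of those three phases, using the classic word-count example (purely illustrative; real Hadoop runs these phases across a cluster over HDFS):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's grouped values into a final count.
    return {key: sum(values) for key, values in groups.items()}

lines = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(shuffle(pairs))
print(counts["the"], counts["fox"])  # 3 2
```

Because map and reduce operate on independent keys, the framework can run them on thousands of machines and rerun only the tasks that fail, which is what "reliable, scalable, distributed" means in practice.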
- Elon Musk dissolves Twitter's board of directors
So, clearly with your AP CS class and PLC logic knowledge, if you were dumped into a codebase like Hadoop, QT, or TensorFlow you'd be able to quickly and competently analyze what is going on with that code, understand all the libraries used, know the reasons why certain compromises were made, and be able to make suggestions on how to restructure the code in a different way? Because I've been programming for coming up on two decades and unless a system is within the domains that I have experience in, I would not be able to provide any useful information without a massive onboarding timeline, and definitely wouldn't be able to help redesign anything until actually coding within the system for a significant amount of time.
- A peek into Location Data Science at Ola
This requires distributed computation tools such as Spark and Hadoop; Flink and Kafka are also used. But for occasional experimentation, Pandas, GeoPandas, and Dask are some of the commonly used tools.
What are some alternatives?
golang-developer-roadmap - Roadmap to becoming a Go developer in 2020
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
developer-roadmap - Interactive roadmaps, guides and other educational content to help developers grow in their careers.
Ceph - Ceph is a distributed object, block, and file storage platform
Data-Science-Roadmap - Data Science Roadmap from A to Z
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
adventofcode - :christmas_tree: Advent of Code (2015-2023) in C#
Weka
materialize - The data warehouse for operational workloads.
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Apache HBase - Apache HBase
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017