Apache Hadoop
dbt-core
| | Apache Hadoop | dbt-core |
|---|---|---|
| Mentions | 26 | 86 |
| Stars | 14,316 | 8,906 |
| Growth | 0.9% | 3.4% |
| Activity | 9.9 | 9.7 |
| Latest commit | 4 days ago | about 7 hours ago |
| Language | Java | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Hadoop
- Getting thousands of files of output back from a container
Did you check out tools like https://hadoop.apache.org/ ?
- Trying to run hadoop using docker
Check out the various Dockerfiles bundled with Hadoop on GitHub; you can point to them from within docker-compose. They haven't been updated in a couple of years, though.
- Unveiling the Analytics Industry in Bangalore
- 5 Best Practices For Data Integration To Boost ROI And Efficiency
There are different ways to implement parallel dataflows, such as using parallel data processing frameworks like Apache Hadoop, Apache Spark, and Apache Flink, or using cloud-based services like Amazon EMR and Google Cloud Dataflow. It is also possible to handle big data and distributed computing with dataflow and streaming frameworks like Apache NiFi and Apache Kafka.
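For a concrete sense of what a parallel dataflow looks like in practice, here is a minimal PySpark sketch; the input path and column names are hypothetical, and the same pattern would apply on Amazon EMR or a self-managed cluster:

```python
from pyspark.sql import SparkSession

# Hypothetical input path and columns, for illustration only
spark = SparkSession.builder.appName("parallel-dataflow").getOrCreate()

events = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# The groupBy/count runs in parallel across the cluster's partitions
counts = events.groupBy("user_id").count()
counts.write.mode("overwrite").parquet("hdfs:///data/event_counts")

spark.stop()
```

The parallelism comes for free from partitioning the data; the same dataflow could be expressed in Flink without changing its structure.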
- Hadoop or Spark?
- Data Engineering and DataOps: A Beginner's Guide to Building Data Solutions and Solving Real-World Challenges
There are several frameworks available for batch processing, such as Hadoop, Apache Storm, and DataTorrent RTS.
- Effortlessly Set Up a Hadoop Multi-Node Cluster on Windows Machines with Our Step-by-Step Guide
A copy of Hadoop installed on each of these machines. You can download Hadoop from the Apache website, or you can use a distribution like Cloudera or Hortonworks.
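Once the cluster is up, a quick sanity check is to list HDFS from Python. A hedged sketch using pyarrow's HadoopFileSystem (the NameNode host and port are assumptions, and the machine running it needs the Hadoop client libraries, i.e. libhdfs, installed):

```python
from pyarrow import fs

# Assumed NameNode host/port; adjust to match your cluster's core-site.xml
hdfs = fs.HadoopFileSystem(host="namenode", port=8020)

# Listing the filesystem root confirms the cluster responds
for info in hdfs.get_file_info(fs.FileSelector("/")):
    print(info.path, info.type)
```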
- In One Minute: Hadoop
The Apache™ Hadoop™ project develops open-source software for reliable, scalable, distributed computing.
- Elon Musk dissolves Twitter's board of directors
So, clearly, with your AP CS class and PLC logic knowledge, if you were dumped into a codebase like Hadoop, Qt, or TensorFlow, you'd be able to quickly and competently analyze what is going on with that code, understand all the libraries used, know the reasons why certain compromises were made, and be able to make suggestions on how to restructure the code in a different way? Because I've been programming for coming up on two decades, and unless a system is within the domains I have experience in, I would not be able to provide any useful information without a massive onboarding timeline, and I definitely wouldn't be able to help redesign anything until I had actually coded within the system for a significant amount of time.
- A peek into Location Data Science at Ola
This requires distributed computation tools; Spark, Hadoop, Flink, and Kafka are commonly used. But for occasional experimentation, Pandas, GeoPandas, and Dask are some of the commonly used tools.
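To illustrate the "occasional experimentation" side, here is a minimal Dask sketch (file names and columns are hypothetical); it keeps the familiar pandas API while parallelizing the work across local cores:

```python
import dask.dataframe as dd

# Hypothetical ride data split across many CSV files
rides = dd.read_csv("rides-*.csv")

# Lazy, partitioned computation; .compute() triggers parallel execution
mean_distance = rides.groupby("city")["distance_km"].mean().compute()
print(mean_distance)
```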
dbt-core
- Dbt
- Relational is more than SQL
dbt integration was one of our major goals early on, but we found that the interaction wasn't as straightforward as we had hoped.
There is an open PR in the dbt repo: https://github.com/dbt-labs/dbt-core/pull/5982#issuecomment-...
I have some ideas about future directions in this space where I believe PRQL could really shine. I will only be able to write those down in a couple of hours. I think this could be a really exciting direction for the project to grow into if anyone would like to collaborate and contribute!
- How to Level Up Beyond ETLs: From Query Optimization to Code Generation
> Could you share more specific details? Happy to look over / revise where needed.
Sure thing! I'd say, first off, that the solutions may look different for a small company/startup vs. a large enterprise. It can help if you explain the scale you are solving for.
On the enterprise side of things, they tend to buy solutions rather than build them in-house. Things like Informatica, Talend, etc. are common for large enterprises whose primary products are not data or software related. They just don't have the will, expertise, or the capital to invest in building and maintaining these solutions in-house so they just buy them off the shelf. On the surface, these are very expensive products, but even in the face of that it can still make sense for large enterprises in terms of the bottom line to buy rather than build.
For startups and smaller companies, have you looked at something like `dbt` (https://github.com/dbt-labs/dbt-core)? I understand the desire to write some code, but often there are already existing solutions for the problems you might be encountering.
ORMs should typically only exist on the consumer side of the equation, if at all. A lot of business intelligence / business analyst users are just going to use tools like Tableau and hook up to the data warehouse via a connector to visualize their data. You might have some consumers who are more sophisticated and want to write custom post-processing or aggregation code, and they could certainly use ORMs if they choose, but it isn't something you should enforce on them: it's a poor place to validate data, since, as mentioned, there are different ways and tools to access the data, and not all of them are going to go through your Python SDK.
Indeed, in a large enough company you are going to have producers and consumers using different tools and programming languages, so it's a little presumptuous to write an SDK in Python there.
Another thing to talk about, and this probably mostly applies to larger companies - have you looked at an architecture like a distributed data mesh (https://martinfowler.com/articles/data-mesh-principles.html)? This might be something to bring to the CTO more than try to push for yourself, but it can completely change the landscape of what you are doing.
> More broadly, there is the issue of the gap between what you think the role is and what the role actually is when you join. There are definitely cases where this is accidental. The best way I can think of to close the gap is to do a short-term contract, but that may be challenging under time constraints, etc.
Yeah, this definitely sucks, and it's not an enviable position to be in. I guess you have a choice: look for another job, or try to stick it out with the company that did this to you. It's possible there is a genuine existential crisis for the company and a good reason why they did the bait-and-switch. Maybe it pays to stay, especially if you have equity in the company. On the other hand, it could also be the result of questionable practices at the company. It's hard to make that call.
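On the consumer-side point above, here is a sketch of what light post-processing without an ORM can look like, assuming a hypothetical Postgres-compatible warehouse, connection string, and table:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection string and table name
engine = create_engine("postgresql://analyst:secret@warehouse:5432/analytics")

# Plain SQL does the heavy lifting in the warehouse;
# pandas handles the last-mile aggregation on the consumer side
df = pd.read_sql("select region, revenue from sales_summary", engine)
print(df.groupby("region")["revenue"].sum())
```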
- Python: Just Write SQL
I really dislike SQL but recognize its importance for many organizations. I also understand that SQL is definitely testable, particularly if managed by environments such as dbt (https://github.com/dbt-labs/dbt-core). Those who arrived here with a preference for Python will note that dbt is largely implemented in Python, adds Jinja macros and iterative forms to SQL, and adds code-testing capabilities.
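To make the "Jinja macros and iterative forms" concrete, here is a toy Python sketch using the jinja2 library directly (this is not dbt's actual engine, and the table and statuses are made up); dbt renders templates like this into plain SQL before sending them to the warehouse:

```python
from jinja2 import Template

# A toy stand-in for a dbt model: a Jinja loop that generates SQL columns
template = Template("""
select
    order_id
    {% for status in statuses %}
    , sum(case when status = '{{ status }}' then amount end) as {{ status }}_amount
    {% endfor %}
from payments
group by order_id
""")

print(template.render(statuses=["placed", "shipped", "returned"]))
```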
- Transform Your Data Like a Pro With dbt (Data Build Tool)
Data Build Tool Repository.
- What are your thoughts on dbt Cloud vs other managed dbt Core platforms?
dbt Cloud rightfully gets a lot of credit for creating dbt Core and for being the first managed dbt Core platform, but there are several entrants in the market: from those that just run dbt jobs, like Fivetran, to platforms that offer more (EL + T), like Mozart Data and Datacoves, the latter of which also has a hosted VS Code editor for dbt development and Airflow.
- How do I build a docker image based on a Dockerfile on github?
- Dbt vs. SqlMesh
Ahh, I misunderstood; yes, column-level lineage is useful. dbt prefers leveraging macros, which sort of breaks this pattern. I think the dbt way would be to better separate fields into upstream models and use table tracking: https://github.com/dbt-labs/dbt-core/discussions/4458
- DBT core v1.5 released
Here’s the PR, which includes a what/how/why: https://github.com/dbt-labs/dbt-core/issues/7158
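One of the headline additions in dbt Core v1.5 was programmatic invocation. A minimal sketch, assuming dbt-core >= 1.5 is installed and this runs from inside a dbt project (the model name is hypothetical):

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

# Invoke dbt in-process, equivalent to `dbt run --select my_model` on the CLI
dbt = dbtRunner()
res: dbtRunnerResult = dbt.invoke(["run", "--select", "my_model"])

# Inspect per-node results
for r in res.result:
    print(f"{r.node.name}: {r.status}")
```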
- DBT Install
I've attached a link to their documentation. dbt is becoming increasingly popular within the data engineering community, with over 5k stars on GitHub.
What are some alternatives?
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Ceph - Ceph is a distributed object, block, and file storage platform
metricflow - MetricFlow allows you to define, build, and maintain metrics in code.
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Weka
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
citus - Distributed PostgreSQL as an extension
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
dagster - An orchestration platform for the development, production, and observation of data assets.