ngods-stocks vs data-engineering-zoomcamp

| | ngods-stocks | data-engineering-zoomcamp |
|---|---|---|
| Mentions | 3 | 119 |
| Stars | 373 | 22,811 |
| Growth | - | 3.4% |
| Activity | 0.0 | 9.4 |
| Latest commit | over 1 year ago | 28 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ngods-stocks
-
I'm way over my head
I've worked for 3-4 years in positions where I helped structure ETLs, DWs, and the like. However, I'm now on the cusp of being hired to help build out the area at a big investment fund here, helping the research team focus more easily on their models. My previous experience led me to pick up dbt and SQL, and most of it came from a Microsoft stack with SSIS, Analysis Services, and so on. I'm feeling way over my head about starting to build this, and the multitude of possible stacks makes me afraid that I might overengineer it, and I will initially be alone in the area. What do I do? Fake it till I make it? I never lied on my resume, so it's not like they expect a senior with plenty of experience, but still... I read this: https://github.com/zsvoboda/ngods-stocks and it seems like a good starter, albeit overly complex for our use case. I could use suggestions, people to talk to, etc. Please help.
-
Apache Iceberg-based open-source analytics stack demo
Hi, I've created an open-source demo of a Docker-based local analytics stack that includes Apache Iceberg, Trino, Spark, Dagster (orchestration), Cube.dev (analytics model), Metabase (reports and dashboards), and Jupyter (data science notebooks). I think this is a pretty good starting point for Iceberg projects. Feel free to check it out on GitHub.
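A stack like the one described above is usually wired together with a compose file. As a rough sketch only (service names, images, and ports here are assumptions, not the repository's actual file):

```yaml
# Hypothetical docker-compose sketch of a local analytics stack.
version: "3.9"
services:
  trino:                        # SQL query engine over Iceberg / Postgres
    image: trinodb/trino
    ports:
      - "8080:8080"
  metabase:                     # reports and dashboards
    image: metabase/metabase
    ports:
      - "3000:3000"
  notebook:                     # data science notebooks
    image: jupyter/base-notebook
    ports:
      - "8888:8888"
```

The real repository adds Spark, Dagster, and Cube.dev services and shared volumes for the Iceberg warehouse; the sketch only shows the general shape.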
-
Iceberg + Spark + Trino + Dagster: modern, open-source data stack installation
I'm guessing that you use Spark JDBC DataFrames. Trino is, in my opinion, easier to use. You get SQL access to all pgsql tables with one simple config file; there is no need to write a piece of code for each table. The config just maps the pgsql schema to a Trino schema. Then you configure Iceberg with another config file, and you can run cross-catalog SQL queries like CREATE TABLE pgsql.xyz AS SELECT * FROM iceberg.abc. Or you can use dbt, which is based on SQL.
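As a sketch of the "simple config file" mentioned above, a Trino catalog for PostgreSQL is a small properties file placed in Trino's etc/catalog directory (host, database, and credentials below are placeholders):

```properties
# etc/catalog/pgsql.properties -- placeholder connection details
connector.name=postgresql
connection-url=jdbc:postgresql://postgres:5432/mydb
connection-user=trino
connection-password=secret
```

With an analogous iceberg.properties catalog configured, a cross-catalog copy is a single statement (schema and table names here are hypothetical):

```sql
CREATE TABLE pgsql.public.xyz AS
SELECT * FROM iceberg.analytics.abc;
```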
data-engineering-zoomcamp
-
Data Engineering Zoomcamp Week 6 - using redpanda 1
References: Data engineering zoomcamp week 6 course and homework notes: https://github.com/DataTalksClub/data-engineering-zoomcamp/tree/main/cohorts/2024/06-streaming
-
Final project part 5
dbt is the main part of my data engineering project for Data Talks Club's data engineering zoomcamp. After a few frustrating errors on my part, I finally figured out how to make models, where to put the staging models and where to put the core models, how to compile a seed file, and how to join it to the main file in order to produce data for visualization. I also used the git interface to continually upgrade my repository. This was extremely convenient and helpful.
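As a sketch of the kind of core model described above, a dbt model that joins a staging model to a seed to produce data for visualization might look like this (all model, seed, and column names are made up for illustration):

```sql
-- models/core/fact_trips.sql (hypothetical names)
-- Enrich staging rows with a lookup column from a seed file.
select
    t.*,
    z.zone_name
from {{ ref('stg_trips') }} as t
join {{ ref('zones_seed') }} as z
    on t.zone_id = z.zone_id
```

In dbt, seeds are loaded with `dbt seed` and then referenced from models with `ref()` just like any other model.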
-
Building a project in DBT
For Week 4 of DataTalksClub's data engineering zoomcamp, we had to install dbt and create a project. This was a formidable task. dbt is a data transformation tool that enables data analysts and engineers to transform data in a cloud analytics warehouse, BigQuery in our case. It took me a very long time to do this, and in this case I needed the homework extension.
-
Testing and documenting DBT models
In this video we learned how to test and document dbt models. We also learned about the codegen library. This is part of Week 4 of the data engineering zoomcamp by DataTalksClub.
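For reference, dbt tests and documentation of the kind covered in that week are typically declared in a schema.yml file next to the models; a minimal sketch (model and column names are hypothetical):

```yaml
# models/staging/schema.yml (hypothetical names)
version: 2
models:
  - name: stg_trips
    description: "Cleaned trip records from the raw source"
    columns:
      - name: trip_id
        description: "Primary key of the staging model"
        tests:
          - unique
          - not_null
```

Running `dbt test` then checks the declared constraints, and the descriptions feed the generated documentation site.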
-
Extracting data with dlt
If you want to run these commands yourself, either in a Jupyter notebook or in Google Colab, you can get the file from HERE. You can get an overview of the workshop HERE. When I ran it in a Jupyter notebook, I had to delete the first line (%%capture) and put quotes around dlt[duckdb] in the second line.
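The quoting fix mentioned above exists because some shells (zsh in particular) treat square brackets as glob patterns, so the extras specifier needs quotes:

```shell
# Unquoted, zsh may try to glob-expand dlt[duckdb]; quoting passes it
# through to pip literally as a package name with an extra.
pip install "dlt[duckdb]"
```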
-
Data engineering at home?
Take a look at the DE zoomcamp.
-
Rockstar Data Engineers making big bucks: what are you doing exactly?
If you need guidance you can attend the data engineering zoomcamp, it's free and quite solid.
-
Self study material
Welcome. Start with the Data Engineering Zoomcamp, try to build a project, and see if you like it; then continue on to deeper resources.
-
What is the best way to learn Python if I want to become a data engineer
You can take a look at this: https://github.com/DataTalksClub/data-engineering-zoomcamp
-
Course Recommendations for a New Grad
I think you can start with this free and pretty practical course on data engineering from DataTalksClub: https://github.com/DataTalksClub/data-engineering-zoomcamp
What are some alternatives?
practical-data-engineering - Practical Data Engineering: A Hands-On Real-Estate Project Guide
mlops-zoomcamp - Free MLOps course from DataTalks.Club
amazon-emr-with-delta-lake - Amazon EMR Notebook to show how to read from and write to Delta tables with Amazon EMR
Cookbook - The Data Engineering Cookbook
synapse-azure-data-explorer-101 - Getting started with Azure Synapse and Azure Data Explorer
AdventureWorks - Projects using the AdventureWorks database
dbt-metabase - dbt + Metabase integration
versatile-data-kit - One framework to develop, deploy and operate data workflows with Python and SQL.
udacity_bike_share_datalake_project - Azure Data Lake
Reddit-API-Pipeline
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
udacity-capstone