data-engineer-challenge vs livyc

| | data-engineer-challenge | livyc |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 25 | 3 |
| Growth | - | - |
| Activity | 1.8 | 0.0 |
| Last Commit | almost 2 years ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
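The activity number is described only qualitatively above (recent commits carry more weight than older ones). As a minimal sketch of one way such a recency-weighted score could be computed, the snippet below uses an exponential half-life decay; the site's actual formula is not published, and the function name `activity_score` and the `half_life_days` parameter are illustrative assumptions:

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted commit score (illustrative only).

    Each commit contributes 0.5 ** (age_in_days / half_life_days),
    so a commit made today counts ~1.0 and one made a half-life
    ago counts 0.5. This is an assumed formula, not the site's.
    """
    return sum(
        0.5 ** ((now - d).total_seconds() / 86400 / half_life_days)
        for d in commit_dates
    )

now = datetime(2024, 1, 1)
recent = [now - timedelta(days=i) for i in (1, 2, 3)]
stale = [now - timedelta(days=i) for i in (300, 310, 320)]
```

With these inputs, the three recent commits yield a much higher score than the three year-old ones, matching the behavior the description implies.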
data-engineer-challenge
Data Engineering Projects for Beginners
Design, Development and Deployment of a simple Data Pipeline
livyc
What are some alternatives?
docker-livy - Dockerizing and Consuming an Apache Livy environment
Traffic-Data-Analysis-with-Apache-Spark-Based-on-Mobile-Robot-Data - Mobile robot data were analyzed with Apache Spark to produce five statistical results: travel time, waiting time, average speed, occupancy, and density.
distance-metrics - Distance metrics are a key component of many supervised and unsupervised machine learning algorithms; they let us calculate and measure similarity between numerical values expressed as data points
Apache-Spark-Guide - Apache Spark Guide
Dropout-Students-Prediction - The goal of this project is to identify students at risk of dropping out of school
pyDag - Scheduling Big Data Workloads and Data Pipelines in the Cloud with pyDag
text-analysis-speeches-amlo - Text analysis of the speeches, conferences and interviews of the current president of Mexico
yaetos - Write data & AI pipelines in (SQL, Spark, Pandas) and deploy to the cloud, simplified
data-engineering-challenge-th - Dockerizing a Python script for web scraping and consuming the scraped data using FastAPI (www.metroscubicos.com)
pyspark-on-aws-emr - The goal of this project is to offer an AWS EMR template using Spot Fleet and On-Demand Instances that you can use quickly. Just focus on writing pyspark code.
covid-19-data-engineering-pipeline - A Covid-19 data pipeline on AWS featuring PySpark/Glue, Docker, Great Expectations, Airflow, and Redshift, templated in CloudFormation and CDK, deployable via Github Actions.