pyDag vs livyc
| | pyDag | livyc |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 24 | 3 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pyDag
- Data Engineering Projects for Beginners
- Scheduling Big Data Workloads and Data Pipelines in the Cloud with pyDag
- How to build a DAG-based Task Scheduling tool for Multiprocessor systems using Python
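As a rough illustration of the DAG-based task-scheduling idea covered in the articles above, here is a minimal sketch of a scheduler that runs independent tasks concurrently in topological waves. The `Dag` class and its methods are hypothetical and for illustration only, not pyDag's actual API:

```python
from collections import defaultdict, deque
from concurrent.futures import ThreadPoolExecutor

class Dag:
    """Minimal DAG task scheduler (illustrative, not pyDag's API).

    Tasks with no unmet dependencies run concurrently in "waves",
    computed with Kahn's topological-sort algorithm.
    """

    def __init__(self):
        self.tasks = {}                 # task name -> callable
        self.deps = defaultdict(set)    # task name -> prerequisite names

    def add_task(self, name, fn, depends_on=()):
        self.tasks[name] = fn
        self.deps[name] |= set(depends_on)

    def run(self, workers=4):
        indegree = {t: len(self.deps[t]) for t in self.tasks}
        children = defaultdict(list)
        for t, reqs in self.deps.items():
            for r in reqs:
                children[r].append(t)

        order = []
        ready = deque(t for t, d in indegree.items() if d == 0)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while ready:
                wave = list(ready)
                ready.clear()
                # Run every task in the current wave concurrently.
                list(pool.map(lambda t: self.tasks[t](), wave))
                for t in wave:
                    order.append(t)
                    for c in children[t]:
                        indegree[c] -= 1
                        if indegree[c] == 0:
                            ready.append(c)
        if len(order) != len(self.tasks):
            raise ValueError("cycle detected in DAG")
        return order
```

A chain like extract → transform → load would yield three single-task waves; tasks that share no dependency edge land in the same wave and run in parallel.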
livyc
What are some alternatives?
distance-metrics - Distance metrics are one of the most important components of many machine learning algorithms, both supervised and unsupervised; they help us calculate and measure similarity between numerical values expressed as data points
Traffic-Data-Analysis-with-Apache-Spark-Based-on-Mobile-Robot-Data - Mobile robot data were analyzed with Apache Spark to extract five different statistical results: travel time, waiting time, average speed, occupancy, and density.
docker-livy - Dockerizing and Consuming an Apache Livy environment
Apache-Spark-Guide - Apache Spark Guide
pubsub2inbox - Pubsub2Inbox is a versatile, multi-purpose tool to handle Pub/Sub messages and turn them into email, API calls, GCS objects, files or almost anything.
pyspark-on-aws-emr - The goal of this project is to offer an AWS EMR template using Spot Fleet and On-Demand Instances that you can use quickly. Just focus on writing pyspark code.
p_tqdm - Parallel processing with progress bars
yaetos - Write data & AI pipelines in (SQL, Spark, Pandas) and deploy to the cloud, simplified
breaking_cycles_in_noisy_hierarchies - breaking cycles in noisy hierarchies
data-engineer-challenge - Challenge Data Engineer
wbz - A parallel Python implementation of the bzip2 data compressor. The compression pipeline uses algorithms such as the Burrows–Wheeler transform (BWT) and move-to-front (MTF) coding to improve Huffman compression. For now, the tool focuses on compressing .csv files and other tabular-format files.
covid-19-data-engineering-pipeline - A Covid-19 data pipeline on AWS featuring PySpark/Glue, Docker, Great Expectations, Airflow, and Redshift, templated in CloudFormation and CDK, deployable via Github Actions.
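The wbz entry above chains the Burrows–Wheeler transform and move-to-front coding ahead of Huffman compression. As a sketch of how those first two stages work, here are naive, illustrative implementations (not wbz's actual code; real implementations use suffix arrays rather than building the full rotation table):

```python
def bwt(s):
    """Burrows-Wheeler transform: sort all rotations, keep the last column."""
    s = s + "\x03"  # end-of-text sentinel so the transform is invertible
    table = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in table)

def inverse_bwt(r):
    """Invert the BWT by repeatedly prepending and re-sorting columns."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    row = next(row for row in table if row.endswith("\x03"))
    return row[:-1]  # drop the sentinel

def mtf_encode(s):
    """Move-to-front: emit each symbol's index, then move it to the front.

    Runs of identical symbols (which BWT tends to produce) become runs
    of zeros, which compress well under Huffman coding.
    """
    symbols = list(map(chr, range(256)))
    out = []
    for ch in s:
        i = symbols.index(ch)
        out.append(i)
        symbols.insert(0, symbols.pop(i))
    return out
```

For example, `mtf_encode("aaabbb")` collapses each repeated character after the first into a 0, which is exactly the kind of skew a Huffman coder exploits.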