Apache-Spark-Guide vs pyspark-example-project
| | Apache-Spark-Guide | pyspark-example-project |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 28 | 1,370 |
| Growth | - | - |
| Activity | 1.8 | 0.0 |
| Last commit | over 2 years ago | over 1 year ago |
| Language | Python | Python |
| License | - | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
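The site does not publish its exact activity formula, but "recent commits have higher weight than older ones" describes a recency-weighted score. A toy sketch of that idea (the exponential-decay weighting and the 30-day half-life are illustrative assumptions, not the site's actual method):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy recency-weighted activity score.

    Each commit contributes a weight that halves every `half_life_days`
    days, so recent commits count for more than older ones.
    (The half-life value is an assumed parameter for illustration.)
    """
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Three commits this week outweigh three commits from over a year ago:
recent = activity_score([1, 2, 3])
stale = activity_score([400, 500, 600])
```

Under this scheme a project with only old commits scores near zero, which is consistent with the 0.0 shown for a repository last committed to over a year ago.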
Apache-Spark-Guide

Learning PySpark for a new role.

pyspark-example-project

https://github.com/AlexIoannides/pyspark-example-project You can use this repo as an example of how to organize your project; I have referred to it in the past.
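pyspark-example-project is best known for structuring an ETL job as separate extract, transform, and load steps so the transform logic can be unit-tested in isolation. A minimal sketch of that pattern, using plain Python lists in place of Spark DataFrames so it runs without a cluster (function names here are illustrative, not the repo's API):

```python
def extract(rows):
    # In a real job this would be spark.read...; here the raw rows
    # stand in for the input DataFrame.
    return rows

def transform(rows):
    # Pure function holding all business logic, so it can be tested
    # without a SparkSession: normalize whitespace and capitalization.
    return [{**r, "name": r["name"].strip().title()} for r in rows]

def load(rows, sink):
    # In a real job this would be df.write...; here we append to a list.
    sink.extend(rows)

def run_job(raw, sink):
    load(transform(extract(raw)), sink)

# The transform step can be exercised on in-memory data:
out = []
run_job([{"name": "  alice "}], out)
# out == [{"name": "Alice"}]
```

Keeping `transform` free of I/O is the design choice that makes the job testable; the same shape carries over directly when `rows` is a DataFrame and the steps use Spark APIs.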
What are some alternatives?
anovos - Anovos - an open-source library for scalable feature engineering using Apache Spark
soda-spark - Soda Spark is a PySpark library that helps you with testing your data in Spark Dataframes
Traffic-Data-Analysis-with-Apache-Spark-Based-on-Mobile-Robot-Data - Mobile robot data were analyzed with Apache Spark to produce five statistical results: travel time, waiting time, average speed, occupancy, and density.
patterns-devkit - Data pipelines from re-usable components
livyc - Apache Spark as a Service with Apache Livy Client
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage and metadata. Runs and scales everywhere Python does.
Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai