etl-markup-toolkit vs pyspark-example-project

| | etl-markup-toolkit | pyspark-example-project |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 5 | 1,370 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | about 3 years ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
etl-markup-toolkit
How do you serialize and save "transformations" in your pipeline?
I have a side project (https://github.com/leozqin/etl-markup-toolkit, if you're interested) that takes transformations as YAML files and outputs step-level logs for each step of the transformation. I've always felt that both artifacts could be made searchable using an ELK stack or something similar... Do you have similar artifacts? Or perhaps there's a way to turn SQL into a structured or semi-structured form to aid searchability?
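For readers unfamiliar with the pattern, here is a minimal sketch of what "transformations as YAML plus step-level logs" can look like in plain PySpark. The config keys (`steps`, `op`, `condition`, ...) and log shape are illustrative assumptions, not etl-markup-toolkit's actual schema or API.

```python
# Minimal sketch (assumed config schema, not etl-markup-toolkit's): apply a
# YAML-described transformation with plain PySpark and emit one structured
# JSON log record per step, suitable for indexing in an ELK stack.
import json

import yaml  # requires PyYAML
from pyspark.sql import SparkSession

CONFIG = """
steps:
  - op: filter
    condition: "amount > 0"
  - op: rename
    from: cust_id
    to: customer_id
"""

def apply_steps(df, steps):
    """Apply each configured step, collecting a log record with row counts."""
    logs = []
    for i, step in enumerate(steps):
        rows_in = df.count()
        if step["op"] == "filter":
            df = df.filter(step["condition"])  # SQL expression string
        elif step["op"] == "rename":
            df = df.withColumnRenamed(step["from"], step["to"])
        logs.append({"step": i, "op": step["op"], "rows_in": rows_in, "rows_out": df.count()})
    return df, logs

if __name__ == "__main__":
    spark = SparkSession.builder.appName("config-driven-etl").getOrCreate()
    df = spark.createDataFrame([(1, "a", 10.0), (2, "b", -5.0)], ["cust_id", "name", "amount"])
    out, logs = apply_steps(df, yaml.safe_load(CONFIG)["steps"])
    print(json.dumps(logs))  # these records could be shipped to Elasticsearch
```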
Alternative tools to DBT / SQL and Python for writing business logic? Trying to prevent creating a mountain of undocumented spaghetti
My current side project (https://github.com/leozqin/etl-markup-toolkit) is a low-code way to express transformations as configuration and run them on PySpark. It also supports abstraction, so you can call business logic like a function, and has step-level reporting that you can load into a metadata table. Usual disclaimers about OSS apply, although I'm happy to answer questions and take contributions.
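The "call business logic like a function" idea can be approximated with a registry of named transformations that a config file references by name. This is only a sketch of the general abstraction; the decorator, registry, and step names below are hypothetical, not the toolkit's implementation.

```python
# Hypothetical sketch: reusable business logic registered under names that a
# configuration file can reference, in the spirit of calling logic like a function.
from pyspark.sql import DataFrame, functions as F

REGISTRY = {}

def transformation(name):
    """Register a DataFrame -> DataFrame function under a config-addressable name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@transformation("deduplicate_customers")
def deduplicate_customers(df: DataFrame) -> DataFrame:
    return df.dropDuplicates(["customer_id"])

@transformation("add_load_date")
def add_load_date(df: DataFrame) -> DataFrame:
    return df.withColumn("load_date", F.current_date())

def run(df: DataFrame, step_names) -> DataFrame:
    # step_names would normally come from the YAML/config file
    for name in step_names:
        df = REGISTRY[name](df)
    return df
```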
How to keep track of the different Transformations done in an ETL pipeline?
Not sure if it meets your exact requirements, but I maintain an open-source project that enables Spark transformations as configuration, and part of that capability is reporting, including logging of columns in vs. columns out, row counts, and so on. It's very early stage, but perhaps it could be useful - https://github.com/leozqin/etl-markup-toolkit
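As an illustration of step-level reporting of columns in vs. columns out and row counts (again an assumed report shape, not the project's actual schema), a small wrapper might look like this:

```python
# Assumed report shape, not etl-markup-toolkit's: wrap a transformation and record
# columns in/out and row counts so the report can be loaded into a metadata table.
from pyspark.sql import DataFrame, SparkSession

def with_report(name, fn, df: DataFrame, report: list) -> DataFrame:
    """Run fn(df) and append (step, columns_in, columns_out, rows_in, rows_out)."""
    cols_in, rows_in = df.columns, df.count()
    out = fn(df)
    report.append((name, ",".join(cols_in), ",".join(out.columns), rows_in, out.count()))
    return out

if __name__ == "__main__":
    spark = SparkSession.builder.appName("step-report").getOrCreate()
    df = spark.range(5).withColumnRenamed("id", "customer_id")
    report = []
    df = with_report("identity_step", lambda d: d, df, report)
    schema = "step string, columns_in string, columns_out string, rows_in long, rows_out long"
    report_df = spark.createDataFrame(report, schema)
    report_df.show()  # in practice, append this to a metadata table instead
```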
- ETL Markup Toolkit - a Spark-native tool for expressing ETL transformations as configuration
pyspark-example-project
Learning Pyspark for a new role
https://github.com/AlexIoannides/pyspark-example-project - you can use this as an example of how to organize your project. I have referred to it in the past.
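To give a sense of the structure such an example project encourages (file names and paths below are illustrative, not the repository's exact layout), an ETL job is typically split into small extract / transform / load functions with a single entry point:

```python
# jobs/etl_job.py - illustrative layout only; see the linked repository for the
# author's actual structure, packaging, and configuration handling.
from pyspark.sql import DataFrame, SparkSession, functions as F

def extract(spark: SparkSession, path: str) -> DataFrame:
    return spark.read.parquet(path)

def transform(df: DataFrame) -> DataFrame:
    # keep business logic in small, unit-testable functions
    return df.withColumn("processed_at", F.current_timestamp())

def load(df: DataFrame, path: str) -> None:
    df.write.mode("overwrite").parquet(path)

def main() -> None:
    spark = SparkSession.builder.appName("example_etl_job").getOrCreate()
    load(transform(extract(spark, "data/input")), "data/output")
    spark.stop()

if __name__ == "__main__":
    main()
```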
What are some alternatives?
mara-pipelines - A lightweight opinionated ETL framework, halfway between plain scripts and Apache Airflow
soda-spark - Soda Spark is a PySpark library that helps you test the data in your Spark DataFrames
PySpark-Boilerplate - A boilerplate for writing PySpark Jobs
Apache-Spark-Guide - Apache Spark Guide
quinn - PySpark methods to enhance developer productivity 📣 👯 🎉
patterns-devkit - Data pipelines from re-usable components
sparkmagic - Jupyter magics and kernels for working with remote Spark clusters
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage and metadata. Runs and scales everywhere Python does.
tdigest - t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
TypedPyspark - Type-annotate your spark dataframes and validate them
dados-censup - Automation of the ingestion of data published by INEP for the Brazilian higher education census.