etl-markup-toolkit vs mara-pipelines
| | etl-markup-toolkit | mara-pipelines |
|---|---|---|
| Mentions | 7 | 3 |
| Stars | 5 | 2,054 |
| Growth | - | 0.4% |
| Activity | 0.0 | 6.0 |
| Last commit | about 3 years ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
etl-markup-toolkit
-
How do you serialize and save "transformations" in your pipeline?
I have a side project (https://github.com/leozqin/etl-markup-toolkit, if you're interested) that takes transformations as YAML files and outputs step-level logs about each step of the transformation. I've always felt that both artifacts could be made searchable using an ELK stack or something... Do you have similar artifacts? Or perhaps there's a way to turn SQL into a structured or semi-structured form to aid searchability?
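As a sketch of what searchable step-level artifacts could look like: each step emits one JSON document to an NDJSON file, which Filebeat or Logstash could then ship into Elasticsearch for querying in Kibana. The log schema here is hypothetical, not etl-markup-toolkit's actual format.

```python
import json
from datetime import datetime, timezone

# Hypothetical step-log schema for illustration; the project's real format may differ.
def emit_step_log(log_path, workflow, step_name, rows_in, rows_out, columns_out):
    """Append one step-level log record as a line of JSON (NDJSON)."""
    record = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "step": step_name,
        "rows_in": rows_in,
        "rows_out": rows_out,
        "columns_out": columns_out,
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(record) + "\n")

# One JSON document per step; a log shipper can index the file into
# Elasticsearch, making each step searchable and chartable.
emit_step_log("steps.ndjson", "orders_etl", "drop_nulls", 1000, 987, ["id", "amount"])
```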
-
Alternative tools to DBT / SQL and Python for writing business logic? Trying to prevent creating a mountain of undocumented spaghetti
My current side project (https://github.com/leozqin/etl-markup-toolkit) is a low-code way to express transformations as configuration and run them on PySpark. It also supports abstraction, so you can call business logic like a function, and it has step-level reporting you can load into a metadata table. The usual disclaimers about OSS apply, although I'm happy to answer questions and take contributions.
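To make "transformations as configuration" concrete, here is a minimal sketch of the idea: a YAML document lists steps, and a small interpreter applies each step to a PySpark DataFrame. The step schema (`op`, `condition`, `from`, `to`) is invented for illustration and is not etl-markup-toolkit's actual format; it assumes pyspark and PyYAML are installed.

```python
import yaml
from pyspark.sql import SparkSession

# Invented step schema for illustration; the real project defines its own.
CONFIG = """
steps:
  - op: filter
    condition: "amount > 0"
  - op: rename
    from: amount
    to: amount_usd
"""

def apply_steps(df, steps):
    """Interpret each configured step as a DataFrame operation."""
    for step in steps:
        if step["op"] == "filter":
            df = df.filter(step["condition"])  # SQL-style predicate string
        elif step["op"] == "rename":
            df = df.withColumnRenamed(step["from"], step["to"])
    return df

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 10.0), (2, -3.0)], ["id", "amount"])
apply_steps(df, yaml.safe_load(CONFIG)["steps"]).show()
```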
-
How to keep track of the different Transformations done in an ETL pipeline?
Not sure if it meets your exact requirements, but I maintain an open source project that enables Spark transformations as configuration, and part of that capability is reporting, including logging of columns in vs. columns out, row counts, etc. It's very early stage, but perhaps it could be useful - https://github.com/leozqin/etl-markup-toolkit
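A minimal sketch of that kind of step-level reporting, assuming a hypothetical helper (this is not the project's actual API): wrap each transformation so the columns and row counts before and after are captured alongside the result.

```python
from pyspark.sql import DataFrame

def run_step_with_report(name: str, df_in: DataFrame, transform):
    """Apply one transformation and record columns in/out and row counts."""
    df_out = transform(df_in)
    report = {
        "step": name,
        "columns_in": df_in.columns,
        "columns_out": df_out.columns,
        "rows_in": df_in.count(),    # note: count() triggers a Spark job
        "rows_out": df_out.count(),
    }
    return df_out, report

# e.g. df2, report = run_step_with_report(
#     "drop_refunds", df, lambda d: d.filter("amount > 0"))
# Reports can then be appended to a metadata table for auditing.
```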
- ETL Markup Toolkit - a Spark-native tool for expressing ETL transformations as configuration
mara-pipelines
-
How to keep track of the different Transformations done in an ETL pipeline?
The closest I've found is Mara, but it's not what I'm after.
-
Using PostgreSQL as a Data Warehouse
The tooling behind the approach has been built as a set of Python packages named Mara. It is available on GitHub:
https://github.com/mara/mara-pipelines
Additional packages can be found in the Mara org:
https://github.com/mara
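To give a sense of the API, here is a minimal pipeline sketch adapted from the mara-pipelines README; module paths and setup details may differ between versions, and actually running it requires Mara's backing database configuration.

```python
from mara_pipelines.commands.bash import RunBash
from mara_pipelines.pipelines import Pipeline, Task

# A pipeline is a plain Python object; nodes are attached with .add()
pipeline = Pipeline(
    id='demo',
    description='A small pipeline that demonstrates the basics')

pipeline.add(Task(
    id='ping_localhost',
    description='Pings localhost',
    commands=[RunBash('ping -c 3 localhost')]))

# The README runs pipelines via mara_pipelines.ui.cli.run_pipeline(pipeline),
# which also records run logs in the configured Mara database.
```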
-
Build your own “data lake” for reporting purposes
Minio and NiFi require machines by themselves. You're better off with pure Python, and if one wants something lightweight and visually pleasing, Mara [0] or Dagster with Dagit [1] will do the job.
[0] https://github.com/mara/mara-pipelines
[1] https://docs.dagster.io/tutorial/execute
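For comparison, a minimal Dagster job using the op/job API (Dagit, or `dagster dev` in newer versions, serves the UI for it); the op bodies here are stand-ins, not a real pipeline:

```python
from dagster import job, op

@op
def extract():
    # Stand-in for reading from a source system
    return [1, 2, 3]

@op
def load(numbers):
    # Stand-in for writing to the reporting store
    print(f"loaded {sum(numbers)} values")

@job
def reporting_job():
    load(extract())

if __name__ == "__main__":
    # Execute in-process for local testing; Dagit provides the visual UI
    reporting_job.execute_in_process()
```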
What are some alternatives?
PySpark-Boilerplate - A boilerplate for writing PySpark Jobs
abcd-hcp-pipeline - BIDS application for processing functional MRI data, robust to scanner, acquisition, and age variability.
quinn - pyspark methods to enhance developer productivity 📣 👯 🎉
kuwala - Kuwala is the no-code data platform for BI analysts and engineers, enabling you to build powerful analytics workflows. We set out to bring state-of-the-art data engineering tools you love, such as Airbyte, dbt, or Great Expectations, together in one intuitive interface built with React Flow. In addition, we provide third-party data for data science models and products, with a focus on geospatial data. Currently, the following data connectors are available worldwide: a) high-resolution demographics data, b) points of interest from OpenStreetMap, c) Google Popular Times.
sparkmagic - Jupyter magics and kernels for working with remote Spark clusters
pybaseball - Pull current and historical baseball statistics using Python (Statcast, Baseball Reference, FanGraphs)
tdigest - t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
dremio-oss - Dremio - the missing link in modern data
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
citus - Distributed PostgreSQL as an extension
sgr - sgr (command line client for Splitgraph) and the splitgraph Python library