tdigest
t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark (by CamDavidsonPilon)
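For context, here is a minimal sketch of how the library is typically used, based on the API shown in the CamDavidsonPilon/tdigest README (update, batch_update, percentile, cdf, and merging with +); treat the exact calls as assumptions and verify against the installed version:

```python
# Minimal sketch of percentile estimation with the PyPI `tdigest` package.
# Method names follow the CamDavidsonPilon/tdigest README; verify against
# the version you have installed.
from tdigest import TDigest

digest = TDigest()
digest.batch_update(range(1000))   # absorb a batch of values
digest.update(4242)                # or one value at a time

print(digest.percentile(50))       # approximate median
print(digest.cdf(500))             # approximate fraction of values <= 500

# Digests are mergeable, which is what makes them useful in distributed
# settings: build one digest per partition, then combine on the driver.
other = TDigest()
other.batch_update(range(1000, 2000))
combined = digest + other
print(combined.percentile(95))
```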
etl-markup-toolkit
ETL Markup Toolkit is a Spark-native tool for expressing ETL transformations as configuration (by leozqin)
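To illustrate the "transformations as configuration" idea, here is a hypothetical sketch, not etl-markup-toolkit's actual YAML schema: the step names and keys below are invented, and the step-level reporting mirrors what the mentions further down describe (columns in vs. columns out per step):

```python
# Hypothetical illustration of expressing ETL steps as configuration on
# PySpark. NOT etl-markup-toolkit's real config format; step names and
# keys are made up for this example.
from pyspark.sql import SparkSession

config = [
    {"step": "filter",        "condition": "amt > 0"},
    {"step": "rename_column", "from": "amt", "to": "amount_usd"},
    {"step": "select",        "columns": ["order_id", "amount_usd"]},
]

def apply_step(df, step):
    if step["step"] == "filter":
        return df.filter(step["condition"])
    if step["step"] == "rename_column":
        return df.withColumnRenamed(step["from"], step["to"])
    if step["step"] == "select":
        return df.select(*step["columns"])
    raise ValueError(f"unknown step: {step['step']}")

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 10.0), (2, -5.0)], ["order_id", "amt"])

for step in config:
    cols_in = df.columns
    df = apply_step(df, step)
    # Step-level reporting in the spirit of the posts quoted below:
    # log columns in vs. columns out for each configured step.
    print(step["step"], "columns:", cols_in, "->", df.columns)

df.show()
```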
| | tdigest | etl-markup-toolkit |
|---|---|---|
| Mentions | - | 7 |
| Stars | 376 | 5 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last Commit | 12 months ago | about 3 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tdigest
Posts with mentions or reviews of tdigest. We have used some of these posts to build our list of alternatives and similar projects.
We haven't tracked posts mentioning tdigest yet.
Tracking mentions began in Dec 2020.
etl-markup-toolkit
Posts with mentions or reviews of etl-markup-toolkit. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-08-22.
- How do you serialize and save "transformations" in your pipeline?
I have a side project (https://github.com/leozqin/etl-markup-toolkit, if you're interested) that takes transformations as YAML files and outputs step-level logs about each step of the transformation. I've always felt that both artifacts could be made searchable using an ELK stack or something... Do you have similar artifacts? Or perhaps there's a way to turn SQL into a structured or semi-structured form to aid in searchability?
- Alternative tools to DBT / SQL and Python for writing business logic? Trying to prevent creating a mountain of undocumented spaghetti
My current side project (https://github.com/leozqin/etl-markup-toolkit) is a low-code way to express transformations as configuration and run them on PySpark. It also supports abstraction, so you can call business logic like a function, and it has step-level reporting you can load into a metadata table. Usual disclaimers about OSS apply, although I'm happy to answer questions and take contributions.
- How to keep track of the different Transformations done in an ETL pipeline?
Not sure if it meets your exact requirements, but I maintain an open source project that enables spark transformations as configuration, and part of that capability is reporting, including logging of columns in vs columns out, row counts, etc... It's very early stage but perhaps could be useful - https://github.com/leozqin/etl-markup-toolkit
- ETL Markup Toolkit - a spark native tool for describing etl transformations as configuration
- ETL Markup Toolkit - a Spark-native tool for expressing ETL transformations as configuration
What are some alternatives?
When comparing tdigest and etl-markup-toolkit you can also consider the following projects:
t-digest - A new data structure for accurate on-line accumulation of rank-based statistics such as quantiles and trimmed means
mara-pipelines - A lightweight opinionated ETL framework, halfway between plain scripts and Apache Airflow
dpark - Python clone of Spark, a MapReduce-like framework in Python
PySpark-Boilerplate - A boilerplate for writing PySpark Jobs
distributed - A distributed task scheduler for Dask
quinn - pyspark methods to enhance developer productivity 📣 👯 🎉
sparkmagic - Jupyter magics and kernels for working with remote Spark clusters