quinn
etl-markup-toolkit
| | quinn | etl-markup-toolkit |
|---|---|---|
| Mentions | 9 | 7 |
| Stars | 573 | 5 |
| Growth | - | - |
| Activity | 9.2 | 0.0 |
| Latest commit | 5 days ago | about 3 years ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quinn
-
Brainstorming functions to make PySpark easier
We're brainstorming functions to make PySpark easier, see this issue: https://github.com/MrPowers/quinn/issues/83
-
PySpark OSS Contribution Opportunity
Adding some documentation to the README should be quite straightforward. Here's a function that needs to be documented: https://github.com/MrPowers/quinn/issues/52
-
Invitation to collaborate on open source PySpark projects
quinn is a library with PySpark helper functions. I need to work through all the open issues / PRs and bump all versions. I should do another release. This library gets around 600,000 monthly downloads.
-
PySpark now provides a native Pandas API
Pandas syntax is far inferior to regular PySpark in my opinion. Goes to show how much data analysts value a syntax that they're already familiar with. Pandas syntax makes it harder to reason about queries, abstract DataFrame transformations, etc. I've authored some popular PySpark libraries like quinn and chispa and am not excited to add Pandas syntax support, haha.
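To make the distinction concrete, here's a minimal sketch (PySpark 3.2+, with made-up column names) contrasting a composable PySpark transformation with the pandas-on-Spark style of in-place assignment:

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 3), ("b", -1)], ["id", "amount"])

def with_status(df: DataFrame) -> DataFrame:
    # A named, reusable transformation: easy to unit test and compose.
    return df.withColumn(
        "status", F.when(F.col("amount") > 0, "paid").otherwise("open")
    )

# PySpark style: transformations chain as plain functions.
df.transform(with_status).show()

# pandas-on-Spark style: the same logic leans on in-place column
# assignment, which is harder to factor out into reusable functions.
pdf = df.pandas_api()
pdf["is_paid"] = pdf["amount"] > 0
```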
-
Register Native Functions in PySpark
Here's how I added a create_df method to the SparkSession class: https://github.com/MrPowers/quinn/blob/main/quinn/extensions/spark_session_ext.py
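The linked file monkey-patches the method onto the class. A minimal sketch of that pattern, simplified from the real implementation (details may differ):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

def create_df(self, rows_data, col_specs):
    # Build a schema from (name, type, nullable) tuples and delegate to
    # the built-in createDataFrame.
    struct_fields = [StructField(*spec) for spec in col_specs]
    return self.createDataFrame(data=rows_data, schema=StructType(struct_fields))

# Attach the helper directly to SparkSession so any session can call it.
SparkSession.create_df = create_df

spark = SparkSession.builder.getOrCreate()
df = spark.create_df(
    [("jose", 1), ("li", 2)],
    [("name", StringType(), True), ("age", IntegerType(), True)],
)
```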
-
Is Spark - The Definitive Guide outdated?
They spent a lot of effort improving the Catalyst engine under the hood too, making it easier to extend and improve in the future and easier to add your own native code to Spark itself. Shameless plug of a blog post I wrote on this subject, which basically reiterates what Matthew Powers, author of spark-daria and quinn, wrote here.
-
Ask HN: What are some tools / libraries you built yourself?
I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
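For a sense of the testing workflow chispa enables, here's a minimal sketch of a DataFrame equality test (the transformation under test is made up for illustration):

```python
from chispa import assert_df_equality
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def test_uppercase_names():
    source = spark.createDataFrame([("jose",), ("li",)], ["name"])
    actual = source.withColumn("name", F.upper(F.col("name")))
    expected = spark.createDataFrame([("JOSE",), ("LI",)], ["name"])
    # On a mismatch, chispa raises an error with a readable row-by-row diff.
    assert_df_equality(actual, expected)
```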
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
etl-markup-toolkit
-
How do you serialize and save "transformations" in your pipeline?
I have a side project (https://github.com/leozqin/etl-markup-toolkit, if you're interested) that takes transformations as YAML files and outputs step-level logs about each step of the transformation. I've always felt that both artifacts could be made searchable using an ELK stack or something... Do you have similar artifacts? Or perhaps there's a way to turn SQL into a structured or semi-structured form to aid in searchability?
-
Alternative tools to DBT / SQL and Python for writing business logic? Trying to prevent creating a mountain of undocumented spaghetti
My current side project (https://github.com/leozqin/etl-markup-toolkit) is a low-code way to express transformations as configuration and run them on PySpark. It also supports abstraction, so you can call business logic like a function, and has step-level reporting you can load into a metadata table. The usual disclaimers about OSS apply, although I'm happy to answer questions and take contributions.
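To make "transformations as configuration" concrete, here's a rough sketch of the general pattern; the YAML schema below is hypothetical, not etl-markup-toolkit's actual format:

```python
import yaml
from pyspark.sql import DataFrame

# Hypothetical config format, for illustration only.
CONFIG = """
steps:
  - op: select
    columns: [id, amount]
  - op: filter
    predicate: "amount > 0"
  - op: rename
    mapping: {amount: amount_usd}
"""

def apply_steps(df: DataFrame, config: str) -> DataFrame:
    # Interpret each configured step as a DataFrame operation.
    for step in yaml.safe_load(config)["steps"]:
        if step["op"] == "select":
            df = df.select(*step["columns"])
        elif step["op"] == "filter":
            df = df.filter(step["predicate"])
        elif step["op"] == "rename":
            for old, new in step["mapping"].items():
                df = df.withColumnRenamed(old, new)
    return df
```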
-
How to keep track of the different Transformations done in an ETL pipeline?
Not sure if it meets your exact requirements, but I maintain an open source project that enables Spark transformations as configuration, and part of that capability is reporting, including logging of columns in vs. columns out, row counts, etc. It's very early stage but perhaps could be useful - https://github.com/leozqin/etl-markup-toolkit
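As an illustration of what step-level reporting can capture (this is not the project's actual API), a wrapper like the one below logs columns in vs. columns out and row counts around each step:

```python
from pyspark.sql import DataFrame

def run_step(df: DataFrame, step_fn, step_name: str) -> DataFrame:
    # Record simple lineage metadata around one transformation step.
    cols_in, rows_in = df.columns, df.count()
    result = step_fn(df)
    print({
        "step": step_name,
        "columns_in": cols_in,
        "columns_out": result.columns,
        "rows_in": rows_in,
        "rows_out": result.count(),
    })
    return result
```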
- ETL Markup Toolkit - a Spark-native tool for expressing ETL transformations as configuration
What are some alternatives?
chispa - PySpark test helper methods with beautiful error messages
mara-pipelines - A lightweight opinionated ETL framework, halfway between plain scripts and Apache Airflow
spark-daria - Essential Spark extensions and helper methods ✨😲
PySpark-Boilerplate - A boilerplate for writing PySpark Jobs
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
sparkmagic - Jupyter magics and kernels for working with remote Spark clusters
null - Nullable Go types that can be marshalled/unmarshalled to/from JSON.
tdigest - t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
lowdefy - The config web stack for business apps - build internal tools, client portals, web apps, admin panels, dashboards, web sites, and CRUD apps with YAML or JSON.
flintrock - A command-line tool for launching Apache Spark clusters.
dagster - An orchestration platform for the development, production, and observation of data assets.