mack
-
Implementing and using SCD Type 2
There's also this library from Databricks, though I have never used it: https://github.com/MrPowers/mack
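For context, here's roughly what an SCD Type 2 update looks like with plain Delta Lake merges, which is the kind of pattern mack wraps in a single helper. This is a minimal sketch: the table path, column names (`pkey`, `attr`, `is_current`, `effective_date`, `end_date`), and the sample data are all illustrative, and a real SCD2 upsert also needs to skip unchanged rows.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Assumed: an existing Delta table at this path with the columns below.
target = DeltaTable.forPath(spark, "/tmp/customers")

updates = spark.createDataFrame(
    [(1, "new_address", "2023-01-01")],
    ["pkey", "attr", "effective_date"],
)

# Step 1: close out the current rows that are being superseded.
(target.alias("t")
    .merge(updates.alias("u"), "t.pkey = u.pkey AND t.is_current = true")
    .whenMatchedUpdate(set={
        "is_current": F.lit(False),
        "end_date": F.col("u.effective_date"),
    })
    .execute())

# Step 2: append the incoming rows as the new current versions.
(updates
    .withColumn("is_current", F.lit(True))
    .withColumn("end_date", F.lit(None).cast("string"))
    .write.format("delta").mode("append").save("/tmp/customers"))
```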
-
Spark/Databricks seems amazing?
I was a Databricks user for 5 years and spent almost all my time inside the IntelliJ IDE developing code. I wrote almost all my code in a text editor, unit tested everything (I actually authored the popular Scala Spark / PySpark testing libraries: https://github.com/MrPowers/), and had everything wired up with CI/CD. Lots of OSS PySpark / Scala Spark work too. I only used Databricks notebooks for data exploration and for lightweight notebooks that would invoke functions (defined in Python wheel / JAR files). I am on the Delta Lake team at Databricks now and still do all my work in text editors (see this project: https://github.com/MrPowers/mack), and I create lots of examples in Jupyter notebooks. So I definitely think it's possible to limit notebook exposure.
-
PySpark OSS Contribution Opportunity
Great, would love your help. You can also check out the mack project if you'd like to work on a Delta Lake + PySpark project: https://github.com/MrPowers/mack/issues
-
Spark open source community is awesome
A couple of devs just added a `find_composite_key_candidates` function so users can easily identify columns that could be used as a unique identifier in their Delta table.
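A hypothetical usage sketch follows; the exact signature and return type are assumptions based on the description above, so check the mack README for the real API.

```python
import mack
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed: an existing Delta table to inspect.
delta_table = DeltaTable.forPath(spark, "/tmp/customers")

# Assumed signature: takes a DeltaTable, returns candidate key columns.
candidates = mack.find_composite_key_candidates(delta_table)
print(candidates)  # e.g. ["customer_id", "signup_date"]
```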
-
How to append data to Delta tables without adding any duplicates
Fair points. Here's the code repo: https://github.com/MrPowers/mack
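For reference, the core idea with plain Delta Lake is a merge that only inserts rows whose key isn't already present; mack packages this up as a helper. A minimal sketch, with an illustrative table path, key column, and sample data:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed: an existing Delta table at this path keyed by event_id.
target = DeltaTable.forPath(spark, "/tmp/events")

new_events = spark.createDataFrame(
    [(1, "click"), (2, "view")],
    ["event_id", "action"],
)

# Insert only the rows whose event_id doesn't already exist in the table.
(target.alias("t")
    .merge(new_events.alias("s"), "t.event_id = s.event_id")
    .whenNotMatchedInsertAll()
    .execute())
```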
quinn
-
Brainstorming functions to make PySpark easier
We're brainstorming functions to make PySpark easier, see this issue: https://github.com/MrPowers/quinn/issues/83
-
PySpark OSS Contribution Opportunity
Adding some documentation to the README should be quite straightforward. Here's a function that needs to be documented: https://github.com/MrPowers/quinn/issues/52
-
Invitation to collaborate on open source PySpark projects
quinn is a library with PySpark helper functions. I need to work through all the open issues / PRs, bump all the versions, and do another release. This library gets around 600,000 monthly downloads.
-
PySpark now provides a native Pandas API
Pandas syntax is far inferior to regular PySpark, in my opinion. It goes to show how much data analysts value a syntax they're already familiar with. Pandas syntax makes it harder to reason about queries, abstract DataFrame transformations, etc. I've authored some popular PySpark libraries like quinn and chispa and am not excited to add Pandas syntax support, haha.
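A small side-by-side sketch of the difference (assumes Spark >= 3.2, where `pyspark.pandas` ships natively; the transformation and data are illustrative):

```python
from pyspark.sql import SparkSession, functions as F
import pyspark.pandas as ps

spark = SparkSession.builder.getOrCreate()

# Regular PySpark: transformations are plain functions over DataFrames,
# so they compose via .transform() and unit test cleanly.
def with_greeting(df):
    return df.withColumn("greeting", F.lit("hi"))

df = spark.createDataFrame([("alice",), ("bob",)], ["name"])
df.transform(with_greeting).show()

# Pandas-on-Spark: familiar pandas syntax, but mutation-style assignment
# makes the same logic harder to package as a reusable transformation.
pdf = ps.DataFrame({"name": ["alice", "bob"]})
pdf["greeting"] = "hi"
print(pdf.head())
```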
-
Register Native Functions in PySpark
Here's how I added a create_df method to the SparkSession class: https://github.com/MrPowers/quinn/blob/main/quinn/extensions/spark_session_ext.py
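The pattern in that file is a straightforward monkey-patch of `SparkSession`. A sketch of the idea (details may differ from the current quinn source):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType

def create_df(self, rows_data, col_specs):
    # Build a schema from (name, type, nullable) tuples, then delegate
    # to the standard createDataFrame.
    struct_fields = [StructField(*spec) for spec in col_specs]
    return self.createDataFrame(data=rows_data, schema=StructType(struct_fields))

# Attach the helper so it's callable as a method on any SparkSession.
SparkSession.create_df = create_df

# Usage:
# from pyspark.sql.types import StringType, IntegerType
# spark.create_df(
#     [("jose", 1), ("li", 2)],
#     [("name", StringType(), True), ("age", IntegerType(), True)],
# )
```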
-
Is Spark - The Definitive Guide outdated?
They also spent a lot of effort improving the Catalyst engine under the hood and making it easier to extend and improve in the future, which makes it easier to add your own native code to Spark itself. Shameless plug for a blog post I wrote on this subject, which basically reiterates what Matthew Powers, author of spark-daria and quinn, wrote here.
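One concrete payoff of that extensibility: new functions land in Catalyst (and Spark SQL) before they get DataFrame API wrappers, and `F.expr()` compiles a SQL fragment straight to a Catalyst expression in the meantime. A sketch, using `regexp_extract_all`, which was SQL-only for a while before getting a Python function:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a1b22",)], ["text"])

# '\\d' inside the SQL string literal is the regex \d.
df.withColumn("digits", F.expr(r"regexp_extract_all(text, '(\\d+)', 1)")).show()
```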
-
Ask HN: What are some tools / libraries you built yourself?
I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents (see the testing sketch after this list).
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
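Here's the kind of test workflow chispa enables; `assert_df_equality` is chispa's main entry point, and the transformation and data are illustrative:

```python
from chispa import assert_df_equality
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[2]").getOrCreate()

def test_lowercase_names():
    df = spark.createDataFrame([("JOSE",), ("LI",)], ["name"])
    expected = spark.createDataFrame([("jose",), ("li",)], ["name"])
    result = df.select(F.lower(F.col("name")).alias("name"))
    # Fails with a readable row-by-row diff when the DataFrames differ.
    assert_df_equality(result, expected)
```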
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
What are some alternatives?
chispa - PySpark test helper methods with beautiful error messages
delta-rs - A native Rust library for Delta Lake, with bindings into Python
spark-daria - Essential Spark extensions and helper methods ✨😲
os-lib - OS-Lib is a simple, flexible, high-performance Scala interface to common OS filesystem and subprocess APIs
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
jodie - Delta lake and filesystem helper methods
null - Nullable Go types that can be marshalled/unmarshalled to/from JSON.
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
etl-markup-toolkit - ETL Markup Toolkit is a spark-native tool for expressing ETL transformations as configuration
lowdefy - The config web stack for business apps - build internal tools, client portals, web apps, admin panels, dashboards, web sites, and CRUD apps with YAML or JSON.
flintrock - A command-line tool for launching Apache Spark clusters.