mack vs chispa
mack
-
Implementing and using SCD Type 2
There's also this library from Databricks, but I have never used it: https://github.com/MrPowers/mack
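For reference, a minimal sketch of how an SCD Type 2 upsert looks with mack. The function name and argument order are recalled from mack's README rather than verified here; `spark` is assumed to be an existing SparkSession, and the base table is assumed to carry `is_current`, `effective_time`, and `end_time` tracking columns.

```python
# Hedged sketch: SCD Type 2 upsert via mack (signature from memory; verify against the README).
from delta.tables import DeltaTable
import mack

# Base Delta table with columns: pkey, attr1, attr2, is_current, effective_time, end_time
delta_table = DeltaTable.forPath(spark, "/tmp/customers")

updates_df = spark.createDataFrame(
    [(1, "A", "new_value"), (4, "D", "first_value")],
    ["pkey", "attr1", "attr2"],
)

# Closes out rows whose attributes changed (is_current=False, end_time set)
# and inserts the new versions as the current rows.
mack.type_2_scd_upsert(delta_table, updates_df, "pkey", ["attr1", "attr2"])
```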
-
Spark/Databricks seems amazing?
I was a Databricks user for 5 years and spent almost all my time inside the IntelliJ IDE developing code. I wrote almost all code in a text editor, unit tested all of it (I actually authored the popular Scala Spark / PySpark testing libraries: https://github.com/MrPowers/), and had everything set up with CI/CD. Lots of OSS PySpark / Scala Spark work too. I only used Databricks notebooks for data exploration and for lightweight notebooks that invoked functions defined in Python wheel / JAR files. I am on the Delta Lake team at Databricks now, still do all my work in text editors (see this project: https://github.com/MrPowers/mack), and create lots of examples in Jupyter notebooks. So I definitely think it's possible to limit notebook exposure.
-
PySpark OSS Contribution Opportunity
Great, would love your help. You can also check out the mack project if you'd like to work on a Delta Lake + PySpark project: https://github.com/MrPowers/mack/issues
-
Spark open source community is awesome
A couple of devs just added a `find_composite_key_candidates` function so users can easily identify columns that could serve as a unique identifier in their Delta table.
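A rough usage sketch (the exact signature is assumed from the function's description; `spark` is an existing SparkSession and the table path is made up):

```python
# Hedged sketch: surfacing candidate unique-identifier columns with mack.
from delta.tables import DeltaTable
import mack

delta_table = DeltaTable.forPath(spark, "/tmp/orders")

# Returns column combinations whose values uniquely identify rows,
# i.e. candidates for a composite key.
candidates = mack.find_composite_key_candidates(delta_table)
print(candidates)  # e.g. ['order_id'] or ['customer_id', 'order_date']
```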
-
How to append data to Delta tables without adding any duplicates
Fair points. Here's the code repo: https://github.com/MrPowers/mack
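The pattern mack automates here is essentially a Delta Lake merge that only inserts rows whose key is not already present. A minimal sketch with the plain delta-spark API (the table path, key column, and `spark` session are assumptions for illustration):

```python
# Hedged sketch: append new rows without duplicates using a Delta Lake merge.
from delta.tables import DeltaTable

delta_table = DeltaTable.forPath(spark, "/tmp/events")
new_df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "payload"])

(
    delta_table.alias("target")
    .merge(new_df.alias("source"), "target.id = source.id")  # `id` acts as the dedup key
    .whenNotMatchedInsertAll()   # insert only rows whose key is not already in the table
    .execute()                   # matched rows are left untouched, so nothing is duplicated
)
```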
chispa
-
Testing spark applications
Unit and e2e tests using a combination of pytest and chispa (https://github.com/MrPowers/chispa). A custom library creates random test data that fits the schema, with optional hardcoded overrides for relevant fields to test business logic.
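To make that concrete, here is a minimal pytest + chispa sketch; the transformation under test (`with_greeting`) is a made-up example:

```python
# Minimal pytest + chispa sketch; `with_greeting` is a hypothetical transformation under test.
import pytest
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from chispa import assert_df_equality


def with_greeting(df):
    return df.withColumn("greeting", F.lit("hello"))


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[*]").appName("tests").getOrCreate()


def test_with_greeting(spark):
    source_df = spark.createDataFrame([("alice",), ("bob",)], ["name"])
    expected_df = spark.createDataFrame(
        [("alice", "hello"), ("bob", "hello")], ["name", "greeting"]
    )
    assert_df_equality(with_greeting(source_df), expected_df)
```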
-
Spark open source community is awesome
Here's a little README fix a user pushed to chispa.
-
Invitation to collaborate on open source PySpark projects
chispa is a library of PySpark testing functions.
-
installing pyspark on my m1 mac, getting an env error
The other approach I've used is Poetry; see the chispa project as an example. Poetry is especially nice for projects you'd like to publish to PyPI because those commands are built in.
-
Spark: local dev environment
- All Spark transformations are tested with pytest + chispa (https://github.com/MrPowers/chispa)
-
Pyspark now provides a native Pandas API
Pandas syntax is far inferior to regular PySpark in my opinion. Goes to show how much data analysts value a syntax that they're already familiar with. Pandas syntax makes it harder to reason about queries, abstract DataFrame transformations, etc. I've authored some popular PySpark libraries like quinn and chispa and am not excited to add Pandas syntax support, haha.
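To illustrate the syntax contrast being argued here (hypothetical columns; both snippets compute the same thing):

```python
# Illustration only: the same derived column in pandas-on-Spark vs. regular PySpark.
from pyspark.sql import SparkSession
import pyspark.pandas as ps
import pyspark.sql.functions as F

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Pandas-on-Spark style: mutate columns in place.
psdf = ps.DataFrame({"price": [10.0, 20.0], "qty": [1, 3]})
psdf["total"] = psdf["price"] * psdf["qty"]

# Regular PySpark style: express the logic as a reusable DataFrame -> DataFrame function,
# which is easier to unit test and to chain with other transformations.
def with_total(df):
    return df.withColumn("total", F.col("price") * F.col("qty"))

df = spark.createDataFrame([(10.0, 1), (20.0, 3)], ["price", "qty"])
result_df = df.transform(with_total)
```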
-
Show dataengineering: beavis, a library for unit testing Pandas/Dask code
I am the author of spark-fast-tests and chispa, libraries for unit testing Scala Spark / PySpark code.
-
Tips for building popular open source data engineering projects
Blogging has been the main way I've been able to attract users. Someone searches "testing PySpark", sees the blog post, and is then motivated to try chispa.
-
Ask HN: What are some tools / libraries you built yourself?
I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
What are some alternatives?
delta-rs - A native Rust library for Delta Lake, with bindings into Python
spark-fast-tests - Apache Spark testing helpers (dependency free & works with Scalatest, uTest, and MUnit)
os-lib - OS-Lib is a simple, flexible, high-performance Scala interface to common OS filesystem and subprocess APIs
spark-daria - Essential Spark extensions and helper methods ✨😲
jodie - Delta lake and filesystem helper methods
quinn - pyspark methods to enhance developer productivity 📣 👯 🎉
lowdefy - The config web stack for business apps - build internal tools, client portals, web apps, admin panels, dashboards, web sites, and CRUD apps with YAML or JSON.
null - Nullable Go types that can be marshalled/unmarshalled to/from JSON.
dagster - An orchestration platform for the development, production, and observation of data assets.
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
meltano
leapp - Leapp is the DevTool to access your cloud