os-lib vs chispa
| | os-lib | chispa |
|---|---|---|
| Mentions | 3 | 11 |
| Stars | 603 | 376 |
| Growth | 1.7% | - |
| Activity | 8.4 | 0.0 |
| Latest commit | 3 days ago | 26 days ago |
| Language | Scala | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
os-lib
-
Scala CLI v1.0.0 is out!
Then something like os-lib can probably cover your scripting needs.
-
Spark open source community is awesome
another dev is working on adding an elegant interface to perform Hadoop filesystem operations, similar to os-lib for regular filesystem operations
-
What learning path did you follow into Scala?
In particular, the ammonite REPL and os-lib are absolutely ace.
chispa
-
Spark open source community is awesome
here's a little README fix a user pushed to chispa
-
Invitation to collaborate on open source PySpark projects
chispa is a library of PySpark testing functions.
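As an illustration of what those testing functions look like in practice, here is a minimal sketch using chispa's `assert_df_equality` helper (the DataFrames and column names below are made up for the example):

```python
from pyspark.sql import SparkSession
from chispa import assert_df_equality

spark = SparkSession.builder.master("local[*]").appName("chispa-demo").getOrCreate()

# Two tiny DataFrames that should be identical
actual_df = spark.createDataFrame([(1, "jose"), (2, "li")], ["id", "name"])
expected_df = spark.createDataFrame([(1, "jose"), (2, "li")], ["id", "name"])

# Passes silently when the DataFrames match; raises an error with a
# readable diff when they differ
assert_df_equality(actual_df, expected_df)
```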
-
installing pyspark on my m1 mac, getting an env error
The other approach I've used is Poetry, see the chispa project as an example. Poetry is especially nice for projects that you'd like to publish to PyPI because those commands are built-in.
-
Spark: local dev environment
- All Spark transformations are tested with pytest + chispa (https://github.com/MrPowers/chispa)
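A sketch of what that pytest + chispa setup can look like (the `with_greeting` transformation and the session fixture below are hypothetical, just to show the testing pattern):

```python
import pytest
import pyspark.sql.functions as F
from pyspark.sql import SparkSession
from chispa import assert_df_equality

@pytest.fixture(scope="session")
def spark():
    # One local SparkSession shared across the whole test session
    return SparkSession.builder.master("local[*]").appName("tests").getOrCreate()

# Hypothetical transformation under test
def with_greeting(df):
    return df.withColumn("greeting", F.lit("hello"))

def test_with_greeting(spark):
    source_df = spark.createDataFrame([("alice",), ("bob",)], ["name"])
    expected_df = spark.createDataFrame(
        [("alice", "hello"), ("bob", "hello")], ["name", "greeting"]
    )
    assert_df_equality(with_greeting(source_df), expected_df)
```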
-
Pyspark now provides a native Pandas API
Pandas syntax is far inferior to regular PySpark in my opinion. Goes to show how much data analysts value a syntax that they're already familiar with. Pandas syntax makes it harder to reason about queries, abstract DataFrame transformations, etc. I've authored some popular PySpark libraries like quinn and chispa and am not excited to add Pandas syntax support, haha.
-
Show dataengineering: beavis, a library for unit testing Pandas/Dask code
I am the author of spark-fast-tests and chispa, libraries for unit testing Scala Spark / PySpark code.
-
Tips for building popular open source data engineering projects
Blogging has been the main way I've been able to attract users. Someone searches "testing PySpark", they see this blog, and then they're motivated to try chispa.
-
Ask HN: What are some tools / libraries you built yourself?
I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
-
Why Databricks Is Winning
The last point was for teams that only rely on notebooks, sorry if I didn't make that clear.
You're right that all those issues can be sidestepped if you build projects in version controlled Git repos, test the code, and deploy JAR / Wheel files.
Speaking of testing, can you let me know if this PySpark testing fix worked for you ;) https://github.com/MrPowers/chispa/issues/6
What are some alternatives?
spark-fast-tests - Apache Spark testing helpers (dependency free & works with Scalatest, uTest, and MUnit)
spark-daria - Essential Spark extensions and helper methods ✨😲
quinn - pyspark methods to enhance developer productivity 📣 👯 🎉
lowdefy - The easiest config web stack on top of Next.js - build internal tools, web apps, admin panels, BI dashboards, web sites, and CRUD apps with YAML or JSON.
dagster - An orchestration platform for the development, production, and observation of data assets.
null - Nullable Go types that can be marshalled/unmarshalled to/from JSON.
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
Eso - A (mostly) purely functional console-based esoteric language interpreter.
pfps-shopping-cart - :shopping_cart: The Shopping Cart application developed in the book "Practical FP in Scala: A hands-on approach"
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
leapp - Leapp is the DevTool to access your cloud
meltano