spark-daria VS chispa

Compare spark-daria vs chispa and see what their differences are.

spark-daria

Essential Spark extensions and helper methods ✨😲 (by MrPowers)

chispa

PySpark test helper methods with beautiful error messages (by MrPowers)
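To make that one-line description concrete, here is a minimal sketch of how chispa is typically used in a test. assert_df_equality is chispa's documented comparison helper; the local SparkSession setup and sample data are illustrative assumptions.

    # Minimal sketch, assuming a local SparkSession and `pip install chispa`.
    from pyspark.sql import SparkSession
    from chispa import assert_df_equality

    spark = SparkSession.builder.master("local[1]").appName("chispa-demo").getOrCreate()

    expected = spark.createDataFrame([(1, "jose"), (2, "li")], ["id", "name"])
    actual = spark.createDataFrame([(1, "jose"), (2, "li")], ["id", "name"])

    # Passes silently on a match; on a mismatch chispa raises an error that
    # includes a readable, row-by-row diff rather than a bare AssertionError.
    assert_df_equality(actual, expected)

spark-fast-tests fills the same role for Scala Spark, while spark-daria supplies general helper methods on the Scala side.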
                 spark-daria          chispa
Mentions         4                    12
Stars            742                  508
Growth           -                    -
Activity         0.0                  6.7
Last commit      about 2 years ago    4 days ago
Language         Scala                Python
License          MIT License          MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

spark-daria

Posts with mentions or reviews of spark-daria. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-13.
  • Lakehouse architecture in Azure Synapse without Databricks?
    2 projects | /r/dataengineering | 13 Apr 2023
    I was a Databricks user for 5 years and spent 95% of my time developing Spark code in IDEs. See the spark-daria and spark-fast-tests projects as Scala examples. I developed internal libraries with all the business logic. The Databricks notebooks would consist of a few lines of code that would invoke a function in the proprietary Spark codebase. The proprietary Spark codebase would depend on the OSS libraries I developed in parallel.
  • Is Spark - The Definitive Guide outdated?
    2 projects | /r/apachespark | 1 Jul 2021
    They also spent a lot of effort improving the Catalyst engine under the hood and making it easier to extend and improve in the future, which makes it easy to add your own native code to Spark itself. Shameless plug of a blog post I wrote on this subject, which basically reiterates what Matthew Powers, author of Spark Daria and quinn, wrote here.
  • Ask HN: What are some tools / libraries you built yourself?
    264 projects | news.ycombinator.com | 16 May 2021
    I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code, and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.

    quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.

    Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.

    Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8

  • Open source contributions for a Data Engineer?
    17 projects | /r/dataengineering | 16 Apr 2021
    I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
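The posts above describe a recurring workflow: keep the business logic in an importable library, reduce notebooks (or jobs) to thin calls into that library, and test the library functions with chispa or spark-fast-tests. Below is a hedged PySpark sketch of that split; the function name with_full_name and the sample data are hypothetical, and only the pyspark and chispa calls are real APIs.

    # Library code: business logic lives in a versioned package, not in a notebook.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F
    from chispa import assert_df_equality

    def with_full_name(df: DataFrame) -> DataFrame:
        # Hypothetical transformation; a real project would ship this in a wheel or JAR.
        return df.withColumn("full_name", F.concat_ws(" ", F.col("first"), F.col("last")))

    # Notebook (or test) code: just wire data into the library function.
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    source = spark.createDataFrame([("Ada", "Lovelace")], ["first", "last"])
    result = with_full_name(source)

    expected = spark.createDataFrame(
        [("Ada", "Lovelace", "Ada Lovelace")], ["first", "last", "full_name"]
    )
    # ignore_nullable avoids a schema mismatch on the derived column's nullability.
    assert_df_equality(result, expected, ignore_nullable=True)

The same split works on the Scala side, with spark-daria supplying the helper methods and spark-fast-tests supplying the assertions.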

chispa

Posts with mentions or reviews of chispa. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-29.

What are some alternatives?

When comparing spark-daria and chispa you can also consider the following projects:

quinn - pyspark methods to enhance developer productivity 📣 👯 🎉

spark-fast-tests - Apache Spark testing helpers (dependency free & works with Scalatest, uTest, and MUnit)

Task - A task runner / simpler Make alternative written in Go

Prefect - The easiest way to build, run, and monitor data pipelines at scale.

lowdefy - The config web stack for business apps - build internal tools, client portals, web apps, admin panels, dashboards, web sites, and CRUD apps with YAML or JSON.

null - Nullable Go types that can be marshalled/unmarshalled to/from JSON.

dagster - An orchestration platform for the development, production, and observation of data assets.

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.