| | ZparkIO | frameless |
|---|---|---|
| Mentions | 1 | 9 |
| Stars | 173 | 870 |
| Growth | - | 0.0% |
| Activity | 3.2 | 8.1 |
| Last commit | 17 days ago | about 21 hours ago |
| Language | Scala | Scala |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
-
for comprehension and some questions
I don't see how Spark is any "less controversial" when the Spark Delay instance for cats-effect takes an entire SparkSession implicitly.
-
Why use Spark at all?
To add to this: I have lately used Spark with frameless for compile-time safety, and it's an interesting library that works well with Spark.
-
Guide for Apache Spark Setup, Job Optimisation, AWS EMR Cluster Configuration, S3, YARN and HDFS Optimisation
For type safety with dataframes, techniques like https://github.com/typelevel/frameless can be used.
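As a sketch of what that type safety looks like, here is a minimal frameless example (the `User` case class and data are invented for illustration; assumes a local SparkSession and the frameless-dataset dependency on the classpath):

```scala
import org.apache.spark.sql.SparkSession
import frameless.TypedDataset
import frameless.syntax._

case class User(name: String, age: Int)

implicit val spark: SparkSession =
  SparkSession.builder().master("local[*]").appName("frameless-demo").getOrCreate()

val users = TypedDataset.create(Seq(User("ada", 36), User("bob", 15)))

// Column references are checked at compile time: users('agee) would not
// compile, and users('age) is typed as TypedColumn[User, Int].
val adults = users.filter(users('age) >= 18)

val adultsList = adults.collect().run().toList
println(adultsList) // List(User(ada,36))

spark.stop()
```

A misspelled or wrongly-typed column is a compile error here, rather than an `AnalysisException` at runtime as with plain DataFrames.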
-
Spark scala v/s pyspark
The preferred way to write Spark programs is the DataFrame API, which is untyped and essentially the same in Scala, C#, and Python: it's a DSL used to describe the AST of the computation, and the end result is the same regardless of language. There's a library called Frameless (https://github.com/typelevel/frameless) that implements a typed DataFrame API, but it is not in wide use; it looked dead for quite some time (though development now seems to have resumed) and didn't play nicely with IntelliJ IDEA last time I checked. Performance-wise there's no difference most of the time (since all the program does is create an AST), except when using UDFs: Python UDFs are significantly slower, and you can't write "proper" UDFs in Python, i.e. ones that generate Java code.
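The point that the DataFrame DSL merely describes an AST can be seen directly in vanilla Spark: transformations build a logical plan and nothing runs until an action is invoked (a minimal sketch; assumes a local SparkSession):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, upper}

val spark = SparkSession.builder().master("local[*]").appName("ast-demo").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "label")

// Transformations only build a logical plan (the AST); nothing executes here.
val query = df.filter(col("id") > 1).select(upper(col("label")).as("label"))
println(query.queryExecution.logical) // inspect the unresolved plan

// An action triggers actual execution of that plan.
val result = query.as[String].collect().toList
println(result) // List(B)

spark.stop()
```

The same pipeline written in PySpark produces the same logical plan, which is why performance is usually identical across languages until a UDF forces rows out of the JVM.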
-
Does anyone here (intentionally) use Scala without an effects library such as Cats or ZIO? Or without going "full Haskell"?
Frameless is a nice way to grab some type safety back from Spark, and features opt-in Cats integration.
-
Making the Spark DataFrame composition type safe(r)
Valid point! Have you seen the withColumnTupled API? It returns a typed tuple instead. This seems to satisfy your use case: the dataset preserves its type and doesn't require a new case class. It's kind of what you're suggesting, but without case class generation. Though I'm not sure whether attribute labels (names) are preserved in this case, and it's also unclear whether this is good enough for wide tables.
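For reference, a rough sketch of how `withColumnTupled` behaves in frameless (the `Features` case class and data are invented; assumes frameless-dataset and a local SparkSession):

```scala
import org.apache.spark.sql.SparkSession
import frameless.TypedDataset
import frameless.syntax._

case class Features(i: Int, j: Int)

implicit val spark: SparkSession =
  SparkSession.builder().master("local[*]").appName("tupled-demo").getOrCreate()

val fs = TypedDataset.create(Seq(Features(1, 2), Features(3, 4)))

// withColumnTupled appends the derived column and flattens the result into a
// tuple, so no new case class is needed; note that field names are lost in
// the static type, which is the "attribute labels" concern above.
val withSum: TypedDataset[(Int, Int, Int)] = fs.withColumnTupled(fs('i) + fs('j))

val rows = withSum.collect().run().toList
println(rows)

spark.stop()
```

This stays fully typed, at the cost of positional (tuple) access rather than named fields.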
-
Recommendations for specializing in Spark (Scala)
I recommend using Frameless, which includes a Cats module. In general, I would encourage you to master “purely” functional programming first, because it’s foundational. Spark is a very specific technology, and probably not even the best in that class today—I would be very careful about trying to build a career around it.
What are some alternatives?
zio-spark - A functional wrapper around Spark to make it work with ZIO
Lantern
zio-prelude - A lightweight, distinctly Scala take on functional abstractions, with tight ZIO integration
spark-excel - A Spark plugin for reading and writing Excel files
zio-http - A next-generation Scala framework for building scalable, correct, and efficient HTTP clients and servers
deequ - A library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
zio-akka-cluster - ZIO wrapper for Akka Cluster
azure-kusto-spark - Apache Spark Connector for Azure Kusto
snowpark-scala-template - Scala project template for Snowpark development
bebe - Filling in the Spark function gaps across APIs
zio-entity - Zio-Entity, a distributed, high performance, functional event sourcing library
cats-effect - The pure asynchronous runtime for Scala