frameless
Expressive types for Spark. (by typelevel)
spark-excel
A Spark plugin for reading and writing Excel files (by crealytics)
| | frameless | spark-excel |
|---|---|---|
| Mentions | 9 | 8 |
| Stars | 868 | 433 |
| Growth | -0.2% | 2.1% |
| Activity | 8.2 | 8.6 |
| Last commit | 1 day ago | 8 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
frameless
Posts with mentions or reviews of frameless.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-01-22.
-
for comprehension and some questions
I don't see how Spark is any "less controversial" when the Spark Delay instance for cats-effect takes an entire SparkSession implicitly.
-
Why use Spark at all?
To add to this I lately have used Spark with frameless for compile time safety and it's an interesting library that works well with Spark.
-
Guide for Apache Spark Setup, Job Optimisation, AWS EMR Cluster Configuration, S3, YARN and HDFS Optimisation
For type safety with dataframes, techniques like https://github.com/typelevel/frameless can be used.
-
Spark scala v/s pyspark
The preferred way to write Spark programs is the DataFrame API, which is untyped and essentially the same in Scala, C# and Python. It's a DSL used to describe the AST of the computation, and the end result is the same regardless of language. There's a library called Frameless (https://github.com/typelevel/frameless) that implements a typed DataFrame API, but it is not in wide use: it looked dead for quite some time (though development now seems to have resumed) and didn't play nicely with IntelliJ IDEA last time I checked. Performance-wise there's no difference most of the time (since all the program does is create an AST), except when using UDFs - Python UDFs are significantly slower, and you can't write "proper" UDFs in Python - ones that generate Java code.
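To illustrate the typed API mentioned above, here is a minimal sketch of frameless's TypedDataset, assuming spark-sql and frameless-dataset are on the classpath; the `Person` case class and values are illustrative:

```scala
import frameless.TypedDataset
import frameless.syntax._
import org.apache.spark.sql.{SparkSession, SQLContext}

case class Person(name: String, age: Int)

object TypedSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("typed-sketch").getOrCreate()
  implicit val sqlContext: SQLContext = spark.sqlContext

  val people = TypedDataset.create(Seq(Person("Ada", 36), Person("Alan", 41)))

  // Column references are checked at compile time: people('age) compiles,
  // while a typo such as people('agee) is rejected by the compiler --
  // unlike the plain DataFrame API's df("agee"), which fails only at runtime.
  val ages: TypedDataset[Int] = people.select(people('age))
  ages.dataset.show()

  spark.stop()
}
```

The compile-time column check is the main thing the post is referring to: the error surfaces before the job is ever submitted, not midway through a cluster run.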
-
Does anyone here (intentionally) use Scala without an effects library such as Cats or ZIO? Or without going "full Haskell"?
Frameless is a nice way to grab some type safety back from Spark, and features opt-in Cats integration.
-
Making the Spark DataFrame composition type safe(r)
Valid point! Have you seen the withColumnTupled API? It returns a typed tuple instead. This seems to satisfy your use case - the dataset preserves its type and doesn't require a new case class. This is kind of what you're suggesting but without case class generation. Though I'm not sure whether attribute labels (names) are preserved in this case. It's also unclear whether this is good enough for wide tables.
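A rough sketch of the withColumnTupled usage described above, assuming frameless is on the classpath; the `City` case class and the derived column are made up for illustration:

```scala
import frameless.TypedDataset
import frameless.syntax._
import org.apache.spark.sql.{SparkSession, SQLContext}

case class City(name: String, population: Long)

object WithColumnTupledSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("tupled-sketch").getOrCreate()
  implicit val sqlContext: SQLContext = spark.sqlContext

  val cities = TypedDataset.create(Seq(City("Oslo", 700000L), City("Bergen", 290000L)))

  // withColumnTupled appends the new column to the existing fields and
  // returns a TypedDataset of a tuple -- no new case class needed,
  // although the original field names become tuple positions (_1, _2, ...).
  val withDoubled: TypedDataset[(String, Long, Long)] =
    cities.withColumnTupled(cities('population) * 2L)

  withDoubled.dataset.show()
  spark.stop()
}
```

This keeps the dataset typed after adding a column, at the cost of losing named fields, which is the label-preservation concern raised in the post.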
-
Recommendations for specializing in Spark (Scala)
I recommend using Frameless, which includes a Cats module. In general, I would encourage you to master "purely" functional programming first, because it's foundational. Spark is a very specific technology, and probably not even the best in that class today, so I would be very careful about trying to build a career around it.
spark-excel
Posts with mentions or reviews of spark-excel.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-17.
- Pandas was faster and less memory intensive than crealytics pyspark. How is it possible?
-
Automating Excel to Databricks Table
Not natively. But the com.crealytics.spark.excel library has had great results for us. There are still some cases where pandas manipulation is needed with Excel files that have weird header setups.
-
Can AWS Glue convert a JSON payload to Excel tab? (not csv)
Spark 3.2's pandas API (pyspark.pandas) has a to_excel() method, but Spark 3.1 does not, so you'll need to use an external library such as https://github.com/crealytics/spark-excel
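With spark-excel, writing a DataFrame out as an Excel sheet looks roughly like this sketch; the sheet name and output path are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object ExcelWriteSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("excel-write").getOrCreate()
  import spark.implicits._

  val df = Seq(("widget", 3), ("gadget", 5)).toDF("item", "qty")

  // The spark-excel data source is selected by its fully qualified name.
  // "dataAddress" picks the target sheet and top-left cell.
  df.write
    .format("com.crealytics.spark.excel")
    .option("dataAddress", "'Report'!A1") // hypothetical sheet name
    .option("header", "true")
    .mode("overwrite")
    .save("/tmp/report.xlsx")             // placeholder path

  spark.stop()
}
```

In a Glue job the same writer configuration applies, since Glue exposes an ordinary SparkSession; the library jar just has to be available to the job.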
-
Reading a xlsx file with PySpark
Have you checked spark-excel's documentation? The dataAddress option seems to be what you're looking for.
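The dataAddress option mentioned above takes an Excel-style cell range; a minimal read sketch, with a hypothetical sheet, range, and file path:

```scala
import org.apache.spark.sql.SparkSession

object ExcelReadSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("excel-read").getOrCreate()

  // Only the cells in 'Sheet2'!B3:F35 are read; with header=true,
  // the first row of that range (B3:F3) supplies the column names.
  val df = spark.read
    .format("com.crealytics.spark.excel")
    .option("dataAddress", "'Sheet2'!B3:F35") // hypothetical sheet/range
    .option("header", "true")
    .load("/tmp/input.xlsx")                  // placeholder path

  df.printSchema()
  spark.stop()
}
```

The same options work from PySpark via `spark.read.format("com.crealytics.spark.excel")`, since the data source is resolved by name rather than by a language binding.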
-
read percentage values in spark ( no casting )
Are you using this library to load xlsx files? https://github.com/crealytics/spark-excel
-
Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;
I found similar issues on their github: https://github.com/crealytics/spark-excel/issues/227
-
How do I learn to read a plug-in?
The plug-in in question is GitHub - crealytics/spark-excel: A Spark plugin for reading Excel files via Apache POI, but I guess it could be any. Assuming that I can read the plain code in an individual .scala file, how do I learn to understand how it all pieces together and what the underlying code being run is?
What are some alternatives?
When comparing frameless and spark-excel you can also consider the following projects:
Lantern
SynapseML - Simple and Distributed Machine Learning
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
Quill - Compile-time Language Integrated Queries for Scala
azure-kusto-spark - Apache Spark Connector for Azure Kusto
bebe - Filling in the Spark function gaps across APIs
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
cats-effect - The pure asynchronous runtime for Scala
cobrix - A COBOL parser and Mainframe/EBCDIC data source for Apache Spark
typeclassopedia - My tinkering to understand the typeclassopedia.
metorikku - A simplified, lightweight ETL Framework based on Apache Spark