cobrix vs spark-excel

| | cobrix | spark-excel |
|---|---|---|
| Mentions | 1 | 8 |
| Stars | 133 | 439 |
| Growth | 0.0% | 1.6% |
| Activity | 8.2 | 8.6 |
| Latest commit | 8 days ago | 9 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cobrix
- I don't even feel bad for them...
- Wait. Oh. Oh no. No no no...
spark-excel
- Pandas was faster and less memory intensive than crealytics PySpark. How is that possible?
- Automating Excel to Databricks Table
  Not natively. But the com.crealytics.spark.excel library has had great results for us. There are still some cases where pandas manipulation is needed with Excel files that have weird header setups.
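The read path the comment describes can be sketched in Scala. This is a minimal sketch, not the commenter's actual pipeline: the file path and table name are placeholders, and it assumes the spark-excel JAR is on the cluster's classpath (recent spark-excel versions also accept the short format name `excel`).

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("excel-ingest")
  .getOrCreate()

// Read an Excel workbook into a DataFrame via spark-excel (Apache POI under the hood).
// "/mnt/raw/report.xlsx" is a placeholder path.
val df = spark.read
  .format("com.crealytics.spark.excel")
  .option("header", "true")        // first row holds column names
  .option("inferSchema", "true")   // let spark-excel guess column types
  .load("/mnt/raw/report.xlsx")

// From here the DataFrame can be saved as a Databricks table.
df.write.mode("overwrite").saveAsTable("bronze.report")
```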
- Can AWS Glue convert a JSON payload to an Excel tab? (not CSV)
  Spark 3.2's pandas API on Spark has a to_excel() method, but Spark 3.1 doesn't, so you'll need to use an external library such as https://github.com/crealytics/spark-excel
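The external-library route the answer points at can be sketched in Scala: parse the JSON payload into a DataFrame, then write it to a named worksheet with spark-excel. The bucket paths and sheet name below are illustrative, and the snippet assumes spark-excel is available on the classpath.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("json-to-excel").getOrCreate()

// Parse the incoming JSON payload (placeholder path) into a DataFrame...
val payload = spark.read.json("s3://my-bucket/payload.json")

// ...and write it to a tab named "Payload" in an xlsx workbook.
payload.write
  .format("com.crealytics.spark.excel")
  .option("dataAddress", "'Payload'!A1") // sheet name + top-left anchor cell of the tab
  .option("header", "true")
  .mode("overwrite")
  .save("s3://my-bucket/payload.xlsx")
```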
- Reading an xlsx file with PySpark
  Have you checked spark-excel's documentation? The dataAddress option seems to be what you're looking for.
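dataAddress takes a sheet name and an optional cell range, which lets you skip decorative title rows above the real header. A minimal Scala sketch, with the path, sheet name, and range chosen purely for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("data-address-demo").getOrCreate()

// Read only the cell range B3:F35 of the sheet named "Data",
// skipping the title rows above the actual header row.
val df = spark.read
  .format("com.crealytics.spark.excel")
  .option("dataAddress", "'Data'!B3:F35")
  .option("header", "true")
  .load("/mnt/raw/quirky-headers.xlsx")
```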
- read percentage values in Spark (no casting)
  Are you using this library to load xlsx files? https://github.com/crealytics/spark-excel
- Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.executor.memory;
  I found similar issues on their GitHub: https://github.com/crealytics/spark-excel/issues/227
- How do I learn to read a plug-in?
  The plug-in in question is GitHub - crealytics/spark-excel: A Spark plugin for reading Excel files via Apache POI, but I guess it could be any. Assuming I can read the plain code in an individual .scala file, how do I learn to understand how it all pieces together and what underlying code is being run?
What are some alternatives?
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
frameless - Expressive types for Spark.
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs
SynapseML - Simple and Distributed Machine Learning
Quill - Compile-time Language Integrated Queries for Scala
spark-nlp - State of the Art Natural Language Processing
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
COBOL-Guide - COBOL Guide
unlock-mainframe-data-files-on-aws - This solution is designed to help you unlock legacy mainframe data by migrating data files from mainframe systems to AWS. By migrating the data, you can make use of the powerful analytics, machine learning, and other services available in AWS to gain insights and make better decisions based on the data.
metorikku - A simplified, lightweight ETL Framework based on Apache Spark