| | frameless | deequ |
|---|---|---|
| Mentions | 9 | 17 |
| Stars | 870 | 3,126 |
| Growth | 0.0% | 0.6% |
| Activity | 8.1 | 7.4 |
| Latest commit | about 23 hours ago | 14 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
frameless
-
for comprehension and some questions
I don't see how Spark is any "less controversial" when the Spark Delay instance for cats-effect takes an entire SparkSession implicitly.
-
Why use Spark at all?
To add to this I lately have used Spark with frameless for compile time safety and it's an interesting library that works well with Spark.
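As a sketch of what that compile-time safety looks like: frameless wraps a `Dataset` in a `TypedDataset`, where column references are resolved at compile time rather than failing at runtime. The `Person` schema below is made up for illustration; `TypedDataset.create` expects an implicit `SparkSession` in scope.

```scala
import frameless.TypedDataset
import frameless.syntax._
import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Int)

// TypedDataset.create needs an implicit SparkSession in scope
implicit val spark: SparkSession =
  SparkSession.builder().master("local[*]").appName("frameless-demo").getOrCreate()

val people: TypedDataset[Person] =
  TypedDataset.create(Seq(Person("Ada", 36), Person("Grace", 45)))

// Column references are checked at compile time:
val names = people.select(people('name))   // TypedDataset[String]
// people.select(people('nam))             // does not compile: no column 'nam'
```

With the plain DataFrame API, the equivalent typo (`df.select("nam")`) only fails when the job runs.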
-
Guide for Apache Spark Setup, Job Optimisation, AWS EMR Cluster Configuration, S3, YARN and HDFS Optimisation
For type safety with dataframes, techniques like https://github.com/typelevel/frameless can be used.
-
Spark scala v/s pyspark
The preferred way to write Spark programs is the DataFrame API, which is untyped and essentially the same in Scala, C# and Python. It's a DSL used to describe the AST of the computation, and the end result is the same regardless of language. There's a library called Frameless (https://github.com/typelevel/frameless) that implements a typed DataFrame API, but it is not in wide use: it looked dead for quite some time (though development now seems to have resumed) and didn't play nicely with IntelliJ IDEA last time I checked. Performance-wise there's no difference most of the time (since all the program does is build an AST), except when using UDFs: Python UDFs are significantly slower, and you can't write "proper" UDFs in Python, i.e. ones that generate Java code.
-
Does anyone here (intentionally) use Scala without an effects library such as Cats or ZIO? Or without going "full Haskell"?
Frameless is a nice way to grab some type safety back from Spark, and features opt-in Cats integration.
-
Making the Spark DataFrame composition type safe(r)
Valid point! Have you seen the withColumnTupled API? It returns a typed tuple instead. This seems to satisfy your use case - the dataset preserves its type and doesn't require a new case class. This is kind of what you're suggesting but without case class generation. Though not sure whether attribute labels (names) are preserved in this case. It's also unclear whether this is good enough for wide tables.
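If I understand the API correctly, the shape is roughly the following (the `Person` schema is hypothetical, and I'm hedging on the exact inferred tuple type):

```scala
import frameless.TypedDataset
import frameless.syntax._

case class Person(name: String, age: Int)

def example(people: TypedDataset[Person]) = {
  // withColumnTupled appends the new column, widening the row type to a
  // tuple, so no new case class is needed -- but the original attribute
  // labels are replaced by positional names (_1, _2, _3)
  val withNextAge = people.withColumnTupled(people('age) + 1)
  // withNextAge: TypedDataset[(String, Int, Int)]
  withNextAge
}
```

This illustrates the trade-off mentioned above: the dataset stays typed without code generation, but the attribute names are lost in the tuple, which is likely to get unwieldy for wide tables.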
-
Recommendations for specializing in Spark (Scala)
I recommend using Frameless, which includes a Cats module. In general, I would encourage you to master “purely” functional programming first, because it’s foundational. Spark is a very specific technology, and probably not even the best in that class today—I would be very careful about trying to build a career around it.
deequ
-
[Data Quality] Deequ Feedback request
There's no straightforward way to drop and rerun a metric collection. For example, say you detect a problem in your data. You fix it, rerun the pipeline, and replace the bad data with good data. You'd want your metrics history to reflect the true state of your data, but the "bad run" cannot be dropped; there's an open GitHub issue about this.
-
Thoughts on a business rules engine
I had similar requirements for QA reporting on large and diverse data sets. I implemented data check pipelines, with rules in AWS Deequ (https://github.com/awslabs/deequ) running on an Apache Spark cluster. Deequ worked well for me, but there were a few cases where I opted to write the rule checks in the data store to improve throughput (e.g. SQL checks on critical data elements in the database).
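For context, a Deequ check pipeline of the kind described above looks roughly like this. The `VerificationSuite`/`Check` API is from Deequ's README; the column names are made up for illustration:

```scala
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
import org.apache.spark.sql.DataFrame

def runChecks(data: DataFrame): Boolean = {
  val result = VerificationSuite()
    .onData(data)
    .addCheck(
      Check(CheckLevel.Error, "critical data elements")
        .isComplete("order_id")     // no nulls
        .isUnique("order_id")       // no duplicates
        .isNonNegative("amount"))   // sanity check on values
    .run()

  result.status == CheckStatus.Success
}
```

Because the checks compile down to Spark aggregations, they scan the data on the cluster; pushing simple checks into the database as SQL, as described above, avoids moving the data at all.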
-
Building a data quality solution for devs and business people
Hey all! At the companies where I've worked as a developer, I've found that business stakeholders typically want a concrete way to check and assure the quality of the data that pipelines produce, before downstream systems and users are impacted. I've tested solutions like Deequ, but I found that it made building compliance and data rules more complicated and put the burden on developers to get the rules right that the business expected. I also ran into issues with running checks in parallel and with getting row-level details about the failures.
-
deequ VS cuallee - a user suggested alternative
2 projects | 30 Nov 2022
- November 15-19, 2022 FLiP Stack Weekly
- What are your favourite GitHub repos that shows how data engineering should be done?
- Well designed scala/spark project
-
Soda Core (OSS) is now GA! So, why should you add checks to your data pipelines?
GE (Great Expectations) is arguably the most well-known OSS alternative to Soda Core. The third option is deequ, originally developed and open-sourced by AWS. Our community has told us that Soda Core is different because it's easy to get going and embed into data pipelines, and it also allows some of the check-authoring work to be moved to other members of the data team. I'm sure there are also scenarios where Soda Core is not the best option - for example, when you only use Pandas dataframes or develop in Scala.
-
Congrats on hitting the v1 milestone, whylabs! You're r/MLOps OSS tool of the month!
I wonder how this compares with tools like Deequ (https://github.com/awslabs/python-deequ - requires Spark) or Pandas Profiling? One upside I can see is that it doesn't require Apache Spark to run profiling (though a quick look at the code indicates they are working on Spark support) and can work with real-time systems.
-
What companies/startups are using Scala (open source projects on github)?
There are so many of them in big data, e.g. Kafka, Spark, Flink, Delta, Snowplow, Finagle, Deequ, CMAK, OpenWhisk, Snowflake, TheHive, TVM-VTA, etc.
What are some alternatives?
Lantern
soda-sql - Data profiling, testing, and monitoring for SQL accessible data.
spark-excel - A Spark plugin for reading and writing Excel files
azure-kusto-spark - Apache Spark Connector for Azure Kusto
dbt-data-reliability - dbt package that is part of Elementary, the dbt-native data observability solution for data & analytics engineers. Monitor your data pipelines in minutes. Available as self-hosted or cloud service with premium features.
bebe - Filling in the Spark function gaps across APIs
Quill - Compile-time Language Integrated Queries for Scala
cats-effect - The pure asynchronous runtime for Scala
BigDL - Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.
typeclassopedia - My tinkering to understand the typeclassopedia.
re_data - re_data - fix data issues before your users & CEO would discover them 😊