| | frameless | Trino |
|---|---|---|
| Mentions | 9 | 44 |
| Stars | 870 | 9,576 |
| Growth | 0.0% | 1.8% |
| Activity | 8.1 | 10.0 |
| Latest commit | 1 day ago | 4 days ago |
| Language | Scala | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
frameless
-
for comprehension and some questions
I don't see how Spark is any "less controversial" when the Spark Delay instance for cats-effect takes an entire SparkSession implicitly.
-
Why use Spark at all?
To add to this, I have lately used Spark with frameless for compile-time safety; it's an interesting library that works well with Spark.
-
Guide for Apache Spark Setup, Job Optimisation, AWS EMR Cluster Configuration, S3, YARN and HDFS Optimisation
For type safety with dataframes, techniques like https://github.com/typelevel/frameless can be used.
-
Spark scala v/s pyspark
The preferred way to write Spark programs is to use the DataFrame API, which is untyped and essentially the same in Scala, C#, and Python. It's a DSL used to describe the AST of the computation, and the end result is the same regardless of language. There's a library called Frameless (https://github.com/typelevel/frameless) that implements a typed DataFrame API, but it is not in wide use; it looked dead for quite some time (though development now seems to have resumed) and didn't play nice with IntelliJ IDEA last time I checked. Performance-wise there's no difference most of the time (since all the program does is create an AST), except when using UDFs - Python UDFs are significantly slower, and you can't write "proper" UDFs in Python, i.e. ones that generate Java code.
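To illustrate the compile-time checking described above, here is a minimal frameless sketch. It assumes a local SparkSession and frameless's `TypedDataset` API; the `Apartment` case class and its data are made up for the example:

```scala
import frameless.TypedDataset
import frameless.syntax._
import org.apache.spark.sql.SparkSession

case class Apartment(city: String, surface: Int, price: Double)

object FramelessDemo extends App {
  implicit val spark: SparkSession =
    SparkSession.builder().master("local[*]").appName("frameless-demo").getOrCreate()

  val aptDs: TypedDataset[Apartment] = TypedDataset.create(Seq(
    Apartment("Paris", 50, 300000.0),
    Apartment("Lyon", 45, 200000.0)
  ))

  // Column references are checked at compile time:
  val cities: TypedDataset[String] = aptDs.select(aptDs('city))

  // A typo such as aptDs('citi) fails to compile here, whereas
  // df.select("citi") on a plain DataFrame only fails at runtime.
}
```

The point of the sketch is the last two lines: with the untyped DataFrame API, a misspelled column name surfaces as an `AnalysisException` when the job runs; with frameless it never compiles.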
-
Does anyone here (intentionally) use Scala without an effects library such as Cats or ZIO? Or without going "full Haskell"?
Frameless is a nice way to grab some type safety back from Spark, and features opt-in Cats integration.
-
Making the Spark DataFrame composition type safe(r)
Valid point! Have you seen the withColumnTupled API? It returns a typed tuple instead. This seems to satisfy your use case - the dataset preserves its type and doesn't require a new case class. This is kind of what you're suggesting, but without case class generation. Though I'm not sure whether attribute labels (names) are preserved in this case. It's also unclear whether this is good enough for wide tables.
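For reference, a hedged sketch of what `withColumnTupled` looks like in use. Names and types are assumed from the frameless docs, not verified against a particular release, and the `Apartment` case class is made up for the example:

```scala
import frameless.TypedDataset
import frameless.syntax._
import org.apache.spark.sql.SparkSession

case class Apartment(city: String, surface: Int, price: Double)

object WithColumnTupledDemo extends App {
  implicit val spark: SparkSession =
    SparkSession.builder().master("local[*]").appName("tupled-demo").getOrCreate()

  val aptDs = TypedDataset.create(Seq(Apartment("Paris", 50, 300000.0)))

  // withColumnTupled appends the new column and re-types the dataset as a
  // tuple of the existing fields plus the new one -- no new case class needed.
  // Note the trade-off raised above: the attribute labels become _1, _2, ...
  val widened: TypedDataset[(String, Int, Double, Double)] =
    aptDs.withColumnTupled(aptDs('surface).cast[Double])
}
```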
-
Recommendations for specializing in Spark (Scala)
I recommend using Frameless, which includes a Cats module. In general, I would encourage you to master “purely” functional programming first, because it’s foundational. Spark is a very specific technology, and probably not even the best in that class today—I would be very careful about trying to build a career around it.
Trino
- Trino: Fast distributed SQL query engine for big data analytics
-
Game analytic power: how we process more than 1 billion events per day
We decided not to waste time reinventing the wheel and simply installed Trino on our servers. It’s a full-featured SQL query engine that works on your data. Now our analysts can use it to work with data from AppMetr and execute queries at different levels of complexity.
-
Your Thoughts on OLAPs Clickhouse vs Apache Druid vs Starrocks in 2023/2024
DevRel for StarRocks. Trino doesn't have a great caching layer (https://github.com/trinodb/trino/pull/16375), has performance issues (https://github.com/trinodb/trino/issues/14237), and see also https://github.com/oap-project/Gluten-Trino. In benchmarks and community user testing, StarRocks has outperformed it.
-
Making Hard Things Easy
What if my SQL engine is Presto, Trino [1], or a similar query engine? If it's federating multiple source databases we peel the SQL back and get... SQL? Or you peel the SQL back and get... S3 + Mongo + Hadoop? Junior analysts would work at 1/10th the speed if they had to use those raw.
[1] https://trino.io/
- Trino, an open query engine that runs at ludicrous speed
-
Questions about Athena, Trino and Iceberg
The good thing is that the concepts, in terms of the SQL supported by Trino, transfer between them all. So it's completely reasonable to start with one and move to another. In fact, that is something that happens regularly. I invite you to check out the talks from the Trino Fest event that is just wrapping up today. There are presentations about all these aspects and the different scenarios users encounter. All videos and slides will go live on the Trino website soon. Also feel free to join the Trino Slack to chat about all this with other users.
-
Multi-Databases across Multiple Servers - MySQL
There are distributed query engines like Trino that help with this sort of problem https://trino.io/
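As a concrete sketch of the federation described above, a single Trino query can join tables living in different systems. The coordinator URL, credentials, and catalog/table names below are hypothetical (catalogs are whatever you configured on the server); the driver is Trino's own JDBC driver (`io.trino:trino-jdbc`):

```scala
import java.sql.DriverManager

object TrinoFederationDemo extends App {
  // Hypothetical coordinator URL; no password for an unauthenticated dev setup.
  val conn = DriverManager.getConnection("jdbc:trino://localhost:8080", "analyst", null)
  val stmt = conn.createStatement()

  // One SQL statement spanning two catalogs: one backed by MySQL,
  // one backed by Hive tables on S3.
  val rs = stmt.executeQuery(
    """SELECT o.order_id, c.name
      |FROM mysql.shop.orders AS o
      |JOIN hive.warehouse.customers AS c
      |  ON o.customer_id = c.id""".stripMargin)

  while (rs.next())
    println(s"${rs.getLong("order_id")} ${rs.getString("name")}")

  conn.close()
}
```

Trino resolves each `catalog.schema.table` prefix to a different connector, so the join itself runs in the engine rather than in either source database.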
-
Iceberg on Cloudtrail Logs with Athena
This issue in particular is a killer for me: https://github.com/trinodb/trino/issues/10974
-
Data Lake, Real-time Analytics, or Both? Exploring Presto and ClickHouse
AFAIK Presto was forked, and Trino (https://trino.io/) is now the leading SQL query engine.
-
Apache Iceberg as storage for on-premise data store (cluster)
Trino or Hive for SQL querying. Get Trino/Hive to talk to Nessie.
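Pointing Trino's Iceberg connector at Nessie comes down to a few catalog properties. This is a sketch with placeholder host and warehouse values; the property names follow Trino's Iceberg connector documentation:

```
# etc/catalog/iceberg.properties (placeholder values)
connector.name=iceberg
iceberg.catalog.type=nessie
iceberg.nessie-catalog.uri=http://nessie-host:19120/api/v2
iceberg.nessie-catalog.default-warehouse-dir=s3://warehouse/
```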
What are some alternatives?
Apache Spark - A unified analytics engine for large-scale data processing
spark-excel - A Spark plugin for reading and writing Excel files
dremio-oss - Dremio - the missing link in modern data
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
Presto - The official home of the Presto distributed SQL query engine for big data
azure-kusto-spark - Apache Spark Connector for Azure Kusto
Apache Drill - Apache Drill is a distributed MPP query layer for self describing data
bebe - Filling in the Spark function gaps across APIs
Apache Calcite
cats-effect - The pure asynchronous runtime for Scala
ClickHouse - ClickHouse® is a free analytics DBMS for big data