| | parsley | Apache Spark |
|---|---|---|
| Mentions | 2 | 101 |
| Stars | 161 | 38,469 |
| Growth | - | 0.8% |
| Activity | 7.8 | 10.0 |
| Latest commit | 3 days ago | 5 days ago |
| Language | Scala | Scala |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
parsley
-
How do I remove the forward reference error in my parser? (20 lines)
Or, alternatively, my own https://github.com/j-mie6/parsley for a more Haskell-style library - it has a wiki that discusses a lot of the main ideas, including how to deal with def/val/lazy val.
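The forward-reference problem the answer alludes to can be sketched with a toy combinator type - this is illustrative Scala, not parsley's real API:

```scala
// A minimal sketch of why `lazy val` matters for recursive parsers in
// Scala; a toy parser type, not parsley's actual API.
final case class P[A](run: String => Option[(A, String)]) {
  // `that` is by-name (=> P[B]) so a recursive reference on the
  // right-hand side is not evaluated until the parser actually runs.
  def ~>[B](that: => P[B]): P[B] =
    P(s => run(s).flatMap { case (_, rest) => that.run(rest) })
  def |(that: => P[A]): P[A] =
    P(s => run(s).orElse(that.run(s)))
}

def char(c: Char): P[Char] =
  P(s => if (s.nonEmpty && s.head == c) Some((c, s.tail)) else None)

def pure[A](a: A): P[A] = P(s => Some((a, s)))

// Balanced parentheses: parens ::= '(' parens ')' | empty.
// With strict arguments and a plain `val`, the right-hand side would
// read `parens` while it is still uninitialized; `lazy val` plus the
// by-name parameters defer that reference until parse time.
lazy val parens: P[Unit] =
  (char('(') ~> parens ~> char(')') ~> pure(())) | pure(())
```

Running `parens.run("(())")` consumes the whole input; on unbalanced input the empty alternative succeeds without consuming.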
-
What do I need to start writing an Extension or Template Haskell?
Depends on your existing knowledge of Haskell and stuff like monads, applicatives, etc. I haven't gotten around to writing a tutorial for parser combinators in Haskell yet (I'd actually like to write a book about them at some point), but I do have this wiki ( https://github.com/j-mie6/Parsley/wiki/Guide-to-Parser-Combinators ) for my parser combinator library in Scala, which might be of some help. A Haskell version of a lot of the later material there can be found in this paper: https://dl.acm.org/doi/10.1145/3471874.3472984. The paper assumes some familiarity with parser combinators; the wiki does not (but is written in Scala): it's the resource I use to teach my 2nd-year undergrads about parser combinators for their compilers project. It doesn't talk about monads/applicatives at all. I'm more than happy to answer any questions you have about either of those two.
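The combinator style such guides teach can be sketched in a few lines of dependency-free Scala - again a toy type, not the library's actual API:

```scala
// A sketch of the parser-combinator style, in plain Scala with no
// library dependency (this is not parsley's actual API).
final case class Parser[A](run: String => Option[(A, String)]) {
  def map[B](f: A => B): Parser[B] =
    Parser(s => run(s).map { case (a, rest) => (f(a), rest) })
  // Sequencing: run `this`, then `that`, and pair up the results.
  def ~[B](that: Parser[B]): Parser[(A, B)] = Parser { s =>
    run(s).flatMap { case (a, r1) =>
      that.run(r1).map { case (b, r2) => ((a, b), r2) }
    }
  }
}

val digit: Parser[Char] =
  Parser(s => if (s.nonEmpty && s.head.isDigit) Some((s.head, s.tail)) else None)

// Zero-or-more repetition, defined by plain recursion on the input.
def many[A](p: Parser[A]): Parser[List[A]] = Parser { s =>
  p.run(s) match {
    case Some((a, rest)) => many(p).run(rest).map { case (as, r) => (a :: as, r) }
    case None            => Some((Nil, s))
  }
}

// A number parser: one digit followed by any more, folded into an Int.
val number: Parser[Int] =
  (digit ~ many(digit)).map { case (d, ds) => (d :: ds).mkString.toInt }
```

`number.run("123abc")` parses the leading digits and leaves the rest of the input untouched.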
Apache Spark
-
"xAI will open source Grok"
-
Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that uses the JVM. I previously used Scala, which is a JVM language, to use Apache Spark for Data Engineering workloads, but this is for another post 😉.
-
🦿🛴Smarcity garbage reporting automation w/ ollama
Consume the data into third-party software for analysis/data science (e.g. OpenSearch, Apache Spark, or Apache Pinot), GIS systems (so you can put reports on a map), or any ticket-management system
-
Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
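The event-driven pipeline shape those tools share can be sketched with plain JVM blocking queues standing in for Kafka topics - illustrative only, with no Kafka/Spark/Flink API used:

```scala
import java.util.concurrent.LinkedBlockingQueue

// The "post office" pipeline shape: a source stage emits events, a
// transform stage enriches them, and the main thread acts as the sink.
// All names here are illustrative, not any framework's API.
val rawEvents = new LinkedBlockingQueue[String]()
val enriched  = new LinkedBlockingQueue[String]()

val source = new Thread(() => (1 to 3).foreach(i => rawEvents.put(s"event-$i")))
val transform = new Thread(() =>
  (1 to 3).foreach(_ => enriched.put(rawEvents.take().toUpperCase))
)
source.start(); transform.start()

// Sink: drain the final stage (take() blocks until each event arrives).
val delivered = (1 to 3).map(_ => enriched.take()).toList
source.join(); transform.join()
```

Each stage only sees its input queue, so stages can be scaled or swapped independently - the same property the heavyweight tools provide at cluster scale.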
-
Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
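The three pillars described above can be sketched as Scala traits - a conceptual model only, not SeaTunnel's actual connector API:

```scala
// Sources, transforms, and sinks modelled as plain Scala traits.
trait Source[A]       { def read(): Seq[A] }
trait Transform[A, B] { def apply(a: A): B }
trait Sink[B]         { def write(b: B): Unit }

// A pipeline is just: read from the source, map the transform, push
// each result into the sink - whichever engine runs underneath.
def runPipeline[A, B](src: Source[A], t: Transform[A, B], sink: Sink[B]): Unit =
  src.read().map(t.apply).foreach(sink.write)

// Toy usage: an in-memory source, an uppercase transform, a list sink.
val collected = scala.collection.mutable.ListBuffer.empty[String]
runPipeline(
  new Source[String]            { def read() = Seq("a", "b") },
  new Transform[String, String] { def apply(a: String) = a.toUpperCase },
  new Sink[String]              { def write(b: String) = { collected += b; () } }
)
```

The abstract-API point is visible here: `runPipeline` never names an engine, so the same source/transform/sink triple could be handed to any backend.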
-
Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
-
Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
-
Spark – A micro framework for creating web applications in Kotlin and Java
A JVM based framework named "Spark", when https://spark.apache.org exists?
-
Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
-
PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request merged into the apache/spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
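For context, the documented way to target a Kubernetes master from the command line looks roughly like this; the API-server host, port, and container image are placeholders, not real values:

```shell
# Hedged sketch of spark-submit against a Kubernetes cluster manager,
# following Spark's "Running on Kubernetes" documentation.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name my-pyspark-job \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.executor.instances=2 \
  local:///opt/spark/examples/src/main/python/pi.py
```

The `k8s://` prefix on the master URL is what selects the Kubernetes scheduler backend; the same `spark.kubernetes.*` settings can be passed to a SparkSession builder instead of spark-submit.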
What are some alternatives?
FastParse - Writing Fast Parsers Fast in Scala
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL
scala.meta - Library to read, analyze, transform and generate Scala programs
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
feel-scala - FEEL parser and interpreter written in Scala
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
scallion - LL(1) parser combinators in Scala
Scalding - A Scala API for Cascading
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
Weka - Collection of machine learning algorithms for data mining tasks, written in Java