It seems to me that the parsing code in clang is spread across multiple files that together add up to well over 3000 lines: https://github.com/llvm/llvm-project/tree/llvmorg-12.0.1/cla...
I know that SPARK's use of docstrings influenced PLY.
PLY doesn't use Earley, but "Earley" does come up in the show notes of an interview with Beazley, PLY's author, at https://www.pythonpodcast.com/episode-95-parsing-and-parsers... . No transcript, and I'm not going to listen to it just to figure out the context.
https://github.com/lark-parser/lark "implements both Earley(SPPF) and LALR(1)".
Kegler, the author of that timeline I linked to, is the author of Marpa. Home page is http://savage.net.au/Marpa.html . The most recent HN comments about it are from a year ago, at https://news.ycombinator.com/item?id=24321395 .
When I switched from ANTLR to a hand-written parser for Adama ( http://www.adama-lang.org/ ), I felt way better about things. I was able to get sane error messages, and I could better annotate my syntax tree with comments and line/char numbers.
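The line/char annotation part is cheap to get in a hand-written parser: the scanner just counts newlines as it advances, and every error message and tree node gets a position for free. A minimal sketch in Python (all names here are illustrative, not taken from Adama or any real project):

```python
# Minimal hand-written parser skeleton that tracks line/column
# positions, so errors are human-readable and tree nodes carry
# source locations. Illustrative only.

class ParseError(Exception):
    pass

class Parser:
    def __init__(self, text):
        self.text = text
        self.pos = 0
        self.line = 1
        self.col = 1

    def peek(self):
        return self.text[self.pos] if self.pos < len(self.text) else ""

    def advance(self):
        ch = self.text[self.pos]
        self.pos += 1
        if ch == "\n":
            self.line, self.col = self.line + 1, 1
        else:
            self.col += 1
        return ch

    def expect(self, ch):
        if self.peek() != ch:
            # A sane error message: position plus expected vs. found.
            raise ParseError(
                f"{self.line}:{self.col}: expected {ch!r}, found {self.peek()!r}")
        return self.advance()

    def parse_number(self):
        # The returned node is annotated with where it started.
        start = (self.line, self.col)
        digits = ""
        while self.peek().isdigit():
            digits += self.advance()
        if not digits:
            raise ParseError(f"{self.line}:{self.col}: expected a number")
        return {"value": int(digits), "pos": start}
```

Generated parsers can be coaxed into producing this too, but with a hand-written one the position plumbing is just ordinary code you control.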
A killer feature for a parser generator would be the ability to auto-generate a pretty printer which requires stuffing comments into the tree as a "meta token".
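One common way to do the "meta token" trick is to have the tokenizer buffer comments instead of discarding them, and attach the buffered comments to the next real token; the pretty printer then re-emits them when it reaches that token. A toy sketch of the idea (my own illustration, not any particular tool's API):

```python
# Sketch of comments as "meta tokens": the tokenizer keeps comments
# and attaches each one to the next real token instead of dropping
# them, so a pretty printer can emit them again. Illustrative only.
import re

# Matches a #-comment, a word, or any single punctuation character.
TOKEN_RE = re.compile(r"\s*(#[^\n]*|\w+|[^\s\w])")

def tokenize(src):
    tokens, pending_comments = [], []
    for m in TOKEN_RE.finditer(src):
        text = m.group(1)
        if text.startswith("#"):
            pending_comments.append(text)   # buffer the comment
        else:
            tokens.append({"text": text, "comments": pending_comments})
            pending_comments = []           # comments now belong to this token
    return tokens

def pretty_print(tokens):
    out = []
    for tok in tokens:
        for c in tok["comments"]:
            out.append(c)                   # re-emit attached comments
        out.append(tok["text"])
    return "\n".join(out)
```

The hard part in a real language is deciding *which* node a comment belongs to (trailing same-line comments vs. leading ones), but the attach-to-neighbor scheme is the usual starting point.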
I implemented an unparse function in IParse, which is not a parser generator but a parser that interprets a grammar. See for example https://github.com/FransFaase/IParse/blob/master/software/c_... where symbols starting with a backslash act as a kind of white-space terminal during the unparse. For example, \inc stands for incrementing the indentation, while \dec decrements it. The \s indicates that a space should be included at the given location.
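Those white-space directives amount to a tiny stack-free interpreter over the output stream. A toy version of the idea in Python (my own sketch, not IParse's code; the \n "newline at current indentation" directive is my addition for the example):

```python
# Toy interpreter for unparse white-space directives in the spirit
# of the ones described above: \inc raises the indentation level,
# \dec lowers it, \s inserts a space, and \n (my own addition here)
# starts a new line at the current indentation. Not IParse's code.
def unparse(items, indent_width=4):
    out, indent = [], 0
    for item in items:
        if item == r"\inc":
            indent += 1
        elif item == r"\dec":
            indent -= 1
        elif item == r"\s":
            out.append(" ")
        elif item == r"\n":
            out.append("\n" + " " * (indent * indent_width))
        else:
            out.append(item)                # an ordinary terminal
    return "".join(out)

# Example: a block with one indented statement.
block = ["{", r"\inc", r"\n", "x", r"\s", "=", r"\s", "1", ";",
         r"\dec", r"\n", "}"]
```

Here `unparse(block)` yields the text `{`, an indented `x = 1;`, and a closing `}` back at the outer level.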
The Ruby yacc file is scary to look at: 13+ thousand lines in a single file.
Would it be better hand-rolled, where they could have abstracted and organized some things, or does it all make sense in its current format once you are familiar with it?
https://github.com/ruby/ruby/blob/v3_0_2/parse.y
Agreed! I would say that parser combinators are the sweet spot and the right choice in most cases.
Scala has them as well, e.g.: https://com-lihaoyi.github.io/fastparse/
And the good thing is, you don't have to learn a completely new language/syntax: you use the host language's own syntax, and you get full IDE support as well.
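The "host language's syntax" point is easy to see from how little machinery combinators need. A tiny hand-rolled set in Python just to show the flavor (fastparse itself is Scala, and real libraries add error reporting, backtracking control, etc.):

```python
# Tiny hand-rolled parser combinators. Each parser is a plain
# function: text -> (value, remaining_text), or None on failure.
# Illustrative sketch, not any real library's API.

def char(c):
    def parse(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parse

def seq(*parsers):
    # Run parsers in order; fail if any fails.
    def parse(s):
        values = []
        for p in parsers:
            r = p(s)
            if r is None:
                return None
            v, s = r
            values.append(v)
        return values, s
    return parse

def alt(*parsers):
    # Return the first parser that succeeds.
    def parse(s):
        for p in parsers:
            r = p(s)
            if r is not None:
                return r
        return None
    return parse

def many(p):
    # Apply p zero or more times, collecting results.
    def parse(s):
        values = []
        while (r := p(s)) is not None:
            v, s = r
            values.append(v)
        return values, s
    return parse

digit = alt(*[char(d) for d in "0123456789"])
number = many(digit)
```

Because the grammar is just function composition in the host language, jump-to-definition, refactoring, and type checking all work on it for free.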
Just read the code for an existing one, like:
https://github.com/dlang/dmd/blob/master/src/dmd/cparse.d
which is a C parser. It's not hard to follow.