-
You might find the Nano Rust example in the repository interesting! It's a lexer, parser, and interpreter for a simple dynamically-typed Rust-inspired language. It's not intended as a full-blown project template, but it's probably more than enough to get you started.
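For a flavour of the combinator style before diving into that example, here's a rough sketch of a tiny Chumsky arithmetic parser (not the Nano Rust example itself; it assumes a 0.9-style API, so combinator names may differ between versions):

```rust
use chumsky::prelude::*;

// A tiny arithmetic parser: integers, unary minus, `+`/`-`, and parentheses.
fn parser() -> impl Parser<char, i64, Error = Simple<char>> {
    recursive(|expr| {
        let int = text::int(10)
            .map(|s: String| s.parse::<i64>().unwrap())
            .padded();

        let atom = int
            .or(expr.delimited_by(just('('), just(')')))
            .padded();

        let unary = just('-')
            .repeated()
            .then(atom)
            .foldr(|_minus, rhs| -rhs);

        unary
            .clone()
            .then(
                just('+').to(1i64)
                    .or(just('-').to(-1i64))
                    .then(unary)
                    .repeated(),
            )
            .foldl(|lhs, (sign, rhs)| lhs + sign * rhs)
    })
    .then_ignore(end())
}

fn main() {
    match parser().parse("1 + -(2 + 3)") {
        Ok(n) => println!("= {}", n),
        Err(errs) => errs.into_iter().for_each(|e| println!("parse error: {}", e)),
    }
}
```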
-
Thanks! It's worth noting that the diagnostics are not generated by Chumsky itself, but by another crate of mine named Ariadne. However, they can be used together if desired.
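Roughly, building a report with Ariadne looks like this (a minimal sketch based on the 0.1-era API; the source text and byte spans are made up for illustration and the exact signatures may differ in newer releases):

```rust
use ariadne::{Label, Report, ReportKind, Source};

fn main() {
    // Made-up source text and byte spans, purely for illustration.
    let src = "def five = match () in {\n\t() => 5,\n\t() => \"5\",\n}";

    Report::build(ReportKind::Error, (), 32)
        .with_message("Incompatible match arm types")
        .with_label(Label::new(32..33).with_message("This arm is of type Nat"))
        .with_label(Label::new(42..45).with_message("This arm is of type Str"))
        .finish()
        .print(Source::from(src))
        .unwrap();
}
```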
-
I use the author's pretty error rendering crate: ariadne, in statix, and it is a delight. Comfy API, loads of customization opts, and very pretty.
-
Nice to see support for error recovery with parser combinators! I never got to the point of adding it to combine, as I swapped my language parser(s) over to LALRPOP and implemented error recovery there instead.
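For anyone curious what recovery looks like in the combinator style, here's a rough sketch using Chumsky's recovery combinators (0.9-style names like `recover_with` and `nested_delimiters`; the grammar and the exact recovery behaviour here are only illustrative):

```rust
use chumsky::prelude::*;

#[derive(Clone, Debug)]
enum Expr {
    Num(f64),
    List(Vec<Expr>),
    // Placeholder node for regions the parser couldn't make sense of,
    // so later stages still receive a complete tree to work with.
    Error,
}

fn parser() -> impl Parser<char, Expr, Error = Simple<char>> {
    recursive(|expr| {
        let num = text::int(10)
            .map(|s: String| Expr::Num(s.parse().unwrap()))
            .padded();

        let list = expr
            .separated_by(just(','))
            .delimited_by(just('['), just(']'))
            .map(Expr::List)
            // On failure inside a bracketed region, skip to the matching `]`
            // (tracking nested `(`/`)` too) and produce an error node
            // instead of aborting the whole parse.
            .recover_with(nested_delimiters('[', ']', [('(', ')')], |_span| Expr::Error));

        num.or(list).padded()
    })
    .then_ignore(end())
}

fn main() {
    // `parse_recovery` yields both a best-effort output and the errors found.
    let (ast, errs) = parser().parse_recovery("[1, 2, oops, 3]");
    println!("ast: {:?}", ast);
    for err in errs {
        println!("parse error: {}", err);
    }
}
```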
-
I saw the performance comparison against pom. Unfortunately, pom is quite slow compared to a handwritten parser, as it boxes most (all?) of its parsers, so you may want to compare against a handwritten parser, or at least something in the same ballpark. (For reference, combine's JSON benchmark on the same data is about 6x faster with "good errors"; when optimized to work on &str-like input it is about 12x faster, and nom or a handwritten parser may be another 10-20% faster than that, if I remember correctly.) From a brief skim of the code, though, I don't see anything that would stop it from at least closing that gap.
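To make the boxing point concrete, here's a toy illustration (not pom's or chumsky's actual internals) of the difference between a boxed, dynamically-dispatched parser and a statically-dispatched one the compiler can monomorphise and inline:

```rust
// Toy illustration only; not pom's or chumsky's actual internals.

// "Boxed" style: every combinator heap-allocates a trait object and every
// call goes through a vtable, which limits inlining.
type DynParser<'a, T> = Box<dyn Fn(&'a str) -> Option<(T, &'a str)> + 'a>;

fn dyn_digit<'a>() -> DynParser<'a, char> {
    Box::new(|input: &'a str| {
        let c = input.chars().next()?;
        if c.is_ascii_digit() {
            Some((c, &input[c.len_utf8()..]))
        } else {
            None
        }
    })
}

fn dyn_many<'a, T: 'a>(p: DynParser<'a, T>) -> DynParser<'a, Vec<T>> {
    Box::new(move |mut input: &'a str| {
        let mut out = Vec::new();
        while let Some((item, rest)) = p(input) {
            out.push(item);
            input = rest;
        }
        Some((out, input))
    })
}

// "Static" style: the composed parser is one concrete type, so there is no
// per-combinator allocation and calls can be inlined.
fn static_digit(input: &str) -> Option<(char, &str)> {
    let c = input.chars().next()?;
    if c.is_ascii_digit() {
        Some((c, &input[c.len_utf8()..]))
    } else {
        None
    }
}

fn static_many<'a, T>(
    p: impl Fn(&'a str) -> Option<(T, &'a str)>,
) -> impl Fn(&'a str) -> Option<(Vec<T>, &'a str)> {
    move |mut input: &'a str| {
        let mut out = Vec::new();
        while let Some((item, rest)) = p(input) {
            out.push(item);
            input = rest;
        }
        Some((out, input))
    }
}

fn main() {
    let boxed = dyn_many(dyn_digit());
    let inlined = static_many(static_digit);
    println!("{:?}", boxed("123abc"));   // Some((['1', '2', '3'], "abc"))
    println!("{:?}", inlined("123abc")); // Some((['1', '2', '3'], "abc"))
}
```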
-
I switched to LALRPOP for gluon but I still use combine in https://github.com/mitsuhiko/redis-rs and some other projects which need to parse "protocols" (less need for good error messages/error recovery and more need for speed).
-
Additionally, I just pushed some recent work on my own language, Tao, which uses Chumsky. Hopefully some nice proof that it's a viable API for parsing non-trivial syntax.