rebar VS aho-corasick

Compare rebar vs aho-corasick and see what their differences are.

rebar

A biased barometer for gauging the relative speed of some regex engines on a curated set of tasks. (by BurntSushi)
                rebar                aho-corasick
Mentions        22                   21
Stars           197                  950
Growth          -                    -
Activity        8.5                  7.2
Latest commit   about 1 month ago    about 1 month ago
Language        Python               Rust
License         The Unlicense        The Unlicense
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

rebar

Posts with mentions or reviews of rebar. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-16.
  • Knuth–Morris–Pratt Illustrated
    2 projects | news.ycombinator.com | 16 Apr 2024
    https://github.com/BurntSushi/rebar

    For regex, you can't really distill it down to one single fastest algorithm.

    It's somewhat similar even for substring search. But certainly, the fastest algorithms are going to be the ones that make use of SIMD in some way.
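
    For a concrete example, BurntSushi's `memchr` crate exposes SIMD-accelerated substring search; a minimal usage sketch (the strings here are arbitrary):

    ```rust
    // Single-substring search via the memchr crate's memmem module,
    // which uses SIMD acceleration where the platform supports it.
    use memchr::memmem;

    fn main() {
        let haystack = b"the quick brown fox jumps over the lazy dog";
        // find returns the byte offset of the first occurrence, if any.
        assert_eq!(memmem::find(haystack, b"brown"), Some(10));
        assert_eq!(memmem::find(haystack, b"cat"), None);
    }
    ```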

  • Regex character "$" doesn't mean "end-of-string"
    1 project | news.ycombinator.com | 20 Mar 2024
    I'll add two notes to this:

    * Finite automata based regex engines don't necessarily have to be slower than backtracking engines like PCRE. Go's regexp is slower in a lot of cases in practice, but that is more a property of its implementation than of the underlying concept. See: https://github.com/BurntSushi/rebar?tab=readme-ov-file#summa... --- Given "sufficient" implementation effort, backtrackers and finite automata engines can both perform very well, with one beating the other in some cases but not others. It depends.

    * A fun fact: if you're iterating over all matches in a haystack (e.g., Go's `FindAll` routines), then you're susceptible to O(m * n^2) search time. This applies to all regex engines that implement some kind of leftmost match priority. See https://github.com/BurntSushi/rebar?tab=readme-ov-file#quadr... for a more detailed explanation of this point; a sketch of the loop's shape follows below.
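
    To make that shape concrete, here is a hedged sketch of the all-matches loop (not any particular engine's internals): each iteration re-runs an unanchored leftmost search starting at the end of the previous match, so O(n) matches times an O(m * n) worst-case search gives O(m * n^2) overall.

    ```rust
    // A generic all-matches loop; `search(at)` stands in for a single
    // unanchored leftmost search and returns the next match span, if any.
    fn find_all(
        search: impl Fn(usize) -> Option<(usize, usize)>,
        haystack_len: usize,
    ) -> Vec<(usize, usize)> {
        let mut matches = Vec::new();
        let mut at = 0;
        while at <= haystack_len {
            match search(at) {
                Some((start, end)) => {
                    matches.push((start, end));
                    // Resume at the end of the match; bump by one for an
                    // empty match so the loop always makes progress.
                    at = if end == start { end + 1 } else { end };
                }
                None => break,
            }
        }
        matches
    }
    ```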

  • Re2c
    4 projects | news.ycombinator.com | 22 Feb 2024
    They are extremely fast too: https://github.com/BurntSushi/rebar?tab=readme-ov-file#summa...
  • C# Regex engine is now 3rd fastest in the world
    3 projects | news.ycombinator.com | 31 Dec 2023
    I love the flourish of "in the world." I had never thought about it that way. It makes me wonder whether there are any regex engines not yet in rebar that could conceivably be competitive with the top engines in rebar. I do maintain a WANTED list of engines[1], but none of them jump out at me except maybe Nim's engine.

    Of course, there's also the question of whether the benchmarks are representative enough to make such extrapolations. I don't have a good answer for that one. All models are wrong, but some are useful.

    [1]: https://github.com/BurntSushi/rebar/blob/96c6779b7e1cdd850b8...

  • Ugrep – a more powerful, ultra fast, user-friendly, compatible grep
    27 projects | news.ycombinator.com | 30 Dec 2023
    I'm the author of ripgrep and its regex engine.

    Your claim is true to a first approximation. But greps are line oriented, and that means there are optimizations that can be done that are hard to do in a general regex library.

    If you read my commentary in the ripgrep discussion above, you'll note that it isn't just about the benchmarks themselves being accurate, but the model they represent. Nevertheless, I linked the hypergrep benchmarks not because of Hyperscan, but because they were done by someone who isn't the author of either ripgrep or ugrep.

    As for regex benchmarks, you'll want to check out rebar: https://github.com/BurntSushi/rebar

    You can see my full thoughts on benchmark design and philosophy in the rebar documentation. Be warned, though: you'll need some time.

    There is a fork of ripgrep with Hyperscan support: https://sr.ht/~pierrenn/ripgrep/

  • Translations of Russ Cox's Thompson NFA C Program to Rust
    3 projects | news.ycombinator.com | 2 Nov 2023
    Before getting to your actual question, it might help to look at a regex benchmark that compares engines (perhaps JITs are not the fastest in all cases!): https://github.com/BurntSushi/rebar

    In particular, the `regex-lite` engine is strictly just the PikeVM without any frills. No prefilters or literal optimizations. No other engines. Just the PikeVM.

    As to your question, the PikeVM is, essentially, an NFA simulation. The PikeVM just refers to the layering of capture state on top of the NFA simulation. But you can peel back the capture state and you're still left with a slow NFA simulation. I mention this because you seem to compare the PikeVM with "big graph structures with NFAs/DFAs." But the PikeVM is using a big NFA graph structure.

    At a very high level, the time complexity of a Thompson NFA simulation and a DFA hints strongly at the answer to your question: searching with a Thompson NFA has worst case O(m*n) time while a DFA has worst case O(n) time, where m is proportional to the size of the regex and n is proportional to the size of the haystack. That is, for each character of the haystack, the Thompson NFA is potentially doing up to `m` amount of work. And indeed, in practice, it really does need to do some work for each character.

    A Thompson NFA simulation needs to keep track of every state it is simultaneously in at any given point. And in order to compute the transition function, you need to compute it for every state you're in. The epsilon transitions that are added as part of the Thompson NFA construction (and are, crucially, what make building a Thompson NFA so fast) exacerbate this. So what happens is that you wind up chasing epsilon transitions over and over for each character.

    A DFA pre-computes these epsilon closures during powerset construction. Of course, that takes worst case O(2^m) time, which is why real DFAs aren't really used in general purpose engines. Instead, lazy DFAs are used.
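
    A minimal sketch of that simulation (the state representation and the example regex are mine, not the regex crate's): note how the epsilon closure has to be re-chased after every haystack byte, which is exactly the per-character O(m) work described above.

    ```rust
    // A toy Thompson NFA simulation: track the set of live states, and for
    // each haystack byte, step every live state and re-chase epsilons.
    use std::collections::HashSet;

    #[derive(Clone, Copy)]
    enum State {
        Byte(u8, usize),     // consume this byte, then go to the given state
        Split(usize, usize), // epsilon transitions to two states
        Match,               // accepting state
    }

    // Expand `set` with everything reachable via epsilon transitions.
    fn epsilon_closure(nfa: &[State], set: &mut HashSet<usize>) {
        let mut stack: Vec<usize> = set.iter().copied().collect();
        while let Some(id) = stack.pop() {
            if let State::Split(a, b) = nfa[id] {
                for next in [a, b] {
                    if set.insert(next) {
                        stack.push(next);
                    }
                }
            }
        }
    }

    // Does the NFA accept the entire haystack?
    fn is_match(nfa: &[State], start: usize, haystack: &[u8]) -> bool {
        let mut current = HashSet::from([start]);
        epsilon_closure(nfa, &mut current);
        for &byte in haystack {
            let mut next = HashSet::new();
            for &id in &current {
                if let State::Byte(b, to) = nfa[id] {
                    if b == byte {
                        next.insert(to);
                    }
                }
            }
            epsilon_closure(nfa, &mut next); // re-done for every byte
            current = next;
        }
        current.iter().any(|&id| matches!(nfa[id], State::Match))
    }

    fn main() {
        // Hand-built NFA for the regex `a(b|c)`.
        let nfa = vec![
            State::Byte(b'a', 1), // 0: 'a' -> 1
            State::Split(2, 3),   // 1: epsilon -> 2 or 3
            State::Byte(b'b', 4), // 2: 'b' -> 4
            State::Byte(b'c', 4), // 3: 'c' -> 4
            State::Match,         // 4: accept
        ];
        assert!(is_match(&nfa, 0, b"ab"));
        assert!(is_match(&nfa, 0, b"ac"));
        assert!(!is_match(&nfa, 0, b"ad"));
    }
    ```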

    As for things like V8, they are backtrackers. They don't need to keep track of every state they're simultaneously in because they don't mind taking a very long time to complete some searches. But in practice, this can make them much faster for some inputs.

    Feel free to ask more questions. I'll stop here.

  • Compile time regular expression in C++
    5 projects | news.ycombinator.com | 12 Sep 2023
    I'd love for someone to add this to rebar[1] so that we can get a good sense of how well it does against other general purpose regex engines. It will be a little tricky to add (since the build step will require emitting a C++ program and compiling it), but it should be possible.

    [1]: https://github.com/BurntSushi/rebar

  • Stringzilla: Fastest string sort, search, split, and shuffle using SIMD
    9 projects | news.ycombinator.com | 29 Aug 2023
  • Rust vs. Go in 2023
    9 projects | news.ycombinator.com | 13 Aug 2023
    https://github.com/BurntSushi/rebar#summary-of-search-time-b...

    Further, Go's refusal to have macros means that many libraries use reflection instead, which often makes those parts of a Go program perform no better than Python, and in some cases worse. Rust can generate all of that at compile time with macros and optimize it with LLVM like any other code. Some Go libraries go to enormous lengths to reduce reflection overhead, but that's hard to justify for most things, and hard to maintain even once done. The legendary https://github.com/segmentio/encoding seems to be abandoned now, and progress on Go JSON in general seems to have died with https://github.com/go-json-experiment/json .
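
    As one concrete Rust-side illustration (my example, not one from the comment): serde's derive macro generates the serialization code during compilation, so there is no reflection at runtime and LLVM optimizes the generated code like anything hand-written. This assumes `serde` (with the `derive` feature) and `serde_json` as dependencies.

    ```rust
    use serde::Serialize;

    // The derive macro expands to a Serialize impl at compile time.
    #[derive(Serialize)]
    struct Event {
        name: String,
        count: u64,
    }

    fn main() {
        let e = Event { name: "page_view".into(), count: 42 };
        // No runtime reflection: the generated impl walks the fields directly.
        println!("{}", serde_json::to_string(&e).unwrap());
    }
    ```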

    Many people claiming their projects are IO-bound are just assuming that's the case because most of the time is spent in their input reader. If they actually measured they'd see it's not even saturating a 100Mbps link, let alone 1-100Gbps, so by definition it is not IO-bound. Even if they didn't need more throughput than that, they still could have put those cycles to better use or at worst saved energy. Isn't that what people like to say about Go vs Python, that Go saves energy? Sure, but it still burns a lot more energy than it would if it had macros.
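
    One quick way to check that assumption is to measure the rate at which the program actually consumes its input and compare it to the link or disk speed. A minimal sketch (the file name and buffer size are arbitrary placeholders):

    ```rust
    use std::io::Read;
    use std::time::Instant;

    fn main() -> std::io::Result<()> {
        let mut file = std::fs::File::open("input.bin")?; // placeholder input
        let mut buf = vec![0u8; 1 << 20];
        let mut total: u64 = 0;
        let start = Instant::now();
        loop {
            let n = file.read(&mut buf)?;
            if n == 0 {
                break;
            }
            total += n as u64;
            // ... the actual processing would happen here ...
        }
        let secs = start.elapsed().as_secs_f64();
        let mbps = (total as f64 * 8.0) / (secs * 1_000_000.0);
        println!("consumed {total} bytes at {mbps:.1} Mbps");
        // If this is far below what the link or disk can deliver, the
        // workload is CPU-bound, not IO-bound.
        Ok(())
    }
    ```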

    Rust can use state-of-the-art memory allocators like mimalloc, while Go is still stuck on an old fork of tcmalloc -- and not tcmalloc in its original C, but tcmalloc transpiled to Go, where it optimizes much less than LLVM would optimize it. (Many people benchmarking them forget to even try substitute allocators in Rust, so they're actually underestimating just how much faster Rust is.)
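
    For reference, swapping in such an allocator on the Rust side takes a few lines (assuming the `mimalloc` crate as a dependency):

    ```rust
    use mimalloc::MiMalloc;

    // Route every heap allocation in the program through mimalloc.
    #[global_allocator]
    static GLOBAL: MiMalloc = MiMalloc;

    fn main() {
        // All allocations below now use mimalloc instead of the default.
        let v: Vec<u64> = (0..1_000_000).collect();
        println!("{}", v.len());
    }
    ```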

    Finally, even Go Generics have failed to improve performance, and in many cases can make it unimaginably worse through -- I kid you not -- global lock contention hidden behind innocent type assertion syntax: https://planetscale.com/blog/generics-can-make-your-go-code-...

    It's not even close. There are many reasons Go is a lot slower than Rust and many of them are likely to remain forever. Most of them have not seen meaningful progress in a decade or more. The GC has improved, which is great, but that's not even a factor on the Rust side.

  • A Regex Barometer
    1 project | /r/hypeurls | 5 Jul 2023

aho-corasick

Posts with mentions or reviews of aho-corasick. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-04.
  • Aho-Corasick Algorithm
    3 projects | news.ycombinator.com | 4 Mar 2024
  • Identifying Rust's collect:<Vec<_>>() memory leak footgun
    3 projects | news.ycombinator.com | 18 Jan 2024
    You can't build the contiguous variant directly from a sequence of patterns. You need some kind of intermediate data structure to incrementally build a trie in memory. The contiguous NFA needs to know the complete picture of each state in order to compress it into memory. It makes decisions like, "if the number of transitions of this state is less than N, then use this representation" or "use the most significant N bits of the state pointer to indicate its representation." It is difficult to do this in an online fashion, and likely impossible to do without some sort of compromise. For example, you don't know how many transitions each state has until you've completed construction of the trie. But how do you build the trie if the state representation needs to know the number of transitions?

    Note that the conversion from a non-contiguous NFA to a contiguous NFA is, relatively speaking, pretty cheap. The only real reason to not use a contiguous NFA is that it can't represent as many patterns as a non-contiguous NFA. (Because of the compression tricks it uses.)

    The interesting bits start here: https://github.com/BurntSushi/aho-corasick/blob/f227162f7c56...
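
    A hedged sketch of the point about the intermediate structure (the types here are illustrative, not the crate's): a pointer-heavy trie whose states can grow at any time during construction, which is exactly the flexibility a compressed contiguous layout gives up.

    ```rust
    // Build an uncompressed trie of patterns. Each state's transition count
    // is only known once the whole trie exists, which is why a contiguous,
    // per-state-compressed representation can't be built in one online pass.
    use std::collections::HashMap;

    #[derive(Default)]
    struct TrieState {
        transitions: HashMap<u8, usize>, // byte -> index of next state
    }

    fn build_trie(patterns: &[&[u8]]) -> Vec<TrieState> {
        let mut states = vec![TrieState::default()]; // index 0 is the root
        for pat in patterns {
            let mut cur = 0;
            for &b in *pat {
                cur = match states[cur].transitions.get(&b).copied() {
                    Some(next) => next,
                    None => {
                        let next = states.len();
                        states.push(TrieState::default());
                        states[cur].transitions.insert(b, next);
                        next
                    }
                };
            }
        }
        // Only now could each state be compacted into a contiguous buffer
        // with a representation chosen from its final transition count.
        states
    }

    fn main() {
        let trie = build_trie(&[b"he", b"she", b"his", b"hers"]);
        println!("{} states", trie.len());
    }
    ```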

  • Ask HN: What's the fastest programming language with a large standard library?
    9 projects | news.ycombinator.com | 26 Dec 2023
    Right. I pointed it out because it isn't just having portable SIMD that makes SIMD optimizations possible. Therefore, the lack of one in Rust doesn't have much explanatory power for why Rust's standard library doesn't contain SIMD optimizations. (It does have some.) A portable API is good enough for things like memchr (well, kinda -- NEON doesn't have `movemask`[1,2]), but not for things like Teddy that do multi-substring search. When you do want to write SIMD across platforms, it's not too hard to define your own bespoke portable API[3].

    I'm basically just pointing out that a portable API is somewhat oversold, because it's not uncommon to need to abandon it, especially for string related ops that make creative use of ISA extensions. And additionally, that Rust unfortunately has other reasons for why std doesn't make as much use of SIMD as it probably should (the core/alloc/std split).

    [1]: https://github.com/BurntSushi/memchr/blob/c6b885b870b6f1b9bf...

    [2]: https://github.com/BurntSushi/memchr/blob/c6b885b870b6f1b9bf...

    [3]: https://github.com/BurntSushi/aho-corasick/blob/f227162f7c56...
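
    For a flavor of what such a bespoke API looks like (names invented for this sketch; see [3] for the real thing): one trait, one generic kernel, and a per-ISA impl behind the scenes.

    ```rust
    // An illustrative "portable vector" trait. Each target ISA (SSE2, AVX2,
    // NEON, ...) would provide its own impl using that ISA's intrinsics,
    // including an emulated movemask where no native one exists.
    trait Vector: Copy {
        const LEN: usize; // vector width in bytes

        unsafe fn splat(byte: u8) -> Self;
        unsafe fn load_unaligned(ptr: *const u8) -> Self;
        unsafe fn cmpeq(self, other: Self) -> Self;
        unsafe fn movemask(self) -> u32;
    }

    // A generic find-byte kernel written once against the trait.
    unsafe fn find_byte<V: Vector>(haystack: &[u8], needle: u8) -> Option<usize> {
        let needles = V::splat(needle);
        let mut at = 0;
        while at + V::LEN <= haystack.len() {
            let chunk = V::load_unaligned(haystack.as_ptr().add(at));
            let mask = chunk.cmpeq(needles).movemask();
            if mask != 0 {
                // Assumes bit i of the mask corresponds to byte i.
                return Some(at + mask.trailing_zeros() as usize);
            }
            at += V::LEN;
        }
        // Scalar fallback for the tail.
        haystack[at..].iter().position(|&b| b == needle).map(|i| at + i)
    }
    ```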

  • Ripgrep is faster than {grep, ag, Git grep, ucg, pt, sift}
    14 projects | news.ycombinator.com | 30 Nov 2023
    Oh I see. Yes, that's what is commonly used in academic publications. But I've yet to see it used in the wild.

    I mentioned exactly that paper (I believe) in my write-up on Teddy: https://github.com/BurntSushi/aho-corasick/tree/master/src/p...

  • how to get the index of substring in source string, support unicode in rust.
    1 project | /r/rust | 5 Nov 2023
    The byte offset (or equivalently in this case, the UTF-8 code unit offset) is almost certainly what you want. See: https://github.com/BurntSushi/aho-corasick/issues/72
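
    A tiny example of why the byte offset is the useful one in Rust (the strings here are arbitrary): string slicing is defined in byte offsets, so a search result can be used directly.

    ```rust
    fn main() {
        let haystack = "αβγ-needle";
        // find returns a byte offset: α, β, γ are 2 bytes each in UTF-8.
        let start = haystack.find("needle").unwrap();
        assert_eq!(start, 7);
        assert_eq!(&haystack[start..], "needle");
    }
    ```
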
  • Aho Corasick Algorithm For Efficient String Matching (Python &amp; Golang Code Examples)
    1 project | /r/programming | 6 Oct 2023
    This is an implementation of the algorithm in Rust as well, if anyone is curious -- though note the code is written for production rather than for teaching.
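
    For the curious, basic usage of the crate looks roughly like this (adapted from its documentation; the patterns and haystack are arbitrary):

    ```rust
    use aho_corasick::AhoCorasick;

    fn main() {
        // Build the automaton once, then find all patterns in a single pass.
        let ac = AhoCorasick::new(["apple", "maple", "snapple"]).unwrap();
        let haystack = "nobody likes maple in their apple flavored snapple";
        for m in ac.find_iter(haystack) {
            println!("pattern {} at {}..{}", m.pattern().as_usize(), m.start(), m.end());
        }
    }
    ```
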
  • When counting lines in Ruby randomly failed our deployments
    4 projects | /r/ruby | 22 Sep 2023
    A similar fix for the aho-corasick Rust crate was made in response.
  • Aho-corasick (and the regex crate) now uses SIMD on aarch64
    2 projects | news.ycombinator.com | 18 Sep 2023
    Teddy is a SIMD accelerated multiple substring matching algorithm. There's a nice description of Teddy here: https://github.com/BurntSushi/aho-corasick/tree/f9d633f970bb...

    It's used in the aho-corasick and regex crates. It now supports SIMD acceleration on aarch64 (including Apple's M1 and M2). There are some nice benchmarks included in the PR demonstrating 2-10x speedups for some searches!

  • Stringzilla: Fastest string sort, search, split, and shuffle using SIMD
    9 projects | news.ycombinator.com | 29 Aug 2023
  • ripgrep is faster than {grep, ag, git grep, ucg, pt, sift}
    8 projects | /r/programming | 24 Mar 2023
    Even putting aside all of that, it might be really hard to add some of the improvements ripgrep has to their engine. The single substring search is probably the lowest hanging fruit, because you can probably isolate that code path pretty well. The multi-substring search is next, but the algorithm is very complicated and not formally described anywhere. The best description of it, Teddy, is probably my own. (I did not invent it.)

What are some alternatives?

When comparing rebar and aho-corasick you can also consider the following projects:

Rebar3 - Erlang build tool that makes it easy to compile and test Erlang applications and releases.

uwu - fastest text uwuifier in the west

cl-ppcre - Common Lisp regular expression library

ripgrep - ripgrep recursively searches directories for a regex pattern while respecting your gitignore

hypergrep - Recursively search directories for a regex pattern

perf-book - The Rust Performance Book

StringZilla - Up to 10x faster strings for C, C++, Python, Rust, and Swift, leveraging SWAR and SIMD on Arm Neon and x86 AVX2 & AVX-512-capable chips to accelerate search, sort, edit distances, alignment scores, etc 🦖

fzf - :cherry_blossom: A command-line fuzzy finder

moar - Moar is a pager. It's designed to just do the right thing without any configuration.

bat - A cat(1) clone with wings.

fd - A simple, fast and user-friendly alternative to 'find'