bstr vs aho-corasick

|  | bstr | aho-corasick |
|---|---|---|
| Mentions | 10 | 21 |
| Stars | 744 | 950 |
| Growth | - | - |
| Activity | 6.7 | 7.2 |
| Last Commit | 2 months ago | about 1 month ago |
| Language | Rust | Rust |
| License | GNU General Public License v3.0 or later | The Unlicense |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bstr
-
We're building a browser when it's supposed to be impossible
Libraries for a lot of this stuff exist (albeit in many cases not very mature yet):
- https://github.com/pop-os/cosmic-text does text layout (which Taffy explicitly considers out of scope)
- https://github.com/AccessKit/accesskit does accessibility
- https://github.com/servo/rust-cssparser does value-agnostic CSS parsing (it will parse the general syntax but leaves value parsing up to the user, meaning you can easily add support for whatever properties you want). Libraries like https://github.com/parcel-bundler/lightningcss implement parsing for the standard CSS properties.
- There are crates like https://github.com/BurntSushi/bstr and https://docs.rs/wtf8/latest/wtf8/ for working with non-unicode text
We are planning to add a C API to Taffy, but tbh I feel like C is not very good for this kind of modularised approach. You really want to be able to expose complex APIs with enforced type safety and this isn't possible with C.
-
Chunking strings in Elixir: how difficult can it be?
As the author of bstr (and of the regex implementation bstr uses for word breaking): it runs in linear time.
NSFL: https://github.com/BurntSushi/bstr/blob/86947727666d7b21c97e...
-
A byte string library for Rust
OsStr uses WTF-8 on Windows, and just represents the raw underlying bytes on Unix.
Byte strings can be WTF-8. They can be anything. The problem is that there is no real way to (easily) get the underlying WTF-8 bytes of an OsStr on Windows. So there's no free conversion to and from byte strings.
I wrote more about this in the bstr docs (and don't miss the link to os_str_bytes): https://docs.rs/bstr/latest/bstr/#file-paths-and-os-strings
I'd be happy to answer more questions if you have them. :-) https://github.com/BurntSushi/bstr/discussions
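To make the "no free conversion" point concrete, here is a minimal stdlib-only sketch. On Unix, `OsStr` exposes its raw bytes via the `OsStrExt` extension trait; on Windows there is no stable equivalent, because the internal WTF-8 representation is not exposed:

```rust
// Unix-only sketch: OsStr exposes its raw bytes directly.
#[cfg(unix)]
fn main() {
    use std::ffi::OsStr;
    use std::os::unix::ffi::OsStrExt;

    let s = OsStr::new("héllo");
    // Free, zero-cost conversion on Unix: an OsStr is just bytes.
    let bytes: &[u8] = s.as_bytes();
    assert_eq!(bytes, "héllo".as_bytes());
    // On Windows there is no stable `as_bytes`: OsStr is WTF-8
    // internally, but that representation is hidden, so getting a
    // byte string out requires an explicit (lossy or encoded) step.
}

#[cfg(not(unix))]
fn main() {}
```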
-
Where is the `str` struct/primitive defined? I am learning Rust, so please don't shoot. :)
Check out bstr, which does this exact thing for its BString and BStr types.
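For illustration, a hedged sketch of the underlying technique (`MyBStr` is my own name, not bstr's type): just as `str` is an unsized type backed by `[u8]` with a UTF-8 invariant, a `BStr`-like type can wrap `[u8]` with no invariant at all, using a `#[repr(transparent)]` pointer cast:

```rust
// Sketch of a BStr-like unsized wrapper around [u8].
#[repr(transparent)]
struct MyBStr([u8]);

impl MyBStr {
    fn new(bytes: &[u8]) -> &MyBStr {
        // Sound because MyBStr is repr(transparent) over [u8]:
        // the fat pointers have identical layout and metadata.
        unsafe { &*(bytes as *const [u8] as *const MyBStr) }
    }

    fn len(&self) -> usize {
        self.0.len()
    }
}

fn main() {
    // Not valid UTF-8, and that's fine: no invariant to uphold.
    let b = MyBStr::new(b"\xFFhello");
    assert_eq!(b.len(), 6);
}
```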
-
Tips when porting C++ programs to Rust
Currently slated for next Monday: https://github.com/BurntSushi/bstr/issues/40
- bstr 1.0 request for comments
-
Let's Stop Ascribing Meaning to Code Points (2017)
This is just an FYI. I don't mean to say much to your overall point, although, as someone else who has spent a lot of time doing Unicode-y things, I do tend to agree with you. I had a very similar discussion a bit ago.[1]
Putting that aside, at least with respect to grapheme segmentation, it might be a little simpler than you think. But maybe only a little. The unicode-segmentation crate also does word segmentation, which is quite a bit more complicated than grapheme segmentation. For example, you can write a regex to parse graphemes without too much fuss[2]. (Compare that with the word segmentation regex, much to my chagrin.[3]) Once you build the regex, actually using it is basically as simple as running the regex.[4]
Sadly, not all regex engines will be able to parse that regex due to its use of somewhat obscure Unicode properties. But the Rust regex crate can. :-)
And of course, this somewhat shifts code size to heap size. So there's that too. But bottom line is, if you have a nice regex engine available to you, you can whip up a grapheme segmenter pretty quickly. And some regex engines even have grapheme segmentation built in via \X.
[1]: https://github.com/BurntSushi/aho-corasick/issues/72
[2]: https://github.com/BurntSushi/bstr/blob/e38e7a7ca986f9499b30...
[3]: https://github.com/BurntSushi/bstr/blob/e38e7a7ca986f9499b30...
[4]: https://github.com/BurntSushi/bstr/blob/e38e7a7ca986f9499b30...
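To make the code point vs. grapheme distinction above concrete, a stdlib-only sketch (it only counts code points; an actual segmenter needs the regex or Unicode tables discussed above):

```rust
fn main() {
    // "é" written as 'e' + combining acute accent:
    // one grapheme cluster, but two code points and three bytes.
    let s = "e\u{0301}";
    assert_eq!(s.chars().count(), 2); // code points
    assert_eq!(s.len(), 3);           // UTF-8 bytes
    // A grapheme segmenter (e.g. a \X-capable regex engine or the
    // unicode-segmentation crate) would report a single cluster here,
    // which is why counting chars() ascribes the wrong "length".
}
```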
-
os_str_bytes now has string types!
This is a great idea. I realize the find implementation is not ideal and have considered bringing in an optional dependency to improve performance. I remembered bstr using two-way search, so I was wondering whether depending on the full crate for searching would be worthwhile, but I see that has changed. Thanks for the tip!
-
What you don't like about Rust?
Fun little nit-pick that does not detract from your overall point: you can actually count graphemes with a regex and that's exactly what bstr does. :-)
aho-corasick
- Aho-Corasick Algorithm
-
Identifying Rust's collect:<Vec<_>>() memory leak footgun
You can't build the contiguous variant directly from a sequence of patterns. You need some kind of intermediate data structure to incrementally build a trie in memory. The contiguous NFA needs to know the complete picture of each state in order to compress it into memory. It makes decisions like, "if the number of transitions of this state is less than N, then use this representation" or "use the most significant N bits of the state pointer to indicate its representation." It is difficult to do this in an online fashion, and likely impossible to do without some sort of compromise. For example, you don't know how many transitions each state has until you've completed construction of the trie. But how do you build the trie if the state representation needs to know the number of transitions?
Note that the conversion from a non-contiguous NFA to a contiguous NFA is, relatively speaking, pretty cheap. The only real reason to not use a contiguous NFA is that it can't represent as many patterns as a non-contiguous NFA. (Because of the compression tricks it uses.)
The interesting bits start here: https://github.com/BurntSushi/aho-corasick/blob/f227162f7c56...
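A minimal stdlib-only sketch of the two-phase idea described above (names like `SparseState` are mine, not the crate's): first build an easy-to-grow map-based trie, and only once every state's transition count is known, compress each state into a flat layout:

```rust
use std::collections::BTreeMap;

// Phase 1: an "online" trie where each state's transitions live in a
// map. Easy to grow incrementally, like a non-contiguous NFA.
#[derive(Default)]
struct SparseState {
    trans: BTreeMap<u8, usize>,
    is_match: bool,
}

fn build_sparse(patterns: &[&str]) -> Vec<SparseState> {
    let mut states = vec![SparseState::default()];
    for pat in patterns {
        let mut cur = 0;
        for &b in pat.as_bytes() {
            let next = states[cur].trans.get(&b).copied();
            cur = match next {
                Some(n) => n,
                None => {
                    states.push(SparseState::default());
                    let n = states.len() - 1;
                    states[cur].trans.insert(b, n);
                    n
                }
            };
        }
        states[cur].is_match = true;
    }
    states
}

// Phase 2: only now that each state's transition count is known can a
// compressed per-state layout be chosen (here, a flat sorted list).
struct DenseState {
    trans: Vec<(u8, usize)>,
    is_match: bool,
}

fn compress(sparse: &[SparseState]) -> Vec<DenseState> {
    sparse
        .iter()
        .map(|s| DenseState {
            trans: s.trans.iter().map(|(&b, &n)| (b, n)).collect(),
            is_match: s.is_match,
        })
        .collect()
}

fn main() {
    let sparse = build_sparse(&["he", "she"]);
    let dense = compress(&sparse);
    // Root has two transitions ('h' and 's'); six states total.
    assert_eq!(dense[0].trans.len(), 2);
    assert_eq!(dense.len(), 6);
}
```

The real contiguous NFA picks among several per-state representations based on these counts, which is exactly the decision that cannot be made while the trie is still growing.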
-
Ask HN: What's the fastest programming language with a large standard library?
Right. I pointed it out because it isn't just about having portable SIMD that makes SIMD optimizations possible. Therefore, the lack of one in Rust doesn't have much explanatory power for why Rust's standard library doesn't contain SIMD. (It does have some.) It's good enough for things like memchr (well, kinda, NEON doesn't have `movemask`[1,2]), but not for things like Teddy that do multi-substring search. When you do want to write SIMD across platforms, it's not too hard to define your own bespoke portable API[3].
I'm basically just pointing out that a portable API is somewhat oversold, because it's not uncommon to need to abandon it, especially for string related ops that make creative use of ISA extensions. And additionally, that Rust unfortunately has other reasons for why std doesn't make as much use of SIMD as it probably should (the core/alloc/std split).
[1]: https://github.com/BurntSushi/memchr/blob/c6b885b870b6f1b9bf...
[2]: https://github.com/BurntSushi/memchr/blob/c6b885b870b6f1b9bf...
[3]: https://github.com/BurntSushi/aho-corasick/blob/f227162f7c56...
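As a rough illustration of the "bespoke portable API" idea (this is my own toy sketch, not the crate's actual abstraction): define one small trait for the operations you need, implement it per ISA behind `cfg` gates, and keep a scalar fallback. Shown here with only the fallback, so it runs anywhere:

```rust
// A tiny bespoke "portable SIMD" layer: one trait, per-ISA impls
// behind cfg, scalar fallback otherwise. Only the fallback is shown.
trait Vector: Copy {
    fn splat(b: u8) -> Self;
    // Bitmask of lanes where the two vectors are equal; this is the
    // movemask-style operation that e.g. NEON lacks natively.
    fn eq_mask(self, other: Self) -> u64;
}

#[derive(Clone, Copy)]
struct Scalar([u8; 8]);

impl Vector for Scalar {
    fn splat(b: u8) -> Self {
        Scalar([b; 8])
    }

    fn eq_mask(self, other: Self) -> u64 {
        let mut m = 0u64;
        for i in 0..8 {
            if self.0[i] == other.0[i] {
                m |= 1 << i;
            }
        }
        m
    }
}

fn main() {
    let hay = Scalar(*b"abcabcab");
    let needle = Scalar::splat(b'a');
    // Lanes 0, 3 and 6 hold 'a'.
    assert_eq!(needle.eq_mask(hay), 0b0100_1001);
}
```

An SSE2 or NEON implementation of the same trait would live behind `#[cfg(target_arch = "...")]`, which is the point: the API is yours, so an algorithm like Teddy can demand exactly the operations it needs.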
-
Ripgrep is faster than {grep, ag, Git grep, ucg, pt, sift}
Oh I see. Yes, that's what is commonly used in academic publications. But I've yet to see it used in the wild.
I mentioned exactly that paper (I believe) in my write-up on Teddy: https://github.com/BurntSushi/aho-corasick/tree/master/src/p...
-
how to get the index of substring in source string, support unicode in rust.
The byte offset (or equivalently in this case, the UTF-8 code unit offset) is almost certainly what you want. See: https://github.com/BurntSushi/aho-corasick/issues/72
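A small stdlib-only illustration of why the byte offset is the useful one: `str::find` reports a byte offset, and that offset can slice the string directly, whereas a code point offset would differ and require an O(n) conversion to use:

```rust
fn main() {
    let hay = "naïve approach";
    // `find` reports a byte offset (equivalently, a UTF-8 code unit
    // offset), which can be used to slice the haystack directly.
    let at = hay.find("approach").unwrap();
    assert_eq!(at, 7); // "naïve " is 7 bytes: 'ï' takes two
    assert_eq!(&hay[at..], "approach");
    // The code point offset would be 6, and is useless for slicing.
    assert_eq!(hay[..at].chars().count(), 6);
}
```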
-
Aho Corasick Algorithm For Efficient String Matching (Python & Golang Code Examples)
Here is an implementation of the algorithm in Rust as well, if anyone is curious. Note that this code is written for production rather than for teaching.
-
When counting lines in Ruby randomly failed our deployments
A similar fix was made to the aho-corasick Rust crate in response.
-
Aho-corasick (and the regex crate) now uses SIMD on aarch64
Teddy is a SIMD accelerated multiple substring matching algorithm. There's a nice description of Teddy here: https://github.com/BurntSushi/aho-corasick/tree/f9d633f970bb...
It's used in the aho-corasick and regex crates. It now supports SIMD acceleration on aarch64 (including Apple's M1 and M2). There are some nice benchmarks included in the PR demonstrating 2-10x speedups for some searches!
- Stringzilla: Fastest string sort, search, split, and shuffle using SIMD
-
ripgrep is faster than {grep, ag, git grep, ucg, pt, sift}
Even putting aside all of that, it might be really hard to add some of the improvements ripgrep has to their engine. The single substring search is probably the lowest hanging fruit, because you can probably isolate that code path pretty well. The multi-substring search is next, but the algorithm is very complicated and not formally described anywhere. The best description of it, Teddy, is probably my own. (I did not invent it.)
What are some alternatives?
miniserve - 🌟 For when you really just want to serve some files over HTTP right now!
uwu - fastest text uwuifier in the west
tonic - A native gRPC client & server implementation with async/await support.
ripgrep - ripgrep recursively searches directories for a regex pattern while respecting your gitignore
rust-memchr - Optimized string search routines for Rust.
perf-book - The Rust Performance Book
cargo-geiger - Detects usage of unsafe Rust in a Rust crate and its dependencies.
fzf - :cherry_blossom: A command-line fuzzy finder
rust - Empowering everyone to build reliable and efficient software.
bat - A cat(1) clone with wings.
rust-semverver - Automatic checking for semantic versioning in library crates
fd - A simple, fast and user-friendly alternative to 'find'