Stringzilla: Fastest string sort, search, split, and shuffle using SIMD

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • StringZilla

    Up to 10x faster strings for C, C++, Python, Rust, and Swift, leveraging SWAR and SIMD on Arm Neon and x86 AVX2 & AVX-512-capable chips to accelerate search, sort, edit distances, alignment scores, etc 🦖

  • Copying my feedback from reddit[1], where I discussed it in the context of the `memchr` crate.[2]

    I took a quick look at your library implementation and have some notes:

    * It doesn't appear to query CPUID, so I imagine the only way it uses AVX2 on x86-64 is if the user compiles with that feature enabled explicitly. (Or uses something like [`x86-64-v3`](https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...).) The `memchr` crate doesn't need that. It will use AVX2 even if the program isn't compiled with AVX2 enabled so long as the current CPU supports it.

    * Your substring routines have multiplicative worst case (that is, `O(m * n)`) running time. The `memchr` crate only uses SIMD for substring search for smallish needles. Otherwise it flips over to Two-Way with a SIMD prefilter. You'll be fine for short needles, but things could go very very badly for longer needles.

    * It seems quite likely that your [confirmation step](https://github.com/ashvardanian/Stringzilla/blob/fab854dc4fd...) is going to absolutely kill performance for even semi-frequently occurring candidates. The [`memchr` crate utilizes information from the vector step to limit where and when it calls `memcmp`](https://github.com/BurntSushi/memchr/blob/46620054ff25b16d22...). Your code might do well in cases where matches are very rare. I took a quick peek at your benchmarks and don't see anything that obviously stresses this particular case. For substring search, the `memchr` crate uses a variant of the "[generic SIMD](http://0x80.pl/articles/simd-strfind.html#first-and-last)" algorithm. Basically, it takes two bytes from the needle, looks for positions where those occur and then attempts to check whether that position corresponds to a match. It looks like your technique uses the first 4 bytes. I suspect that might be overkill. (I did try using 3 bytes from the needle and found that it was a bit slower in some cases.) That is, two bytes is usually enough predictive power to lower the false positive rate enough. Of course, one can write pathological inputs that cause either one to do better than the other. (The `memchr` crate's benchmark suite has a [collection of pathological inputs](https://github.com/BurntSushi/memchr/blob/46620054ff25b16d22...).)
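    For reference, a minimal AVX2 sketch of that first-and-last-byte prefilter with a confirmation step (an illustration of the general technique from the linked article, not memchr's or StringZilla's actual code; it assumes the file is compiled with AVX2 enabled, e.g. `-mavx2`, or sits behind runtime dispatch):

    ```c
    /* Sketch of the "generic SIMD" substring prefilter: compare the first
     * and last needle bytes against 32 candidate positions at once, then
     * confirm survivors with memcmp. Illustration only; the tail is a plain
     * scalar loop and no runtime feature detection is done here. */
    #include <immintrin.h>
    #include <stddef.h>
    #include <string.h>

    const char *find_avx2(const char *hay, size_t n, const char *needle, size_t m) {
        if (m == 0 || n < m) return NULL;
        __m256i first = _mm256_set1_epi8(needle[0]);
        __m256i last  = _mm256_set1_epi8(needle[m - 1]);
        size_t i = 0;
        for (; i + m + 32 <= n; i += 32) {
            __m256i eq_first = _mm256_cmpeq_epi8(first,
                _mm256_loadu_si256((const __m256i *)(hay + i)));
            __m256i eq_last = _mm256_cmpeq_epi8(last,
                _mm256_loadu_si256((const __m256i *)(hay + i + m - 1)));
            unsigned mask = (unsigned)_mm256_movemask_epi8(
                _mm256_and_si256(eq_first, eq_last));
            while (mask) {  /* confirm each surviving candidate */
                unsigned bit = (unsigned)__builtin_ctz(mask);
                if (m <= 2 || memcmp(hay + i + bit + 1, needle + 1, m - 2) == 0)
                    return hay + i + bit;
                mask &= mask - 1;  /* clear the lowest set bit */
            }
        }
        for (; i + m <= n; ++i)  /* scalar tail */
            if (memcmp(hay + i, needle, m) == 0) return hay + i;
        return NULL;
    }
    ```

    Note how `memcmp` only runs at positions where both probe bytes already matched, which is the "use information from the vector step" idea mentioned above.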

    It would actually be possible to hook Stringzilla up to `memchr`'s benchmark suite if you were interested. :-)

    [1]: https://old.reddit.com/r/rust/comments/163ph8r/memchr_26_now...

    [2]: https://github.com/BurntSushi/memchr

  • rust-memchr

    Optimized string search routines for Rust.

  • usearch

    Fast Open-Source Search & Clustering engine × for Vectors & 🔜 Strings × in C++, C, Python, JavaScript, Rust, Java, Objective-C, Swift, C#, GoLang, and Wolfram 🔍

  • > It doesn't appear to query CPUID

    Yes, I'm actually looking for a good way to do it for other projects as well. I've looked into a couple more libs, and here is the best I've come up with so far: https://github.com/unum-cloud/usearch/blob/f942b6f334b31716f...
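    For what it's worth, here is a minimal sketch of one way to do that on x86 with the GCC/Clang builtins (an illustration only, not how StringZilla or usearch actually handle dispatch; `find_serial` and `find_avx2` are hypothetical kernels):

    ```c
    /* Sketch of lazy runtime dispatch using GCC/Clang builtins. The two
     * kernels are hypothetical stand-ins; a production library would likely
     * prefer ifunc resolvers or an atomic for thread safety. */
    #include <stddef.h>

    const char *find_serial(const char *hay, size_t n, const char *needle, size_t m);
    const char *find_avx2(const char *hay, size_t n, const char *needle, size_t m);

    typedef const char *(*find_fn)(const char *, size_t, const char *, size_t);

    static find_fn resolve_find(void) {
    #if defined(__x86_64__) || defined(__i386__)
        __builtin_cpu_init();                 /* populate the feature cache */
        if (__builtin_cpu_supports("avx2"))   /* i.e. query CPUID at run time */
            return find_avx2;
    #endif
        return find_serial;                   /* portable fallback */
    }

    const char *find(const char *hay, size_t n, const char *needle, size_t m) {
        static find_fn impl = NULL;           /* resolved once, cached after */
        if (!impl) impl = resolve_find();
        return impl(hay, n, needle, m);
    }
    ```

    The AVX2 kernel itself still has to be compiled with the feature enabled, e.g. with `__attribute__((target("avx2")))` or from a separate translation unit built with `-mavx2`.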

    > Your substring routines have multiplicative worst case

    Yes, that is true. It's a very simple, stupid trick that just happens to work well for me :)
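    The classic way to surface that worst case in a benchmark is an almost-matching input: a needle of all `a`s ending in `b` against a haystack of all `a`s, so every position nearly matches before failing. A minimal sketch, with arbitrary sizes and a placeholder for the routine being timed:

    ```c
    /* Pathological input for any O(m * n) substring search: the needle
     * almost matches at every haystack position. */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t n = 1u << 20, m = 1u << 10;
        char *hay = malloc(n), *needle = malloc(m);
        memset(hay, 'a', n);
        memset(needle, 'a', m - 1);
        needle[m - 1] = 'b';   /* never matches, but nearly matches everywhere */
        /* ... time your substring routine on (hay, n, needle, m) here ... */
        free(hay);
        free(needle);
        return 0;
    }
    ```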

    > It seems quite likely that your confirmation step

    We have a different library internally at Unum, that avoids this shortcoming. It has a few thousand lines of C++ templates with SIMD intrinsics... and it's definitely more efficient, but the margins aren't always high. So I kept the pure C version with inlined functions as minimal and simple as possible.

    > It would actually be possible to hook Stringzilla up to `memchr`'s benchmark suite if you were interested. :-)

    Yes, that would be a fun thing to do! I haven't had time to look into `memchr` yet, but I'd expect great perf from your lib as well. For me, the state of the art is Intel's Hyperscan, probably the most advanced SIMD library overall, not just for strings. I was very impressed with its perf ~5 years ago. But the repo is about 200K LOC... so get ready to invest a weekend :)

    That said, I'm a bit slammed with work right now, including open-source work. I'm hoping to ship a new major release of UCall this week, and a minor one of USearch :)

  • rebar

    A biased barometer for gauging the relative speed of some regex engines on a curated set of tasks.

  • aho-corasick

    A fast implementation of Aho-Corasick in Rust.

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • vu128: Efficient variable-length integers

    2 projects | news.ycombinator.com | 23 May 2024
  • Regular Expression Matching Can Be Simple and Fast (2007)

    3 projects | news.ycombinator.com | 21 May 2024
  • USearch SQLite Extensions for Vector and Text Search

    1 project | news.ycombinator.com | 22 Feb 2024
  • Identifying Rust's collect::<Vec<_>>() memory leak footgun

    3 projects | news.ycombinator.com | 18 Jan 2024
  • Ask HN: What is the state of art approximate k-NN search algorithm today?

    1 project | news.ycombinator.com | 17 Jan 2024