StringZilla VS SimSIMD

Compare StringZilla and SimSIMD and see how they differ.

StringZilla

Up to 10x faster strings for C, C++, Python, Rust, and Swift, leveraging SWAR and SIMD on Arm Neon and x86 AVX2 & AVX-512-capable chips to accelerate search, sort, edit distances, alignment scores, etc πŸ¦– (by ashvardanian)

SimSIMD

Up to 200x Faster Inner Products and Vector Similarity β€” for Python, JavaScript, Rust, and C, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE πŸ“ (by ashvardanian)
                StringZilla           SimSIMD
Mentions        14                    15
Stars           1,811                 735
Growth          -                     -
Activity        9.8                   9.6
Last commit     15 days ago           1 day ago
Language        C++                   C
License         Apache License 2.0    Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

StringZilla

Posts with mentions or reviews of StringZilla. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-27.
  • Measuring energy usage: regular code vs. SIMD code
    1 project | news.ycombinator.com | 19 Feb 2024
    The 3.5x energy-efficiency gap between serial and SIMD code becomes even larger when

    A. you do byte-level processing instead of float words;

    B. you use embedded, IoT, and other low-energy devices.

    A few years ago I compared the Nvidia Jetson Xavier (long before the Orin release), an Intel-based MacBook Pro with a Core i9, and AVX-512-capable CPUs on substring search benchmarks.

    On Xavier one can quite easily disable/enable cores and reconfigure power usage. At peak I got to 4.2 GB/J, which was an 8.3x improvement in efficiency over LibC in substring search operations. The comparison table is still available in the older README: https://github.com/ashvardanian/StringZilla/tree/v2.0.2?tab=...

  • Show HN: StringZilla v3 with C++, Rust, and Swift bindings, and AVX-512 and NEON
    1 project | news.ycombinator.com | 7 Feb 2024
  • How fast is rolling Karp-Rabin hashing?
    1 project | news.ycombinator.com | 4 Feb 2024
    This is extremely timely! I was working on SIMD variants for collision-resistant rolling-hash variants in the last few weeks for the v3 release of the StringZilla library [1].

    I have tried several 4-way and 8-way parallel variants using AVX-512 DQ instructions for 64-bit integer multiplications [2] as well as using integer FMA instructions on Arm NEON with 32-bit multiplications [3]. The latter needs a better mixing approach to be collision-resistant.

    So far I couldn't exceed 1 GB/s/core [4], so more research is needed. If you have any ideas - I am all ears!

    [1]: https://github.com/ashvardanian/StringZilla/blob/bc1869a8529...

    [2]: https://github.com/ashvardanian/StringZilla/blob/bc1869a8529...

    [3]: https://github.com/ashvardanian/StringZilla/blob/bc1869a8529...

    [4]: https://github.com/ashvardanian/StringZilla/tree/main-dev?ta...
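For readers unfamiliar with Karp-Rabin hashing, here is a minimal scalar sketch of the rolling update that the SIMD variants above evaluate for several windows in parallel. The names and the base constant are illustrative, not taken from StringZilla's code:

```c
#include <stdint.h>
#include <stddef.h>

/* Base for the polynomial hash; any odd constant > 256 works for a sketch.
   Arithmetic is implicitly modulo 2^32 via uint32_t wraparound. */
enum { KR_BASE = 257 };

/* Hash of an entire window: h = s[0]*B^(len-1) + ... + s[len-1]. */
static uint32_t kr_hash(const uint8_t *s, size_t len) {
    uint32_t h = 0;
    for (size_t i = 0; i < len; ++i)
        h = h * KR_BASE + s[i];
    return h;
}

/* B^(len-1), precomputed once per window length (len must be >= 1). */
static uint32_t kr_pow(size_t len) {
    uint32_t p = 1;
    while (--len) p *= KR_BASE;
    return p;
}

/* Slide the window one byte in O(1): drop `out`, append `in`. */
static uint32_t kr_roll(uint32_t h, uint8_t out, uint8_t in, uint32_t pow) {
    return (h - out * pow) * KR_BASE + in;
}
```

A 64-bit production variant would use a stronger modulus and mixing, which is exactly where the collision-resistance concerns above come in.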

  • 4B If Statements
    5 projects | news.ycombinator.com | 27 Dec 2023
    Jokes aside, lookup tables are a common technique to avoid costly operations. I was recently implementing one to avoid integer division. In my case I knew that the numerator and denominator were 8-bit unsigned integers, so I replaced the division with 2 table lookups and 6 shifts and arithmetic operations [1]. The well-known `libdivide` [2] does that for arbitrary 16, 32, and 64-bit integers, and it has precomputed magic numbers and lookup tables for all 16-bit integers in the same repo.

    [1]: https://github.com/ashvardanian/StringZilla/blob/9f6ca3c6d3c...
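The reciprocal-table idea can be sketched in portable C. This is an illustrative reconstruction, not StringZilla's actual code: the names are hypothetical, and this version uses one lookup plus a multiply and a shift rather than the exact 2-lookup/6-operation sequence described above.

```c
#include <stdint.h>

/* recip[d] = ceil(2^16 / d): a fixed-point reciprocal per 8-bit denominator.
   Because n <= 255 and the rounding error n*e is below 2^16, the truncated
   product reproduces floor(n / d) exactly for all 8-bit inputs. */
static uint32_t recip[256];

static void init_recip(void) {
    for (uint32_t d = 1; d < 256; ++d)
        recip[d] = (65536u + d - 1) / d;
}

/* One table lookup, one multiply, one shift -- no hardware division. */
static inline uint8_t div_u8(uint8_t n, uint8_t d) {
    return (uint8_t)((n * recip[d]) >> 16);
}
```

The same trick scales to wider integers with larger magic constants and an extra correction step, which is what `libdivide` automates.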

  • Python, C, Assembly – Faster Cosine Similarity
    5 projects | news.ycombinator.com | 18 Dec 2023
    That matches my experience, and goes beyond GCC and Clang. Between 2018 and 2020 I was giving a lot of lectures on this topic and we did a bunch of case studies with Intel on their older ICC and what later became the OneAPI.

    Short story: unless you are doing trivial data-parallel operations, like in SimSIMD, compilers are practically useless. As proof, I wrote what is now the StringZilla library (https://github.com/ashvardanian/stringzilla), and we spent weeks with an Intel team tuning the compiler, with no result. So if you are processing a lot of strings, or variable-length coded data, like compression/decompression, hand-written SIMD kernels are pretty much unbeatable.

  • Stringzilla: 10x Faster SIMD-accelerated String Class
    1 project | /r/programming | 30 Aug 2023
  • Stringzilla: 10x faster SIMD-accelerated Python `str` class
    2 projects | /r/Python | 30 Aug 2023
    Blog post
  • Stringzilla: Fastest string sort, search, split, and shuffle using SIMD
    9 projects | news.ycombinator.com | 29 Aug 2023
    Copying my feedback from reddit[1], where I discussed it in the context of the `memchr` crate.[2]

    I took a quick look at your library implementation and have some notes:

    * It doesn't appear to query CPUID, so I imagine the only way it uses AVX2 on x86-64 is if the user compiles with that feature enabled explicitly. (Or uses something like [`x86-64-v3`](https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...).) The `memchr` crate doesn't need that. It will use AVX2 even if the program isn't compiled with AVX2 enabled so long as the current CPU supports it.

    * Your substring routines have multiplicative worst case (that is, `O(m * n)`) running time. The `memchr` crate only uses SIMD for substring search for smallish needles. Otherwise it flips over to Two-Way with a SIMD prefilter. You'll be fine for short needles, but things could go very very badly for longer needles.

    * It seems quite likely that your [confirmation step](https://github.com/ashvardanian/Stringzilla/blob/fab854dc4fd...) is going to absolutely kill performance for even semi-frequently occurring candidates. The [`memchr` crate utilizes information from the vector step to limit where and when it calls `memcmp`](https://github.com/BurntSushi/memchr/blob/46620054ff25b16d22...). Your code might do well in cases where matches are very rare. I took a quick peek at your benchmarks and don't see anything that obviously stresses this particular case. For substring search, the `memchr` crate uses a variant of the "[generic SIMD](http://0x80.pl/articles/simd-strfind.html#first-and-last)" algorithm. Basically, it takes two bytes from the needle, looks for positions where those occur and then attempts to check whether that position corresponds to a match. It looks like your technique uses the first 4 bytes. I suspect that might be overkill. (I did try using 3 bytes from the needle and found that it was a bit slower in some cases.) That is, two bytes is usually enough predictive power to lower the false positive rate enough. Of course, one can write pathological inputs that cause either one to do better than the other. (The `memchr` crate benchmark suite has a [collection of pathological inputs](https://github.com/BurntSushi/memchr/blob/46620054ff25b16d22...).)

    It would actually be possible to hook Stringzilla up to `memchr`'s benchmark suite if you were interested. :-)

    [1]: https://old.reddit.com/r/rust/comments/163ph8r/memchr_26_now...

    [2]: https://github.com/BurntSushi/memchr
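The "generic SIMD" prefilter discussed above can be illustrated with a scalar sketch: a position is a candidate only if both the first and last bytes of the needle line up there, and only candidates are confirmed with `memcmp`. A real implementation checks 16-64 positions per step with vector compares; the function name here is hypothetical:

```c
#include <string.h>
#include <stddef.h>

/* Scalar model of the first-and-last-byte filter. Cheap byte comparisons
   reject most positions; memcmp runs only on surviving candidates. */
static const char *find_substr(const char *h, size_t h_len,
                               const char *n, size_t n_len) {
    if (n_len == 0) return h;
    if (n_len > h_len) return NULL;
    char first = n[0], last = n[n_len - 1];
    for (size_t i = 0; i + n_len <= h_len; ++i)
        if (h[i] == first && h[i + n_len - 1] == last &&
            memcmp(h + i, n, n_len) == 0)
            return h + i;
    return NULL;
}
```

The pathological cases mentioned above are inputs where the filter bytes match constantly (e.g. a needle and haystack of mostly one repeated byte), forcing a `memcmp` at nearly every position.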

  • Show HN: Faking SIMD to Search and Sort Strings 5x Faster
    1 project | news.ycombinator.com | 26 Aug 2023
    I took a look at Stringzilla (https://github.com/ashvardanian/stringzilla), and in addition to the impressive benchmarks, the API looks pretty straightforward. It's a new star in my collection!

    Thanks for open-sourcing this project!

SimSIMD

Posts with mentions or reviews of SimSIMD. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-28.
  • Deep Learning in JavaScript
    11 projects | news.ycombinator.com | 28 Mar 2024
  • From slow to SIMD: A Go optimization story
    10 projects | news.ycombinator.com | 23 Jan 2024
    For other languages (including nodejs/bun/rust/python etc.) you can have a look at SimSIMD, which I have contributed to this year (I made precompiled binaries for nodejs/bun part of the build process for x86_64 and arm64 on Mac and Linux, and x86 and x86_64 on Windows).

    [0] https://github.com/ashvardanian/SimSIMD

  • Python, C, Assembly – Faster Cosine Similarity
    5 projects | news.ycombinator.com | 18 Dec 2023
    Kahan floats are also commonly used in such cases, but I believe there is room for improvement without hitting those extremes. First of all, we should tune the epsilon here: https://github.com/ashvardanian/SimSIMD/blob/f8ff727dcddcd14...

    As for the 64-bit version, it's harder, as the higher-precision `rsqrt` approximations are only available with "AVX512ER". I'm not sure which CPUs support that, but it's not available on Sapphire Rapids.

  • Beating GCC 12 - 118x Speedup for Jensen Shannon Divergence via AVX-512FP16
    1 project | /r/programming | 26 Oct 2023
  • Show HN: Beating GCC 12 – 118x Speedup for Jensen Shannon D. Via AVX-512FP16
    1 project | news.ycombinator.com | 24 Oct 2023
  • SimSIMD v2: Vector Similarity Functions 3x-200x Faster than SciPy and NumPy
    1 project | /r/programming | 7 Oct 2023
  • Show HN: SimSIMD vs. SciPy: How AVX-512 and SVE make SIMD cleaner and ML faster
    16 projects | news.ycombinator.com | 7 Oct 2023
    I encourage merging this into e.g. NumPy or SciPy; are there PRs?

    Though SymPy.physics only supports X, Y, Z vectors so far and doesn't mention e.g. "jaccard", FWIW: https://docs.sympy.org/latest/modules/physics/vector/vectors... https://docs.sympy.org/latest/modules/physics/vector/fields.... #cfd

    include/simsimd/simsimd.h: https://github.com/ashvardanian/SimSIMD/blob/main/include/si...

    conda-forge maintainer docs > Switching BLAS implementation:

  • SimSIMD v2: 3-200x Faster Vector Similarity Functions than SciPy and NumPy
    1 project | /r/Python | 7 Oct 2023
    Hello, everybody! I was working on the next major release of USearch, and in the process, I decided to generalize its underlying library - SimSIMD. It does one very simple job but does it well - computing distances and similarities between high-dimensional embeddings standard in modern AI workloads.
  • Comparing Vectors 3-200x Faster than SciPy and NumPy
    1 project | /r/Python | 7 Oct 2023
  • Show HN: USearch Images demo in 200 lines of Python
    3 projects | news.ycombinator.com | 7 Sep 2023
    Hey everyone! I am excited to share updates on four of my & my teams' open-source projects that take large-scale search systems to the next level: USearch, UForm, UCall, and StringZilla. These projects are designed to work seamlessly together, end-to-endβ€”covering everything from indexing and AI to storage and networking. And yeah, they're optimized for x86 AVX2/512 and Arm NEON/SVE hardware.

    USearch [1]: Think of it as Meta FAISS on steroids. It's now quicker, supports clustering of any granularity, and offers multi-index lookups. Plus, it's got more native bindings than probably all other vector search engines combined: C++, C, Python, Java, JavaScript, Rust, Obj-C, Swift, C#, GoLang, and even slightly outdated bindings for Wolfram. Need to refresh that last one!

    UForm v2 [2]: Imagine a much smaller OpenAI CLIP but more efficient and trained on balanced multilingual datasets, with equal exposure to languages from English, Chinese, and Hindi to Arabic, Hebrew, and Armenian. UForm now supports 21 languages, is so tiny that you can run it in the browser, and outputs small 256-dimensional embeddings. Perfect for rapid image and video searches. It's already available on Hugging-Face as "unum-cloud/uform-vl-multilingual-v2".

    UCall [3]: It started as a FastAPI alternative focusing on JSON-RPC (instead of REST protocols), offering 70x the bandwidth and 1/50th the latency. It was good but not enough, so we've added REST and TLS support, broadening its appeal. I've merged that code, and it is yet to be tested. Early benchmarks suggest that we still hit the same 150'000-250'000 requests/s on a single CPU core in Python by reusing HTTPS connections.

    StringZilla [4]: This project lets you sift through multi-gigabyte or terabyte strings with minimal use of RAM and maximal use of SIMD and SWAR techniques.

    All these projects are engineered for scalability and efficiency, even on tight budgets. Our demo, for instance, works on hundreds of gigabytes of images using just a few gigabytes of RAM and no GPUs for AI inference. That is a toy example with a small, noisy dataset, and I look forward to showing a much larger setup. Interestingly, even this tiny setup illustrates issues common to UForm and much larger OpenAI CLIP models - the quality of Multi-Modal alignment [5]. It also shows how different/accurate the search results are across different languages. Synthetic benchmarks suggest massive improvements for some low-resource languages (like Armenian and Hebrew) and more popular ones (like Hindi and Arabic) [6]. Still, when we look at visual demos like this, I can see a long road ahead for us and the broader industry, making LLMs Multi-Modal in 2024 :)

    All of the projects and the demo code are available under an Apache license, so feel free to use them in your commercial projects :)

    PS: The demo looks much nicer with just Unsplash dataset of 25'000 images, but it's less representative of modern AI datasets, too small, and may not be the best way to honestly show our current weaknesses. The second dataset - Conceptual Captions - is much noisier, and quite ugly.

    [1]: https://github.com/unum-cloud/usearch

What are some alternatives?

When comparing StringZilla and SimSIMD you can also consider the following projects:

usearch - Fast Open-Source Search & Clustering engine Γ— for Vectors & πŸ”œ Strings Γ— in C++, C, Python, JavaScript, Rust, Java, Objective-C, Swift, C#, GoLang, and Wolfram πŸ”

kuzu - Embeddable property graph database management system built for query speed and scalability. Implements Cypher.

Simd - C++ image processing and machine learning library using SIMD: SSE, AVX, AVX-512, AMX for x86/x64, VMX (Altivec) and VSX (Power7) for PowerPC, NEON for ARM.

nsimd - Agenium Scale vectorization library for CPUs and GPUs

aho-corasick - A fast implementation of Aho-Corasick in Rust.

numpy-feedstock - A conda-smithy repository for numpy.

rust-memchr - Optimized string search routines for Rust.

mkl_random-feedstock - A conda-smithy repository for mkl_random.

popular-baby-names - 1,000 most popular names for baby boys and girls in CSV and JSON formats. Generator written in Python.

rebar - A biased barometer for gauging the relative speed of some regex engines on a curated set of tasks.

xtensor-fftw - FFTW bindings for the xtensor C++14 multi-dimensional array library