annoy VS google-research

Compare annoy vs google-research and see what their differences are.

                annoy                google-research
Mentions        40                   98
Stars           12,692               32,804
Growth          1.5%                 1.5%
Activity        5.3                  9.6
Latest commit   3 months ago         1 day ago
Language        C++                  Jupyter Notebook
License         Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

annoy

Posts with mentions or reviews of annoy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-05.
  • Do we think about vector dbs wrong?
    7 projects | news.ycombinator.com | 5 Sep 2023
    The focus on the top 10 in vector search is a product of wanting to prove value over keyword search. Keyword search is going to miss some conceptual matches. You can try to work around that with tokenization and complex queries with all variations but it's not easy.

    Vector search isn't all that new a concept. For example, the annoy library (https://github.com/spotify/annoy) has been around since 2014. It was one of the first open source approximate nearest neighbor libraries. Recommendations have always been a good use case for vector similarity.

    Recommendations are a natural extension of search and transformers models made building the vectors for natural language possible. To prove the worth of vector search over keyword search, the focus was always on showing how the top N matches include results not possible with keyword search.

    In 2023, there has been a shift towards acknowledging keyword search also has value and that a combination of vector + keyword search (aka hybrid search) operates in the sweet spot. Once again this is validated through the same benchmarks which focus on the top 10.

    On top of all this, there is also the reality that the vector database space is very crowded and some want to use their performance benchmarks for marketing.

    Disclaimer: I am the author of txtai (https://github.com/neuml/txtai), an open source embeddings database
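
    For readers unfamiliar with annoy's API, a minimal sketch of typical usage from Python; the dimension, metric, and tree count below are illustrative placeholders, not recommendations:

      # pip install annoy
      import random
      from annoy import AnnoyIndex

      dim = 64                             # vector dimensionality (placeholder)
      index = AnnoyIndex(dim, "angular")   # "angular" ~ cosine; "euclidean", "manhattan", "dot" also supported

      # Add 1,000 random vectors by integer id (stand-ins for real embeddings).
      for i in range(1000):
          index.add_item(i, [random.gauss(0, 1) for _ in range(dim)])

      index.build(10)                      # 10 trees; more trees -> better recall, bigger index
      index.save("vectors.ann")            # indexes are mmap-ed, so they load fast and can be shared

      neighbors = index.get_nns_by_item(0, 5)   # ids of the 5 nearest neighbors of item 0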

  • Vector Databases 101
    3 projects | /r/datascience | 25 Jun 2023
    If you want to go larger, you could still use some simple setup in conjunction with faiss, annoy, or hnsw.
  • I'm an undergraduate data science intern and trying to run kmodes clustering. Did this elbow method to figure out how many clusters to use, but I don't really see an "elbow". Tips on number of clusters?
    2 projects | /r/datascience | 21 Jun 2023
  • Calculating document similarity in a special domain
    1 project | /r/LanguageTechnology | 1 Jun 2023
    I then use annoy to compare them. Annoy can use different distance measures, like cosine, Euclidean, and more.
  • Can Parquet file format index string columns?
    1 project | /r/dataengineering | 27 May 2023
    Yes, you can do this for equality predicates if your row groups are sorted. This blog post (that I didn't write) might add more color. You can't do this for any kind of text searching. If you need to do this with file-based storage, I'd recommend using a vector-based text search and utilizing an ANN index library like Annoy.
  • [D]: Best nearest neighbour search for high dimensions
    4 projects | /r/MachineLearning | 17 May 2023
    If you need large scale (1000+ dimensions, millions+ source points, >1000 queries per second) and accept imperfect results / approximate nearest neighbors, then other people have already mentioned some of the best libraries (FAISS, Annoy).
  • Billion-Scale Approximate Nearest Neighbor Search [pdf]
    1 project | news.ycombinator.com | 6 May 2023
  • [R] Unlimiformer: Long-Range Transformers with Unlimited Length Input
    1 project | /r/MachineLearning | 5 May 2023
    Would it be possible to further speed up the process by using something like ANNOY? https://github.com/spotify/annoy
  • Faiss: A library for efficient similarity search
    14 projects | news.ycombinator.com | 30 Mar 2023
    I like Faiss but I tried Spotify's annoy[1] for a recent project and was pretty impressed.

    Since lots of people don't seem to understand how useful these embedding libraries are, here's an example. I built a thing that indexes bouldering and climbing competition videos, then builds an embedding of the climber's body position per frame. I can then automatically match different climbers on the same problem.

    It works pretty well. Since the body positions are 3D it works reasonably well across camera angles.

    The biggest problem is getting the embedding right. I simplified it a lot above because I actually need to embed the problem shape itself because otherwise it matches too well: you get frames of people in identical positions but on different problems!

    [1] https://github.com/spotify/annoy
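
    Since this thread is about Faiss, a minimal sketch of its flat (exact) index for comparison; the data and dimensions are placeholders:

      # pip install faiss-cpu
      import numpy as np
      import faiss

      d = 128                                            # vector dimension (placeholder)
      xb = np.random.rand(10000, d).astype(np.float32)   # database vectors
      xq = np.random.rand(5, d).astype(np.float32)       # query vectors

      index = faiss.IndexFlatL2(d)   # exact L2 search; IndexHNSWFlat / IndexIVFFlat are ANN variants
      index.add(xb)

      distances, ids = index.search(xq, 10)   # 10 nearest neighbors per query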

  • How to find "k" nearest embeddings in a space with a very large number of N embeddings (efficiently)?
    3 projects | /r/MLQuestions | 23 Feb 2023
    If you just want quick in-memory search, then pynndescent is a decent option: it's easy to install and easy to get running. Another good option is Annoy; it's just as easy to install and get running with Python, but it is a little less performant if you want to do a lot of queries or get a knn-graph quickly.
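
    A minimal sketch of the pynndescent flow mentioned above; the data shapes and metric are placeholders:

      # pip install pynndescent
      import numpy as np
      from pynndescent import NNDescent

      data = np.random.rand(10000, 128).astype(np.float32)     # placeholder corpus
      queries = np.random.rand(5, 128).astype(np.float32)      # placeholder queries

      index = NNDescent(data, metric="cosine")   # builds an approximate k-NN graph
      index.prepare()                            # finalize search structures up front

      neighbor_ids, distances = index.query(queries, k=10)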

google-research

Posts with mentions or reviews of google-research. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-10.
  • Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
    11 projects | news.ycombinator.com | 10 Apr 2024
    People on here will be happy to say that I do a similar thing, however my sequence length is dynamic because I also use a 2nd data structure - I'll use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likeliness and separately a trie that models all words and phrases (so, n-gram). Not sure how many total nodes because sentence lengths vary in training data, but there are about 200,000 entry points (keys) so probably about 2-10 million total nodes in the default setup.

    "Constructing 7-gram LM": They likely started with bigrams (what I use) which only tells you the next word based on 1 word given, and thought to increase accuracy by modeling out more words in a sequence, and eventually let the user (developer) pass in any amount they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of just keeping them as bigrams and making a totally separate structure that models out an n-gram of all phrases (e.g. could be a 24-token long sequence or 100+ tokens etc. I model it all) and if that phrase is found, then I just get the bigram assumption of the last token of the phrase. This works better when the training data is more diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only change wildly toward the end - I don't find this pattern in typical data though, maybe for certain coding and other tasks there are those patterns though. But because it's not dynamic and they make you provide that number, even a low number (any phrase longer than 2 words) - theirs will always have to do more lookup work than with simple bigrams and they're also limited by that fixed number as far as accuracy. I wonder how scalable that is - if I need to train on occasional ~100-word long sentences but also (and mostly) just ~3-word long sentences, I guess I set this to 100 and have a mostly "undefined" trie.

    I also thought of the name "LMJS", theirs is "jslm" :) but I went with simply "next-token-prediction" because that's what it ultimately does as a library. I don't know what theirs is really designed for other than proving a concept. Most of their code files are actually comments and hypothetical scenarios.

    I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)

    And next I'm implementing 8-dimensional embeddings that are converted to normalized vectors between 0-1 to see if doing math on them does anything useful beyond similarity, right now they look like this:

      [nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
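
    The bigram half of the setup described above is easy to sketch; this is an illustration of the general technique, not the poster's actual code:

      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat lay on the rug".split()   # toy training data

      # Bigram model: for each token, count which token follows it.
      bigrams = defaultdict(Counter)
      for current_word, next_word in zip(corpus, corpus[1:]):
          bigrams[current_word][next_word] += 1

      def predict_next(token):
          """Return the most likely next token, or None for unseen tokens."""
          counts = bigrams.get(token)
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("the"))   # -> "cat", the most frequent continuation
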
  • Google Research website is down
    1 project | news.ycombinator.com | 5 Apr 2024
  • Jpegli: A New JPEG Coding Library
    9 projects | news.ycombinator.com | 3 Apr 2024
    The change was literally just made: https://github.com/google-research/google-research/commit/4a...

    It appears this was in response to Hacker News comments.

  • Multi-bitrate JPEG compression perceptual evaluation dataset 2023
    1 project | news.ycombinator.com | 31 Jan 2024
  • Vector Databases: A Technical Primer [pdf]
    7 projects | news.ycombinator.com | 12 Jan 2024
    There are options such as Google's ScaNN that may let you go farther before needing to consider specialized databases.

    https://github.com/google-research/google-research/blob/mast...
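
    For reference, ScaNN's Python API looks roughly like the sketch below, adapted from the example in its README; every parameter value here is an illustrative placeholder, not a tuned recommendation:

      # pip install scann
      import numpy as np
      import scann

      dataset = np.random.rand(100000, 128).astype(np.float32)
      dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)   # normalize for dot-product search

      searcher = (
          scann.scann_ops_pybind.builder(dataset, 10, "dot_product")
          .tree(num_leaves=1000, num_leaves_to_search=100, training_sample_size=25000)
          .score_ah(2, anisotropic_quantization_threshold=0.2)
          .reorder(100)
          .build()
      )

      neighbors, distances = searcher.search(dataset[0])   # top-10 neighbors of one query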

  • Labs.Google
    1 project | news.ycombinator.com | 22 Dec 2023
    I feel it was unnecessary to create this because https://research.google/ already exists? It just seems like they want to take another URL with a "pure" domain name instead of subdirectories, etc.
  • Smerf: Streamable Memory Efficient Radiance Fields
    3 projects | news.ycombinator.com | 13 Dec 2023
    https://github.com/google-research/google-research/blob/mast...
  • Shisa 7B: a new JA/EN bilingual model based on Mistral 7B
    2 projects | /r/LocalLLaMA | 7 Dec 2023
    You could also try some dedicated translation models like https://huggingface.co/facebook/nllb-moe-54b (or https://github.com/google-research/google-research/tree/master/madlad_400 for something smaller) and see how they do.
  • Translate to and from 400+ languages locally with MADLAD-400
    1 project | /r/LocalLLaMA | 10 Nov 2023
    Google released T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. Turns out the vocabulary was wrong, but they uploaded the correct one last week.
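
    A sketch of running one of the converted MADLAD-400 checkpoints through Hugging Face transformers; the model id and the "<2xx>" target-language prefix follow the Hugging Face model card and should be treated as assumptions to verify:

      # pip install transformers sentencepiece
      from transformers import T5ForConditionalGeneration, T5Tokenizer

      model_name = "google/madlad400-3b-mt"   # assumed model id; check the Hugging Face hub
      tokenizer = T5Tokenizer.from_pretrained(model_name)
      model = T5ForConditionalGeneration.from_pretrained(model_name)

      # "<2de>" selects German as the target language (assumed prefix format).
      inputs = tokenizer("<2de> How are you today?", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=64)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
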
  • Mastering ROUGE Matrix: Your Guide to Large Language Model Evaluation for Summarization with Examples
    2 projects | dev.to | 8 Oct 2023
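
    ROUGE is one place where this repo is directly useful: the widely used rouge-score package lives under google-research. A minimal sketch of scoring a candidate summary against a reference:

      # pip install rouge-score
      from rouge_score import rouge_scorer

      scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
      scores = scorer.score(
          "the quick brown fox jumps over the lazy dog",   # reference summary
          "a quick brown fox jumped over a lazy dog",      # candidate summary
      )
      for name, result in scores.items():
          print(name, round(result.fmeasure, 3))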

What are some alternatives?

When comparing annoy and google-research you can also consider the following projects:

faiss - A library for efficient similarity search and clustering of dense vectors.

qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

hnswlib - Header-only C++/python library for fast approximate nearest neighbors

fast-soft-sort - Fast Differentiable Sorting and Ranking

implicit - Fast Python Collaborative Filtering for Implicit Feedback Datasets

Milvus - A cloud-native vector database, storage for next generation AI applications

ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

TensorRec - A TensorFlow recommendation algorithm and framework in Python.

fastFM - fastFM: A Library for Factorization Machines

struct2depth - Models and examples built with TensorFlow