TensorRec vs annoy
| | TensorRec | annoy |
|---|---|---|
| Mentions | 0 | 40 |
| Stars | 1,241 | 11,945 |
| Growth | - | 0.9% |
| Activity | 0.0 | 1.8 |
| Latest commit | 4 months ago | about 1 month ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TensorRec
We haven't tracked posts mentioning TensorRec yet.
Tracking mentions began in Dec 2020.
annoy
- Do we think about vector dbs wrong?
The focus on the top 10 in vector search is a product of wanting to prove value over keyword search. Keyword search is going to miss some conceptual matches. You can try to work around that with tokenization and complex queries covering all the variations, but it's not easy.
Vector search isn't all that new a concept. For example, the annoy library (https://github.com/spotify/annoy) has been around since 2014. It was one of the first open source approximate nearest neighbor libraries. Recommendations have always been a good use case for vector similarity.
Recommendations are a natural extension of search, and transformer models made building vectors for natural language possible. To prove the worth of vector search over keyword search, the focus was always on showing how the top N matches include results not possible with keyword search.
In 2023, there has been a shift towards acknowledging that keyword search also has value and that a combination of vector + keyword search (aka hybrid search) operates in the sweet spot. Once again, this is validated through the same benchmarks, which focus on the top 10.
On top of all this, there is also the reality that the vector database space is very crowded and some want to use their performance benchmarks for marketing.
Disclaimer: I am the author of txtai (https://github.com/neuml/txtai), an open source embeddings database
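To make the hybrid idea concrete, here is a minimal sketch that blends a BM25 keyword score with a cosine similarity score via a weight `alpha`. The `embed()` function and the 50/50 weighting are illustrative assumptions, not txtai's actual implementation.

```python
# Hybrid search sketch: blend a BM25 keyword score with vector similarity.
import numpy as np
from rank_bm25 import BM25Okapi

docs = [
    "vector databases store embeddings",
    "keyword search matches exact terms",
    "hybrid search combines both approaches",
]

bm25 = BM25Okapi([d.split() for d in docs])

def embed(text):
    # Placeholder: random unit vectors stand in for a real sentence-embedding model.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

doc_vecs = np.array([embed(d) for d in docs])

def hybrid_search(query, alpha=0.5):
    kw = bm25.get_scores(query.split())
    kw = kw / kw.max() if kw.max() > 0 else kw   # normalize keyword scores
    vec = doc_vecs @ embed(query)                # cosine similarity (unit vectors)
    blended = alpha * vec + (1 - alpha) * kw     # the hybrid weighting
    return sorted(zip(blended, docs), reverse=True)

for score, doc in hybrid_search("combining keyword and vector search"):
    print(f"{score:.3f}  {doc}")
```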
- Vector Databases 101
If you want to go larger, you could still use a simple setup in conjunction with faiss, annoy, or hnsw.
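For reference, a minimal Annoy version of such a setup might look like the sketch below; the dimensionality and tree count are arbitrary choices for illustration.

```python
# Minimal in-memory ANN setup with Annoy: add vectors, build, query.
import numpy as np
from annoy import AnnoyIndex

dim = 128                              # embedding dimensionality (arbitrary)
index = AnnoyIndex(dim, "angular")     # angular distance ~ cosine similarity

vectors = np.random.normal(size=(10_000, dim)).astype("float32")
for i, v in enumerate(vectors):
    index.add_item(i, v)

index.build(10)                        # 10 trees; more trees -> better recall
ids, dists = index.get_nns_by_vector(vectors[0], 10, include_distances=True)
print(ids, dists)
```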
- I'm an undergraduate data science intern and trying to run kmodes clustering. Did this elbow method to figure out how many clusters to use, but I don't really see an "elbow". Tips on number of clusters?
- [D]: Best nearest neighbour search for high dimensions
If you need large scale (1000+ dimensions, millions+ source points, >1000 queries per second) and accept imperfect results / approximate nearest neighbors, then other people have already mentioned some of the best libraries (FAISS, Annoy).
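At that scale, an inverted-file FAISS index is a common choice. The sketch below is scaled down so it runs quickly, but the structure is the same for millions of points; the cluster and probe counts are illustrative.

```python
# Approximate search with a FAISS IVF index: cluster the data, then search
# only a few clusters per query.
import numpy as np
import faiss

d = 1024                                              # high-dimensional embeddings
xb = np.random.random((20_000, d)).astype("float32")  # database vectors
xq = np.random.random((10, d)).astype("float32")      # query vectors

nlist = 128                                 # number of coarse clusters
quantizer = faiss.IndexFlatL2(d)            # exact index used to assign clusters
index = faiss.IndexIVFFlat(quantizer, d, nlist)

index.train(xb)                             # learn the cluster centroids
index.add(xb)
index.nprobe = 8                            # clusters visited per query (speed/recall knob)

distances, ids = index.search(xq, 10)       # top-10 approximate neighbors per query
print(ids[0])
```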
- Faiss: A library for efficient similarity search
I like Faiss but I tried Spotify's annoy[1] for a recent project and was pretty impressed.
Since lots of people don't seem to understand how useful these embedding libraries are, here's an example. I built a thing that indexes bouldering and climbing competition videos, then builds an embedding of the climber's body position per frame. I can then automatically match different climbers on the same problem.
It works pretty well. Since the body positions are 3D it works reasonably well across camera angles.
The biggest problem is getting the embedding right. I simplified it a lot above: I actually need to embed the problem shape itself as well, because otherwise it matches too well and you get frames of people in identical positions but on different problems!
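A rough sketch of that pipeline: concatenate a pose embedding with a problem-shape embedding so matches are constrained to the same problem. The `pose_embedding()` and `problem_embedding()` functions are hypothetical stand-ins for the real models.

```python
import numpy as np
from annoy import AnnoyIndex

# (video_id, frame_number, problem_id) metadata for each indexed frame.
frames = ([("video_a", f, "problem_1") for f in range(100)]
          + [("video_b", f, "problem_1") for f in range(100)])

def pose_embedding(video, frame):
    # Placeholder: random noise stands in for a real 3D-pose model.
    rng = np.random.default_rng(hash((video, frame)) % 2**32)
    return rng.normal(size=64)

def problem_embedding(problem):
    # Placeholder: stands in for a model that encodes the problem's shape.
    rng = np.random.default_rng(hash(problem) % 2**32)
    return rng.normal(size=16)

dim = 64 + 16
index = AnnoyIndex(dim, "angular")
for i, (video, frame, problem) in enumerate(frames):
    v = np.concatenate([pose_embedding(video, frame), problem_embedding(problem)])
    index.add_item(i, v)
index.build(10)

# Find frames from other videos with a similar body position on the same problem.
query = np.concatenate([pose_embedding("video_a", 42), problem_embedding("problem_1")])
for i in index.get_nns_by_vector(query, 5):
    print(frames[i])
```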
- How to find "k" nearest embeddings in a space with a very large number of N embeddings (efficiently)?
If you just want quick in-memory search, then pynndescent is a decent option: it's easy to install and easy to get running. Another good option is Annoy; it's just as easy to install and get running with Python, but it is a little less performant if you want to do a lot of queries or get a knn-graph quickly.
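For context, the pynndescent path really is just a couple of calls; the dataset sizes here are placeholders.

```python
# Quick in-memory ANN with pynndescent: building the index also yields a knn-graph.
import numpy as np
import pynndescent

data = np.random.random((5_000, 50)).astype("float32")

index = pynndescent.NNDescent(data)            # builds the index (and a knn-graph)
graph_ids, graph_dists = index.neighbor_graph  # knn-graph over the training data

ids, dists = index.query(data[:3], k=10)       # nearest neighbors for new points
print(ids)
```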
- [D] Algorithms for efficiently computing the approximate nearest neighbour from a large bag of elements
- [Discussion] NLP for products matching
I probably won't be able to explain it better than it's stated on the annoy page: https://github.com/spotify/annoy. But the bottom line is speed: instead of computing similarities of embeddings one by one, you do it via an index, which works much faster.
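The contrast is easy to sketch: brute force scores the query against every embedding on every lookup, while the index pays a one-time build cost and then only visits a small fraction of the data per query. Sizes below are arbitrary, and at this small scale both are fast; the gap grows with millions of vectors.

```python
# Timing sketch: score against every embedding vs. query a prebuilt index.
import time
import numpy as np
from annoy import AnnoyIndex

n, dim = 50_000, 64
vectors = np.random.normal(size=(n, dim)).astype("float32")
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
query = vectors[0]

# One-by-one / brute force: a full pass over all n embeddings per query.
t0 = time.perf_counter()
scores = vectors @ query
exact = np.argsort(-scores)[:10]
print("brute force:", time.perf_counter() - t0)

# Annoy: pay the build cost once, then each lookup is sublinear in n.
index = AnnoyIndex(dim, "angular")
for i, v in enumerate(vectors):
    index.add_item(i, v)
index.build(10)

t0 = time.perf_counter()
approx = index.get_nns_by_vector(query, 10)
print("annoy query:", time.perf_counter() - t0)
```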
- Do I really need a vector database?
Perhaps you can store your embeddings anywhere (SQL or even a file) and use an approximate nearest neighbors library like https://github.com/spotify/annoy for comparison?
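That split works well with Annoy because the index itself is a memory-mapped file: keep the raw embeddings wherever is convenient and store the `.ann` file alongside them. A small sketch, with placeholder file names:

```python
# Store embeddings in a plain file; keep a separate Annoy index for fast lookups.
import numpy as np
from annoy import AnnoyIndex

dim = 32
embeddings = np.random.normal(size=(1_000, dim)).astype("float32")
np.save("embeddings.npy", embeddings)   # the "store anywhere" part

index = AnnoyIndex(dim, "angular")
for i, v in enumerate(embeddings):
    index.add_item(i, v)
index.build(10)
index.save("embeddings.ann")            # writes a memory-mappable index file

# Later / in another process: load both and query.
index = AnnoyIndex(dim, "angular")
index.load("embeddings.ann")            # mmap: loading is near-instant
print(index.get_nns_by_vector(np.load("embeddings.npy")[0], 5))
```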
What are some alternatives?
faiss - A library for efficient similarity search and clustering of dense vectors.
hnswlib - Header-only C++/Python library for fast approximate nearest neighbors
implicit - Fast Python Collaborative Filtering for Implicit Feedback Datasets
Milvus - A cloud-native vector database, storage for next generation AI applications
fastFM - fastFM: A Library for Factorization Machines
spotlight - Deep recommender models using PyTorch.
awesome-vector-search - Collections of vector search related libraries, service and research papers
libffm - A Library for Field-aware Factorization Machines