annoy
marqo
| | annoy | marqo |
|---|---|---|
| Mentions | 40 | 114 |
| Stars | 12,692 | 4,111 |
| Stars growth | 1.5% | 3.5% |
| Activity | 5.3 | 9.3 |
| Latest commit | 3 months ago | 7 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
annoy
-
Do we think about vector dbs wrong?
The focus on the top 10 in vector search is a product of wanting to prove value over keyword search. Keyword search will miss some conceptual matches. You can try to work around that with tokenization and complex queries that cover all the variations, but it's not easy.
Vector search isn't all that new a concept. For example, the annoy library (https://github.com/spotify/annoy) has been around since 2014. It was one of the first open source approximate nearest neighbor libraries. Recommendations have always been a good use case for vector similarity.
Recommendations are a natural extension of search, and transformer models made building the vectors for natural language possible. To prove the worth of vector search over keyword search, the focus was always on showing how the top N matches include results that aren't possible with keyword search.
In 2023, there has been a shift towards acknowledging that keyword search also has value and that a combination of vector + keyword search (aka hybrid search) operates in the sweet spot. Once again, this is validated through the same benchmarks, which focus on the top 10.
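A common way to combine keyword and vector rankings in hybrid search is reciprocal rank fusion (RRF). This is a minimal sketch of the idea, not any particular database's implementation; the function name, the example document ids, and the smoothing constant `k=60` are illustrative:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one ranking.

    Each document's score is the sum of 1 / (k + rank) over every
    list it appears in; k damps the influence of the very top ranks.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse a keyword (BM25-style) ranking with a vector-similarity ranking.
keyword_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc9", "doc3"]
fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
```

Documents that appear in both lists (here `doc1` and `doc3`) accumulate score from each, which is what lets hybrid search surface results that neither ranking alone puts first.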
On top of all this, there is also the reality that the vector database space is very crowded and some want to use their performance benchmarks for marketing.
Disclaimer: I am the author of txtai (https://github.com/neuml/txtai), an open source embeddings database
-
Vector Databases 101
If you want to go larger you could still use some simple setup in conjunction with faiss, annoy or hnsw.
- I'm an undergraduate data science intern trying to run k-modes clustering. I used the elbow method to figure out how many clusters to use, but I don't really see an "elbow". Any tips on choosing the number of clusters?
-
Calculating document similarity in a special domain
I then use Annoy to compare them. Annoy can use different distance measures, like cosine (which it calls "angular"), Euclidean, and more.
-
Can Parquet file format index string columns?
Yes, you can do this for equality predicates if your row groups are sorted. This blog post (which I didn't write) might add more color. You can't do this for any kind of text searching. If you need to do that with file-based storage, I'd recommend using vector-based text search with an ANN index library like Annoy.
-
[D]: Best nearest neighbour search for high dimensions
If you need large scale (1000+ dimension, millions+ source points, >1000 queries per second) and accept imperfect results / approximate nearest neighbors, then other people have already mentioned some of the best libraries (FAISS, Annoy).
- Billion-Scale Approximate Nearest Neighbor Search [pdf]
-
[R] Unlimiformer: Long-Range Transformers with Unlimited Length Input
Would it be possible to further speed up the process by using something like Annoy? https://github.com/spotify/annoy
-
Faiss: A library for efficient similarity search
I like Faiss but I tried Spotify's annoy[1] for a recent project and was pretty impressed.
Since lots of people don't seem to understand how useful these embedding libraries are, here's an example. I built a thing that indexes bouldering and climbing competition videos, then builds an embedding of the climber's body position per frame. I can then automatically match different climbers on the same problem.
It works pretty well. Since the body positions are 3D it works reasonably well across camera angles.
The biggest problem is getting the embedding right. I simplified it a lot above because I actually need to embed the problem shape itself because otherwise it matches too well: you get frames of people in identical positions but on different problems!
[1] https://github.com/spotify/annoy
-
How to find "k" nearest embeddings in a space with a very large number of N embeddings (efficiently)?
If you just want quick in-memory search then pynndescent is a decent option: it's easy to install and easy to get running. Another good option is Annoy; it's just as easy to install and get running with Python, but it is a little less performant if you want to do a lot of queries or get a knn-graph quickly.
marqo
-
Are we at peak vector database?
We (Marqo) are doing a lot on 1 and 2. There is a huge amount to be done on the ML side of vector search and we are investing heavily in it. I think it has not quite sunk in that vector search systems are ML systems, with everything that comes with that. I would love to chat about 1 and 2, so feel free to email me (email is in my profile). What we have done so far is here -> https://github.com/marqo-ai/marqo
-
Qdrant, the Vector Search Database, raised $28M in a Series A round
Marqo.ai (https://github.com/marqo-ai/marqo) is doing some interesting stuff and is oss. We handle embedding generation as well as retrieval (full disclosure, I work for Marqo.ai)
-
Ask HN: Is there any good semantic search GUI for images or documents?
Take a look here https://github.com/marqo-ai/local-image-search-demo. It is based on https://github.com/marqo-ai/marqo. We do a lot of image search applications. Feel free to reach out if you have other questions (email in profile).
-
90x Faster Than Pgvector – Lantern's HNSW Index Creation Time
That sounds much longer than it should take. I am not sure of your exact use case, but I would encourage you to check out Marqo (https://github.com/marqo-ai/marqo - disclaimer, I am a co-founder). All inference and orchestration is included (no API calls) and many open-source or fine-tuned models can be used.
-
Embeddings: What they are and why they matter
Try this: https://github.com/marqo-ai/marqo, which handles all the chunking for you (and is configurable). It also handles chunking of images in an analogous way. This enables highlighting in longer docs, and likewise for images, in a single retrieval step.
-
Choosing vector database: a side-by-side comparison
As others have correctly pointed out, making a vector search or recommendation application requires a lot more than similarity alone. We have seen HNSW become commoditised and the real value lies elsewhere. Just because a database has vector functionality doesn't mean it will actually service anything beyond "hello world" type semantic search applications. IMHO these have questionable value, much like the simple Q&A RAG applications that have proliferated.

The elephant in the room with these systems is that if you are relying on machine learning models to produce the vectors, you are going to need to invest heavily in the ML components of the system. Domain-specific models are a must if you want to be a serious contender to an existing search system, and all the usual considerations still apply regarding frequent retraining and monitoring of the models. Currently this is left as an exercise for the reader - and a very large one at that.

We (https://github.com/marqo-ai/marqo, I am a co-founder) are investing heavily in making the ML production-worthy, with continuous learning from feedback of the models as part of the system. There are lots of other things to think about too: how you represent documents with multiple vectors, multimodality, late interactions, the interplay between embedding quality and HNSW graph quality (i.e. recall), and much more.
- Show HN: Marqo – Vectorless Vector Search
-
AI for AWS Documentation
Marqo provides automatic, configurable chunking (for example, with overlap) and lets you bring your own model or choose from a wide range of open-source models. I think e5-large would be a good one to try. https://github.com/marqo-ai/marqo
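Chunking with overlap means adjacent chunks share a window of text, so a sentence cut at a boundary still appears whole in at least one chunk. This is just a sketch of the concept in plain Python, not Marqo's actual implementation; the word-based splitting and parameter values are illustrative:

```python
def chunk_text(text, chunk_size=128, overlap=32):
    """Split text into chunks of up to chunk_size words, where each
    chunk repeats the last `overlap` words of the previous one."""
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the final chunk already reaches the end of the text
    return chunks

# 300 distinct tokens w0..w299 make the overlap easy to see.
doc = " ".join(f"w{i}" for i in range(300))
chunks = chunk_text(doc, chunk_size=128, overlap=32)
```

Each chunk is then embedded separately, which is also what makes per-chunk highlighting possible: the best-matching chunk localizes the hit inside a long document.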
-
[N] Open-source search engine Meilisearch launches vector search
Marqo has a similar API to Meilisearch's standard API but uses vector search in the background: https://github.com/marqo-ai/marqo
-
Ask HN: Which Vector Database do you recommend for LLM applications?
Have you tried Marqo? Check the repo: https://github.com/marqo-ai/marqo
What are some alternatives?
faiss - A library for efficient similarity search and clustering of dense vectors.
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
hnswlib - Header-only C++/python library for fast approximate nearest neighbors
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
implicit - Fast Python Collaborative Filtering for Implicit Feedback Datasets
Milvus - A cloud-native vector database, storage for next generation AI applications
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
TensorRec - A TensorFlow recommendation algorithm and framework in Python.
vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.
fastFM - fastFM: A Library for Factorization Machines
marqo - Tensor search for humans. [Moved to: https://github.com/marqo-ai/marqo]