marqo vs faiss

| | marqo | faiss |
|---|---|---|
| Mentions | 114 | 71 |
| Stars | 4,124 | 28,202 |
| Growth | 1.6% | 1.9% |
| Activity | 9.3 | 9.4 |
| Latest commit | 5 days ago | 6 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marqo
-
Are we at peak vector database?
We (Marqo) are doing a lot on 1 and 2. There is a huge amount to be done on the ML side of vector search and we are investing heavily in it. I think it has not quite sunk in that vector search systems are ML systems and everything that comes with that. I would love to chat about 1 and 2 so feel free to email me (email is in my profile). What we have done so far is here -> https://github.com/marqo-ai/marqo
-
Qdrant, the Vector Search Database, raised $28M in a Series A round
Marqo.ai (https://github.com/marqo-ai/marqo) is doing some interesting stuff and is oss. We handle embedding generation as well as retrieval (full disclosure, I work for Marqo.ai)
-
Ask HN: Is there any good semantic search GUI for images or documents?
Take a look here https://github.com/marqo-ai/local-image-search-demo. It is based on https://github.com/marqo-ai/marqo. We do a lot of image search applications. Feel free to reach out if you have other questions (email in profile).
-
90x Faster Than Pgvector – Lantern's HNSW Index Creation Time
That sounds much longer than it should be. I am not sure of your exact use case, but I would encourage you to check out Marqo (https://github.com/marqo-ai/marqo - disclaimer, I am a co-founder). All inference and orchestration is included (no API calls), and many open-source or fine-tuned models can be used.
-
Embeddings: What they are and why they matter
Try https://github.com/marqo-ai/marqo, which handles all the chunking for you (and is configurable). It also handles chunking of images in an analogous way, which enables highlighting in longer docs - and in images - in a single retrieval step.
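Marqo's chunking is built in and configurable, but the basic idea of overlapping text chunks can be sketched in plain Python (a hypothetical helper for illustration, not Marqo's actual implementation):

```python
def chunk_text(text, chunk_size=20, overlap=5):
    """Split text into chunks of chunk_size words, with `overlap`
    words shared between consecutive chunks so that context is not
    cut off at chunk boundaries."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

# Each chunk is embedded separately; the best-matching chunk can then
# be surfaced as a highlight within the longer document.
chunks = chunk_text(" ".join(str(i) for i in range(25)), chunk_size=20, overlap=5)
```

With a 25-word input, 20-word chunks, and a 5-word overlap, this yields two chunks whose last/first 5 words coincide.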
-
Choosing vector database: a side-by-side comparison
As others have correctly pointed out, making a vector search or recommendation application requires a lot more than similarity alone. We have seen the HNSW become commoditised, and the real value lies elsewhere. Just because a database has vector functionality doesn't mean it will actually service anything beyond "hello world" type semantic search applications. IMHO these have questionable value, much like the simple Q&A RAG applications that have proliferated.
The elephant in the room with these systems is that if you are relying on machine learning models to produce the vectors, you are going to need to invest heavily in the ML components of the system. Domain-specific models are a must if you want to be a serious contender to an existing search system, and all the usual considerations still apply regarding frequent retraining and monitoring of the models. Currently this is left as an exercise to the reader - and a very large one at that.
We (https://github.com/marqo-ai/marqo, I am a co-founder) are investing heavily in making the ML production-worthy and in continuous learning from feedback as part of the system. There are lots of other things to think about: how you represent documents with multiple vectors, multimodality, late interactions, the interplay between embedding quality and HNSW graph quality (i.e. recall), and much more.
-
Show HN: Marqo – Vectorless Vector Search
-
AI for AWS Documentation
Marqo provides automatic, configurable chunking (for example, with overlap) and lets you bring your own model or choose from a wide range of open-source models. I think e5-large would be a good one to try. https://github.com/marqo-ai/marqo
-
[N] Open-source search engine Meilisearch launches vector search
Marqo has a similar API to Meilisearch's standard API but uses vector search in the background: https://github.com/marqo-ai/marqo
-
Ask HN: Which Vector Database do you recommend for LLM applications?
Have you tried Marqo? Check the repo: https://github.com/marqo-ai/marqo
faiss
-
Haystack DB – 10x faster than FAISS with binary embeddings by default
There are also FAISS binary indexes[0], so it'd be great to compare binary index vs binary index. Otherwise it seems a little misleading to say it is a FAISS vs not FAISS comparison, since really it would be a binary index vs not binary index comparison. I'm not too familiar with binary indexes, so if there's a significant difference between the types of binary index then it'd be great to explain what that is too.
[0] https://github.com/facebookresearch/faiss/wiki/Binary-indexe...
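The speed of binary indexes comes from comparing bit-packed vectors with Hamming distance instead of float arithmetic. A minimal brute-force sketch in pure Python (illustrative only, not FAISS's implementation):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two bit-packed binary vectors."""
    return bin(a ^ b).count("1")

def binary_search(query: int, database: list[int], k: int = 3) -> list[int]:
    """Exact k-nearest-neighbor search under Hamming distance,
    returning the indices of the k closest codes."""
    ranked = sorted(range(len(database)),
                    key=lambda i: hamming(query, database[i]))
    return ranked[:k]

# 8-bit toy "embeddings"; in practice these would be 128- to 1024-bit
# signatures produced by binarizing float embeddings.
db = [0b10110010, 0b10110011, 0b01001101, 0b11110000]
print(binary_search(0b10110010, db, k=2))  # → [0, 1]
```

A binary-vs-binary comparison, as the comment suggests, would pit this kind of search against FAISS's `IndexBinaryFlat` or `IndexBinaryIVF` rather than against a float index.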
-
Show HN: Chromem-go – Embeddable vector database for Go
Or just use FAISS https://github.com/facebookresearch/faiss
- OpenAI: New embedding models and API updates
-
You Shouldn't Invest in Vector Databases?
You can try txtai (https://github.com/neuml/txtai) with a Faiss backend.
This Faiss wiki article might help (https://github.com/facebookresearch/faiss/wiki/Indexing-1G-v...).
For example, a partial Faiss configuration with 4-bit PQ quantization, using only 5% of the data to train an IVF index, is shown below.

`faiss={"components": "IVF,PQ384x4fs", "sample": 0.05}`
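Unpacking that configuration as a full txtai-style settings dict (a sketch; the embedding model path is a hypothetical choice, not from the original comment):

```python
# txtai Embeddings configuration (sketch). The "components" string is a
# Faiss index-factory description:
config = {
    "path": "sentence-transformers/all-MiniLM-L6-v2",  # hypothetical model
    "backend": "faiss",
    "faiss": {
        # IVF: inverted-file coarse partitioning, so a query scans only a
        #      few partitions instead of the whole dataset.
        # PQ384x4fs: product quantization with 384 sub-vectors at 4 bits
        #      each, using the SIMD "fast scan" variant.
        "components": "IVF,PQ384x4fs",
        # Train the IVF partitioning on a 5% sample of the data, which
        # keeps index-build time manageable at the 1G-vector scale the
        # linked wiki article discusses.
        "sample": 0.05,
    },
}
```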
-
Approximate Nearest Neighbors Oh Yeah
If you want to experiment with vector stores, you can do that locally with something like faiss which has good platform support: https://github.com/facebookresearch/faiss
Doing full retrieval-augmented generation (RAG) and getting LLMs to interpret the results has more steps, but you get a lot of flexibility, and there's no standard best practice. When you use a vector DB you get the most similar texts back (or an index integer in the case of faiss); you then feed those to an LLM as part of a normal prompt.
The codifier of the RAG workflow is LangChain, but their demo is substantially more complex and harder to use than even a homegrown implementation: https://news.ycombinator.com/item?id=36725982
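The retrieve-then-prompt loop described above can be sketched without any framework; here toy 3-d vectors stand in for real model embeddings, and the prompt would be sent to whatever LLM you use:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Return the k most similar texts, like a vector DB would
    (faiss would instead return the row indices)."""
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return [docs[i] for i in ranked[:k]]

def build_prompt(question, contexts):
    """Feed the retrieved texts to the LLM as a normal prompt."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using this context:\n{context_block}\n\nQuestion: {question}"

# Toy "embeddings" in place of a real embedding model's output:
docs = ["faiss is a similarity search library", "marqo bundles inference"]
doc_vecs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.2]]
prompt = build_prompt("What is faiss?",
                      retrieve([0.9, 0.2, 0.0], doc_vecs, docs, k=1))
```

Swapping the toy pieces for a real embedding model and a faiss index gives the same shape of pipeline.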
-
Can someone please help me with this problem?
According to this documentation page, faiss-gpu is only supported on Linux, not on Windows.
-
Ask HN: Are there any unsolved problems with vector databases
Indexes for vector databases in high dimensions are nowhere near as effective as the 2-d indexes used in GIS or the 1-d B-tree indexes that are commonly used in databases.
Back around 2005 I was interested in similarity search and read a lot of conference proceedings on the topic; I was basically depressed at the state of vector database indexes and felt that, at least for the systems I was prototyping, I was OK with a full scan. Later, in 2013, I had the assignment of getting a search engine for patents using vector embeddings in front of customers, and we got performance we found acceptable with full scan.
My impression today is that the scene is not too different from what it was in 2005, though I can't say I haven't missed anything. That is, you have tradeoffs between faster algorithms that miss some results and slower algorithms that are more correct.
I think it's already a competitive business. You have Pinecone, which had the good fortune of starting before the gold rush. Many established databases are adding vector extensions. I know so many engineering managers who love PostgreSQL, and they're just going to load a vector extension and go. My RSS reader YOShInOn uses SBERT embeddings to cluster and classify text, and certainly More Like This and semantic search are on the agenda; I'd expect it to take about an hour to get https://github.com/facebookresearch/faiss up and working. I could spend more time stuck on some "little" front-end problem, like getting something to look right in Bootstrap, than it would take to get it working.
I can totally believe somebody could make a better vector DB than what's out there, but will it be better enough? A startup going through YC now could spend 2-3 years getting a really good product and finding customers, and that is forever in a world where everybody wants to build AI applications right now.
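The commenter's point about high-dimensional indexes has a well-known geometric root: as dimension grows, distances between random points concentrate, so "nearest" and "farthest" neighbors become hard to tell apart and space-partitioning indexes stop paying off. A small seeded demo (illustrative numbers, not a benchmark):

```python
import math
import random

def relative_contrast(dim, n=200, seed=0):
    """(d_max - d_min) / d_min for Euclidean distances from one random
    query point to n random points in the unit cube [0, 1]^dim.
    High contrast means neighbors are easy to distinguish."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum((a - b) ** 2
                                   for a, b in zip(query, point))))
    return (max(dists) - min(dists)) / min(dists)

# Contrast collapses as dimension grows: in 2-d the nearest point is far
# closer than the farthest, while in 512-d all distances look similar,
# which is why GIS-style 2-d indexes don't carry over.
print(relative_contrast(2), relative_contrast(512))
```

This concentration effect is one reason the practical choice is between approximate indexes that miss some results and the exact full scan the commenter found acceptable.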
-
Code Search with Vector Embeddings: A Transformer's Approach
As the size of the codebase grows, storing and searching through embeddings in memory becomes inefficient. This is where vector databases come into play. Tools like Milvus, Faiss, and others are designed to handle large-scale vector data and provide efficient similarity search capabilities. I've written about how to use SQLite to store vector embeddings as well. By integrating a vector database, you can scale your code search tool to handle much larger codebases without compromising on search speed.
-
Unum: Vector Search engine in a single file
But FAISS has their own version ("FastScan") https://github.com/facebookresearch/faiss/wiki/Fast-accumula...
-
Introduction to Vector Similarity Search
https://github.com/facebookresearch/faiss
What are some alternatives?
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
Milvus - A cloud-native vector database, storage for next generation AI applications
hnswlib - Header-only C++/python library for fast approximate nearest neighbors
qdrant - High-performance, massive-scale vector database for the next generation of AI. Also available in the cloud: https://cloud.qdrant.io/
pgvector - Open-source vector similarity search for Postgres
vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.
marqo - Tensor search for humans. [Moved to: https://github.com/marqo-ai/marqo]