RETRO-pytorch vs faiss

| | RETRO-pytorch | faiss |
|---|---|---|
| Mentions | 2 | 77 |
| Stars | 849 | 31,824 |
| Growth | - | 1.9% |
| Activity | 2.8 | 9.6 |
| Last commit | about 1 year ago | 1 day ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RETRO-pytorch
- [D] Any pre trained retrieval based language models available?
There's a GitHub project that an individual put together based on the RETRO paper. If you check out the issues list, there is some info on work toward a pretrained model.
- [D] Is there an open-source implementation of the Retrieval-Enhanced Transformer (RETRO)?
I'll give it a shot: https://github.com/lucidrains/RETRO-pytorch 👍
faiss
- Langchain — RAG — Retrieval Augmented Generation
FAISS Documentation
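As a rough sketch of what that combination can look like (the import paths and the embedding model are assumptions that vary by LangChain version, not something taken from the linked post):

```python
# Minimal LangChain + FAISS retrieval sketch; model name and import paths are assumptions.
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

texts = [
    "FAISS is a library for efficient similarity search.",
    "RETRO augments a language model with retrieved chunks.",
]

# Build an in-memory FAISS vector store and retrieve the closest passage
store = FAISS.from_texts(texts, embeddings)
docs = store.similarity_search("How does retrieval-augmented generation find context?", k=1)
print(docs[0].page_content)
```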
- IndexIVFFlat and IndexIVFPQ
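For context, a minimal sketch of the two index types this mention refers to (dimensions, list counts, and data are illustrative assumptions): IndexIVFFlat keeps full vectors inside the inverted lists, while IndexIVFPQ compresses them with product quantization.

```python
import numpy as np
import faiss

d = 128
xb = np.random.rand(10_000, d).astype("float32")

# IndexIVFFlat: coarse quantizer partitions the space; probed lists use exact distances
quantizer_flat = faiss.IndexFlatL2(d)
ivf_flat = faiss.IndexIVFFlat(quantizer_flat, d, 100)   # 100 inverted lists
ivf_flat.train(xb)
ivf_flat.add(xb)

# IndexIVFPQ: same partitioning, but vectors stored as product-quantized codes
quantizer_pq = faiss.IndexFlatL2(d)
ivf_pq = faiss.IndexIVFPQ(quantizer_pq, d, 100, 16, 8)  # 16 sub-quantizers, 8 bits each
ivf_pq.train(xb)
ivf_pq.add(xb)

ivf_flat.nprobe = ivf_pq.nprobe = 10                    # lists visited per query
D1, I1 = ivf_flat.search(xb[:5], 5)
D2, I2 = ivf_pq.search(xb[:5], 5)
```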
- Embeddings index format for open data access
Each file can be read without txtai. JSON, MessagePack and Faiss all have libraries in multiple programming languages.
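A minimal sketch of what "readable without txtai" could look like, assuming the index archive has been extracted and its ANN component is a Faiss file; the directory layout and file names below are assumptions, not the documented format:

```python
import json
import faiss

# File names here are assumptions for illustration; inspect the actual archive first.
with open("index/config.json") as f:
    config = json.load(f)                     # configuration stored as plain JSON

ann = faiss.read_index("index/embeddings")    # ANN component opens with the stock Faiss API
print(config.get("backend"), ann.ntotal)
```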
- Is My Approach to Vectorizing and Storing 1.5T Tokens Reasonable?
I'm planning to index and store 1.5 trillion tokens using Faiss and would love some feedback on my approach:
1. *Partitioning:* I'm thinking of using distributed k-means and inverted multi-index quantizers for efficient data partitioning.
2. *On-Disk Storage:* Due to the scale, I'm storing everything on disk using a Compressed Sparse Row format.
3. *Distributed Search:* I plan to implement a client-server model with multiple servers to handle search operations.
Does this approach sound feasible, or am I overlooking something crucial? Any advice or suggestions?
I'm mostly working off of this article: [Indexing 1T Vectors](https://github.com/facebookresearch/faiss/wiki/Indexing-1T-vectors). I think the data is too big for AutoFaiss, but I can use that for experiments.
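Not the poster's pipeline, but a minimal sketch of the on-disk IVF+PQ workflow that wiki article describes, scaled down so it runs locally; the dimension, shard sizes, nlist, and file names are all illustrative assumptions:

```python
import numpy as np
import faiss
from faiss.contrib.ondisk import merge_ondisk

d = 768        # assumed embedding dimension
nlist = 4096   # number of IVF partitions (would be far larger at trillion-token scale)

# 1. Train the coarse quantizer and PQ codec on a sample of the data
xt = np.random.rand(100_000, d).astype("float32")        # stand-in for a training sample
index = faiss.index_factory(d, f"IVF{nlist},PQ64")
index.train(xt)
faiss.write_index(index, "trained.index")

# 2. Add each shard of vectors to a copy of the trained index and save it
for shard_id in range(4):
    xb = np.random.rand(250_000, d).astype("float32")    # stand-in for one shard
    ids = np.arange(shard_id * 250_000, (shard_id + 1) * 250_000)
    block = faiss.read_index("trained.index")
    block.add_with_ids(xb, ids)
    faiss.write_index(block, f"block_{shard_id}.index")

# 3. Merge the shards' inverted lists into a single on-disk file
index = faiss.read_index("trained.index")
merge_ondisk(index, [f"block_{i}.index" for i in range(4)], "merged.ivfdata")
faiss.write_index(index, "populated.index")

# 4. Search: memory-map the populated index and query it
index = faiss.read_index("populated.index", faiss.IO_FLAG_MMAP)
index.nprobe = 16
D, I = index.search(np.random.rand(5, d).astype("float32"), 10)
```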
- Introducing vectorlite: A Fast and Tunable Vector Search Extension for SQLite
Sqlite-vss uses faiss to do vector searching. It is a great library open-sourced by Meta (Facebook) and provides a wide range of algorithms for vector search. However, it is optimized for batch operations over a large dataset, making it slow for a single-vector query and for incremental indexing on CPU. Meanwhile, SQLite's extensibility model (the virtual table mechanism) doesn't provide APIs for batch operations and only exposes APIs to insert/update/delete a single row at a time. Besides, sqlite-vss only supports single-vector search, which faiss is not good at. As a result, sqlite-vss can't fully exploit faiss's performance.
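To illustrate the batch-versus-single-query point, here is a standalone sketch with assumed sizes and a flat index (not vectorlite or sqlite-vss code): Faiss accepts a whole matrix of queries in one search call, whereas issuing one query per call, as a SQLite virtual table must, pays the per-call overhead every time.

```python
import time
import numpy as np
import faiss

d, nb, nq = 128, 50_000, 1_000
xb = np.random.rand(nb, d).astype("float32")
xq = np.random.rand(nq, d).astype("float32")

index = faiss.IndexFlatL2(d)   # assumed index type, chosen only for illustration
index.add(xb)

# Batch: all queries in a single call
t0 = time.time()
D, I = index.search(xq, 10)
print("batch search:", time.time() - t0)

# One-at-a-time: one call per query, as a row-oriented extension would issue them
t0 = time.time()
for q in xq:
    index.search(q.reshape(1, -1), 10)
print("single-vector queries:", time.time() - t0)
```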
- OpenAI api RAG system with Qdrant
You can swap out any of the components in this project with something else. You could use Faiss instead of Qdrant, use OpenAI models for everything (embeddings/chat completion), or use open models.
- Haystack DB – 10x faster than FAISS with binary embeddings by default
There are also FAISS binary indexes[0], so it'd be great to compare binary index vs binary index. Otherwise it seems a little misleading to say it is a FAISS vs not FAISS comparison, since really it would be a binary index vs not binary index comparison. I'm not too familiar with binary indexes, so if there's a significant difference between the types of binary index then it'd be great to explain what that is too.
[0] https://github.com/facebookresearch/faiss/wiki/Binary-indexe...
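For reference, a minimal sketch of one of those Faiss binary indexes (the dimension and data are assumptions): vectors are packed into uint8 bytes and compared by Hamming distance.

```python
import numpy as np
import faiss

d = 256                                   # dimension in bits (must be a multiple of 8)
nb = 10_000
rng = np.random.default_rng(0)

# Binary vectors are stored packed: d/8 uint8 bytes per vector
xb = rng.integers(0, 256, size=(nb, d // 8), dtype=np.uint8)
xq = rng.integers(0, 256, size=(5, d // 8), dtype=np.uint8)

index = faiss.IndexBinaryFlat(d)          # exhaustive Hamming-distance search
index.add(xb)
D, I = index.search(xq, 10)               # D holds Hamming distances
```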
- Show HN: Chromem-go – Embeddable vector database for Go
Or just use FAISS https://github.com/facebookresearch/faiss
- OpenAI: New embedding models and API updates
- You Shouldn't Invest in Vector Databases?
You can try txtai (https://github.com/neuml/txtai) with a Faiss backend.
This Faiss wiki article might help (https://github.com/facebookresearch/faiss/wiki/Indexing-1G-v...).
For example, a partial Faiss configuration with 4-bit PQ quantization and only using 5% of the data to train an IVF index is shown below.
faiss={"components": "IVF,PQ384x4fs", "sample": 0.05}
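A rough sketch of how that configuration might be wired into txtai (the embedding model and documents are placeholder assumptions; the faiss options mirror the snippet above):

```python
from txtai.embeddings import Embeddings

# Placeholder model and data; the "faiss" options follow the configuration quoted above
embeddings = Embeddings({
    "path": "sentence-transformers/all-MiniLM-L6-v2",
    "backend": "faiss",
    "faiss": {"components": "IVF,PQ384x4fs", "sample": 0.05},
})

# Enough documents that the IVF/PQ training sample is meaningful
documents = [(i, f"document number {i} about vector search", None) for i in range(5000)]
embeddings.index(documents)
print(embeddings.search("vector search", 1))
```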
What are some alternatives?
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
TorchPQ - Approximate nearest neighbor search with product quantization on GPU in pytorch and cuda
Milvus - A cloud-native vector database, storage for next generation AI applications
deepmind-research - This repository contains implementations and illustrative code to accompany DeepMind publications
hnswlib - Header-only C++/python library for fast approximate nearest neighbors
retomaton - PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022)
pgvector - Open-source vector similarity search for Postgres
SHREC2023-ANIMAR - Source codes of team TikTorch (1st place solution) for track 2 and 3 of the SHREC2023 Challenge
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
RetGen
qdrant - Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/