| | yake | faiss |
|---|---|---|
| Mentions | 5 | 77 |
| Stars | 1,656 | 31,824 |
| Growth | 0.8% | 2.2% |
| Activity | 3.0 | 9.6 |
| Last Commit | 11 months ago | 2 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
yake
- Show HN: Whisper.cpp and YAKE to Analyse Voice Reflections [iOS]
- Simplest keyword extractor
Personally I prefer using YAKE.
- What method should be used to tag specific texts, when the dataset is too small for training a model?
- Is there any YAKE (yet another keyword extractor) implementation in R? Unsupervised Approach for Automatic Keyword Extraction using Text Statistical Features.
- Alternate approaches to TF-IDF?
You can look for usage here: https://github.com/LIAAD/yake and there is also a reference section with publications for more details of how this works. From what I remember, each keyphrase candidate is assigned an aggregated score based on various features: position in the text, casing, frequency, surrounding text frequency...
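As a rough illustration of that usage, here is a minimal sketch with the LIAAD/yake Python package; the sample text and parameter values are illustrative, not recommendations.

```python
# Minimal sketch of keyphrase extraction with YAKE (pip install yake).
# Parameter names follow the LIAAD/yake README; values are illustrative.
import yake

text = (
    "Sources tell us that Google is acquiring Kaggle, a platform that hosts "
    "data science and machine learning competitions."
)

extractor = yake.KeywordExtractor(
    lan="en",  # language
    n=3,       # maximum n-gram size of a candidate keyphrase
    top=5,     # number of keyphrases to return
)

# Returns (keyphrase, score) pairs; lower scores indicate more relevant candidates.
for keyword, score in extractor.extract_keywords(text):
    print(f"{score:.4f}  {keyword}")
```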
faiss
- Langchain — RAG — Retrieval Augmented Generation
FAISS Documentation
- IndexIVFFlat and IndexIVFPQ
Citations:
[1] https://www.pinecone.io/learn/series/faiss/faiss-tutorial/
[2] https://www.pinecone.io/learn/series/faiss/product-quantization/
[3] https://www.pinecone.io/learn/series/faiss/composite-indexes/
[4] https://github.com/facebookresearch/faiss/wiki/Faiss-indexes/9df19586b3a75e4cb1c2fb915f2c695755a599b8
[5] https://faiss.ai/cpp_api/struct/structfaiss_1_1IndexIVFFlat.html
[6] https://pub.towardsai.net/unlocking-the-power-of-efficient-vector-search-in-rag-applications-c2e3a0c551d5?gi=71a82e3ea10e
[7] https://www.pingcap.com/article/mastering-faiss-vector-database-a-beginners-handbook/
[8] https://wangzwhu.github.io/home/file/acmmm-t-part3-ann.pdf
[9] https://github.com/alonsoir/ubiquitous-carnival/blob/main/contextual-data-faiss-IndexIVFPQ.py
[10] https://github.com/alonsoir/ubiquitous-carnival/blob/main/contextual-data-faiss-indexivfflat.py
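To make the difference between the two index types concrete, here is a minimal sketch along the lines of the Faiss tutorials cited above; the random data and the parameters (nlist, m, nbits, nprobe) are illustrative only, not tuned recommendations.

```python
# Minimal sketch contrasting IndexIVFFlat (full vectors per inverted list)
# with IndexIVFPQ (compressed PQ codes per inverted list). Illustrative only.
import faiss
import numpy as np

d, nb, nq = 64, 10_000, 5
rng = np.random.default_rng(0)
xb = rng.random((nb, d), dtype="float32")  # database vectors
xq = rng.random((nq, d), dtype="float32")  # query vectors

nlist = 100  # number of inverted lists (coarse clusters)

# IVF + flat storage: exact distances within the probed lists, larger memory.
quantizer_flat = faiss.IndexFlatL2(d)
ivf_flat = faiss.IndexIVFFlat(quantizer_flat, d, nlist)
ivf_flat.train(xb)
ivf_flat.add(xb)

# IVF + product quantization: vectors compressed to m sub-codes of nbits each.
m, nbits = 8, 8
quantizer_pq = faiss.IndexFlatL2(d)
ivf_pq = faiss.IndexIVFPQ(quantizer_pq, d, nlist, m, nbits)
ivf_pq.train(xb)
ivf_pq.add(xb)

for index in (ivf_flat, ivf_pq):
    index.nprobe = 10           # how many inverted lists to visit per query
    D, I = index.search(xq, 5)  # distances and ids of the 5 nearest neighbors
    print(type(index).__name__, I[0])
```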
- Embeddings index format for open data access
Each file can be read without txtai. JSON, MessagePack and Faiss all have libraries in multiple programming languages.
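Since the point of the format is that txtai itself is not required, here is a hedged sketch of reading such files directly; the directory layout and file names used below ("index/config.json", "index/embeddings") are assumptions for illustration, not a verified contract.

```python
# Hedged sketch: read a saved embeddings index without txtai. The paths below
# ("index/config.json", "index/embeddings") are assumptions for illustration;
# adjust them to the actual archive layout.
import json
import faiss

with open("index/config.json") as f:        # configuration stored as plain JSON
    config = json.load(f)
print(sorted(config))                        # inspect the configuration keys

ann = faiss.read_index("index/embeddings")   # the ANN component is a standard Faiss file
print(ann.ntotal, "vectors in the Faiss index")
```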
- Is My Approach to Vectorizing and Storing 1.5T Tokens Reasonable?
I'm planning to index and store 1.5 trillion tokens using Faiss and would love some feedback on my approach:
1. *Partitioning:* I'm thinking of using distributed k-means and inverted multi-index quantizers for efficient data partitioning.
2. *On-Disk Storage:* Due to the scale, I'm storing everything on disk using a Compressed Sparse Row format.
3. *Distributed Search:* I plan to implement a client-server model with multiple servers to handle search operations.
Does this approach sound feasible, or am I overlooking something crucial? Any advice or suggestions?
I'm mostly working off of this article: [Indexing 1T Vectors](https://github.com/facebookresearch/faiss/wiki/Indexing-1T-vectors). I think the data is too big for AutoFaiss, but I can use that for experiments.
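For reference, here is a single-machine sketch covering only the IVF partitioning and on-disk persistence pieces; the distributed k-means training and the client-server search layer from the question are out of scope, and all sizes and parameters are illustrative.

```python
# Single-machine sketch: train an IVF+PQ index on a sample, add one shard of
# vectors, and persist it to disk. Sizes and the factory string are illustrative;
# the distributed training/search layers are not shown.
import faiss
import numpy as np

d = 128
rng = np.random.default_rng(0)
train = rng.random((100_000, d), dtype="float32")  # sample used to train the quantizers
shard = rng.random((500_000, d), dtype="float32")  # stand-in for one shard of the corpus

index = faiss.index_factory(d, "IVF4096,PQ32")     # 4096 inverted lists, 32-byte PQ codes
index.train(train)
index.add(shard)

# Persist the shard; a search server can later load it with faiss.read_index(),
# optionally memory-mapped via faiss.IO_FLAG_MMAP to limit RAM usage.
faiss.write_index(index, "shard_000.faiss")
```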
- Introducing vectorlite: A Fast and Tunable Vector Search Extension for SQLite
Sqlite-vss uses faiss to do vector searching. Faiss is a great library open-sourced by Meta (Facebook) that provides a wide range of algorithms for vector search. However, it is optimized for batch operations over large datasets, making it slow for single-vector queries and incremental indexing on CPU. Meanwhile, SQLite's extensibility model (called virtual tables) doesn't provide APIs for batch operations and only exposes APIs to insert/update/delete a single row at a time. Besides, sqlite-vss only supports single-vector search, which faiss is not good at. As a result, sqlite-vss can't fully exploit faiss's performance.
- OpenAI api RAG system with Qdrant
You can swap out any of the components in this project with something else. You could use Faiss instead of Qdrant, use OpenAI models for everything (embeddings/chat completion), or use open models.
- Haystack DB – 10x faster than FAISS with binary embeddings by default
There are also FAISS binary indexes[0], so it'd be great to compare binary index vs binary index. Otherwise it seems a little misleading to say it is a FAISS vs not FAISS comparison, since really it would be a binary index vs not binary index comparison. I'm not too familiar with binary indexes, so if there's a significant difference between the types of binary index then it'd be great to explain what that is too.
[0] https://github.com/facebookresearch/faiss/wiki/Binary-indexe...
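For anyone in the same position, here is a minimal, hedged sketch of what a FAISS binary index looks like in Python: vectors are bit-packed uint8 arrays and search returns Hamming distances. The dimensions and data below are illustrative.

```python
# Minimal sketch of a FAISS binary index: vectors are bit-packed uint8 arrays,
# search returns Hamming distances. Dimensions and data are illustrative.
import faiss
import numpy as np

d = 256                                   # dimension in bits (must be a multiple of 8)
nb, nq = 10_000, 3
rng = np.random.default_rng(0)
xb = rng.integers(0, 256, size=(nb, d // 8), dtype="uint8")  # packed database vectors
xq = rng.integers(0, 256, size=(nq, d // 8), dtype="uint8")  # packed query vectors

index = faiss.IndexBinaryFlat(d)          # exhaustive Hamming-distance search
index.add(xb)
D, I = index.search(xq, 5)                # D holds Hamming distances, I the ids
print(I[0], D[0])
```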
- Show HN: Chromem-go – Embeddable vector database for Go
Or just use FAISS https://github.com/facebookresearch/faiss
- OpenAI: New embedding models and API updates
- You Shouldn't Invest in Vector Databases?
You can try txtai (https://github.com/neuml/txtai) with a Faiss backend.
This Faiss wiki article might help (https://github.com/facebookresearch/faiss/wiki/Indexing-1G-v...).
For example, a partial Faiss configuration with 4-bit PQ quantization and only using 5% of the data to train an IVF index is shown below.
faiss={"components": "IVF,PQ384x4fs", "sample": 0.05}
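As a hedged sketch, that setting can be passed straight into a txtai Embeddings configuration; the model path and the synthetic documents below are illustrative assumptions, and the corpus has to be large enough for the IVF and PQ stages to train.

```python
# Hedged sketch: plug the Faiss settings above into a txtai Embeddings config.
# The model path and synthetic documents are illustrative; the IVF/PQ stages
# need a reasonably sized corpus to train.
from txtai.embeddings import Embeddings

embeddings = Embeddings({
    "path": "sentence-transformers/all-MiniLM-L6-v2",         # illustrative model
    "faiss": {"components": "IVF,PQ384x4fs", "sample": 0.05}  # settings from the example above
})

docs = [f"document {i} about vector search and embeddings" for i in range(5000)]
embeddings.index((uid, text, None) for uid, text in enumerate(docs))

print(embeddings.search("approximate nearest neighbor search", 3))
```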