sentence-transformers
faiss
| | sentence-transformers | faiss |
|---|---|---|
| Mentions | 45 | 71 |
| Stars | 13,793 | 28,202 |
| Growth | 4.5% | 4.4% |
| Activity | 9.2 | 9.4 |
| Latest commit | 2 days ago | 2 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sentence-transformers
- External vectorization
txtai is an open-source-first system. Given its own open-source roots, like-minded projects such as sentence-transformers are prioritized during development. But that doesn't mean txtai can't work with Embeddings API services.
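For context, a minimal sketch of that external-vectorization path using txtai's `method="external"` configuration; the stub `transform` function stands in for whatever hosted Embeddings API would actually be called, and the 768-dimensional vector size is an assumption:

```python
import numpy as np
from txtai.embeddings import Embeddings

def transform(inputs):
    # Stand-in for a call to a hosted Embeddings API service.
    # Must return one vector per input text; random 768-dim vectors
    # are used here only so the sketch runs without network access.
    return np.random.rand(len(inputs), 768).astype(np.float32)

# method="external" delegates vectorization to the transform function
embeddings = Embeddings({"method": "external", "transform": transform})
embeddings.index([(i, text, None) for i, text in enumerate(["first doc", "second doc"])])
print(embeddings.search("first", 1))
```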
- [D] Looking for a better multilingual embedding model
Ok great. My use case is not very specific, but rather general. I am looking for a model that can perform asymmetric semantic search for the languages I mentioned earlier (Urdu, Persian, Arabic, etc.). I have also looked into the sentence-transformers training documentation. Do you think it would be a good idea to use the XNLI dataset for fine-tuning, or can you suggest a better dataset? Furthermore, I am not sure fine-tuning is suitable for my task: because my use case is general, I could use an already-trained model.
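As a starting point, here's a minimal sketch with an off-the-shelf multilingual checkpoint; the model choice (paraphrase-multilingual-MiniLM-L12-v2, which covers Arabic, Persian and Urdu among ~50 languages) is a suggestion, and note it is a general-purpose model rather than one tuned specifically for asymmetric retrieval:

```python
from sentence_transformers import SentenceTransformer, util

# Multilingual checkpoint covering ~50 languages, incl. Arabic/Persian/Urdu
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "weather in Karachi"   # short query (the asymmetric case)
passages = [                   # longer candidate documents
    "The forecast for Karachi predicts high humidity and rain this week.",
    "Lahore's food street is famous for its traditional cuisine.",
]

q = model.encode(query, convert_to_tensor=True)
p = model.encode(passages, convert_to_tensor=True)

# Rank passages by cosine similarity to the query
print(util.cos_sim(q, p))
```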
- Best pathway for Domain Adaptation with Sentence Transformers?
- Syntactic and Semantic surprisal using an LLM
The task you are looking for is semantic textual similarity. There are a few models and datasets out there that can do this. I'd probably start with the SemEval-2017 Task 1 task description and competition entries here and then work outward from there (using something like Semantic Scholar or Papers With Code to find newer state-of-the-art works that cite these models if needed). For what it's worth, you might find that Sentence-BERT (SBERT) gives good vectors for cosine similarity comparison out of the box for this task.
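A minimal sketch of that out-of-the-box SBERT approach (the all-MiniLM-L6-v2 checkpoint is an illustrative choice, not one named in the comment):

```python
from sentence_transformers import SentenceTransformer, util

# Any STS-tuned SBERT checkpoint works; this one is small and fast
model = SentenceTransformer("all-MiniLM-L6-v2")

a = model.encode("A man is playing a guitar", convert_to_tensor=True)
b = model.encode("Someone is strumming an instrument", convert_to_tensor=True)

# Cosine similarity between the two sentence vectors as the STS score
print(util.cos_sim(a, b).item())
```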
- Mean pooling in BERT
Check out the sentence-transformers implementation. If I'm not missing anything, they don't exclude CLS when the pooling strategy is set to 'mean'.
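A short sketch of why CLS gets averaged in: the standard mean-pooling recipe sums over everything the attention mask marks as real, and the mask is 1 for CLS. The checkpoint here is illustrative:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer(["Mean pooling example"], padding=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state   # (batch, seq, dim)

# attention_mask is 1 for CLS, SEP and real tokens, 0 only for padding,
# so 'mean' pooling averages over CLS as well
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1)
```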
- I Built an AI Search Engine that can find exact timestamps for anything on Youtube using OpenAI Whisper
Break the transcript up into shorter segments and convert each segment to a 768-dimensional vector, a process known as embedding, using our second ML model: UKP Lab's sentence-transformers BERT model.
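A sketch of that embedding step with sentence-transformers; the all-mpnet-base-v2 checkpoint is an assumption, chosen because it outputs 768-dimensional vectors as described above:

```python
from sentence_transformers import SentenceTransformer

# all-mpnet-base-v2 outputs 768-dimensional vectors; the exact
# checkpoint used by the project above is an assumption
model = SentenceTransformer("all-mpnet-base-v2")

segments = [
    "welcome back to the channel, today we're looking at",
    "the first step is to install the dependencies",
]
vectors = model.encode(segments)   # shape: (len(segments), 768)
```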
- Seeking advice on improving NLP search results
Not sure what kind of texts you have, but these models have a max sequence limit of 512 tokens (roughly 350 words). If your texts are longer than that, consider splitting them into chunks or creating a summary and taking an embedding of that. Some clustering algorithm may be the way to go here; here's a bunch of examples. I use agglomerative clustering for my use case.
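A rough sketch of the chunk-then-cluster idea; the word-based splitter is a crude stand-in for proper token counting, and the distance threshold is an arbitrary assumption:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer("all-MiniLM-L6-v2")

# Split long texts into chunks that stay under the ~512-token limit
def chunk(text, size=300):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

texts = ["first long document text here", "second long document text here"]
chunks = [c for t in texts for c in chunk(t)]
embeddings = model.encode(chunks)

# Agglomerative clustering; distance_threshold controls how aggressively
# chunks are merged and would need tuning on real data
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0
).fit_predict(embeddings)
```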
- Dev Diary #12 - Finetune model
https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/data_augmentation (Augmented Encoding)
- [R] Customize size of Bio-BERT pre-trained embeddings
For a vector representation you can take the mean and then apply PCA to get the size that you want, but if you have time, use sentence-transformers to train a vector representation instead.
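A sketch of the mean-then-PCA route; the BioBERT checkpoint name (dmis-lab/biobert-base-cased-v1.1) and the target size are assumptions:

```python
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

name = "dmis-lab/biobert-base-cased-v1.1"   # checkpoint name is an assumption
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(texts):
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state         # (batch, seq, 768)
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # mean pooling

vectors = embed(["protein folding", "gene expression", "cell division", "enzyme kinetics"])

# PCA down to the target size; n_components cannot exceed the number of
# samples, so fit on a reasonably large corpus in practice
reduced = PCA(n_components=2).fit_transform(vectors)
```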
- SentenceTransformer producing different sentence embedding results in Docker
faiss
- Haystack DB – 10x faster than FAISS with binary embeddings by default
There are also FAISS binary indexes[0], so it'd be great to compare binary index vs binary index. Otherwise it seems a little misleading to say it is a FAISS vs not FAISS comparison, since really it would be a binary index vs not binary index comparison. I'm not too familiar with binary indexes, so if there's a significant difference between the types of binary index then it'd be great to explain what that is too.
[0] https://github.com/facebookresearch/faiss/wiki/Binary-indexe...
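For reference, a minimal FAISS binary-index example: vectors are bit-packed into uint8 arrays and searched by Hamming distance (sizes are arbitrary):

```python
import numpy as np
import faiss

d = 256                        # dimension in bits; must be a multiple of 8
rng = np.random.default_rng(0)

# Binary vectors are packed 8 bits per byte -> d // 8 uint8 columns
xb = rng.integers(0, 256, size=(10000, d // 8), dtype=np.uint8)
xq = rng.integers(0, 256, size=(5, d // 8), dtype=np.uint8)

index = faiss.IndexBinaryFlat(d)    # exact Hamming-distance search
index.add(xb)
distances, ids = index.search(xq, 10)
```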
- Show HN: Chromem-go – Embeddable vector database for Go
Or just use FAISS https://github.com/facebookresearch/faiss
- OpenAI: New embedding models and API updates
- You Shouldn't Invest in Vector Databases?
You can try txtai (https://github.com/neuml/txtai) with a Faiss backend.
This Faiss wiki article might help (https://github.com/facebookresearch/faiss/wiki/Indexing-1G-v...).
For example, a partial Faiss configuration with 4-bit PQ quantization that uses only 5% of the data to train an IVF index is shown below.
faiss={"components": "IVF,PQ384x4fs", "sample": 0.05}
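That partial configuration would slot into a full txtai Embeddings setup roughly like this; the model path is an assumption (a 768-dimensional model, so the 384 PQ subquantizers divide the dimension evenly), and faiss is txtai's default ANN backend:

```python
from txtai.embeddings import Embeddings

embeddings = Embeddings({
    "path": "sentence-transformers/all-mpnet-base-v2",  # assumed model choice
    "backend": "faiss",
    # 4-bit PQ quantization; train the IVF index on 5% of the data
    "faiss": {"components": "IVF,PQ384x4fs", "sample": 0.05},
})
```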
- Approximate Nearest Neighbors Oh Yeah
If you want to experiment with vector stores, you can do that locally with something like faiss, which has good platform support: https://github.com/facebookresearch/faiss
Doing full retrieval-augmented generation (RAG) and getting LLMs to interpret the results has more steps, but you get a lot of flexibility, and there's no standard best practice. When you use a vector DB you get the most similar texts back (or an integer index in the case of faiss); you then feed those to an LLM like a normal prompt.
The codifier for the RAG workflow is LangChain, but their demo is substantially more complex and harder to use than even a homegrown implementation: https://news.ycombinator.com/item?id=36725982
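A minimal sketch of that retrieve-then-prompt loop with faiss and sentence-transformers (model and data are illustrative):

```python
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "faiss is a library for vector similarity search",
    "LangChain orchestrates RAG pipelines",
    "SBERT produces sentence embeddings",
]
vectors = model.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors == cosine similarity
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

# faiss returns integer row ids; map them back to the source texts,
# then paste those texts into the LLM prompt as context
query = model.encode(["what does faiss do"], normalize_embeddings=True)
_, ids = index.search(query, 2)
context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what does faiss do?"
```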
- Can someone please help me with this problem?
According to this documentation page, faiss-gpu is only supported on Linux, not on Windows.
- Ask HN: Are there any unsolved problems with vector databases
Indexes for vector databases in high dimensions are nowhere near as effective as the 2-d indexes used in GIS or the 1-d B-tree indexes that are commonly used in databases.
Back around 2005 I was interested in similarity search and read a lot of conference proceedings on the topic, and I was basically depressed at the state of vector database indexes; at least for the systems I was prototyping, I was OK with a full scan. Later, in 2013, I had the assignment of getting a search engine for patents using vector embeddings in front of customers, and we got performance we found acceptable with a full scan.
My impression today is that the scene is not too different than it was in 2005, but I can't say I haven't missed anything. That is, you have tradeoffs between faster algorithms that miss some results and slower algorithms that are more correct.
I think it's already a competitive business. You have Pinecone, which had the good fortune of starting before the gold rush. Many established databases are adding vector extensions; I know so many engineering managers who love PostgreSQL, and they're just going to load a vector extension and go. My RSS reader YOShInOn uses SBERT embeddings to cluster and classify text, and certainly More Like This and semantic search are on the agenda. I'd expect it to take about an hour to get https://github.com/facebookresearch/faiss up and working; I could spend more time stuck on some "little" front-end problem, like getting something to look right in Bootstrap, than it would take to get the search working.
I can totally believe somebody could make a better vector DB than what's out there, but will it be better enough? A startup going through YC now could spend 2-3 years getting a really good product and finding customers, and that is forever in a world where everybody wants to build AI applications right now.
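That speed-versus-correctness tradeoff is easy to see directly in faiss; a minimal sketch on synthetic data (sizes and parameters are arbitrary):

```python
import numpy as np
import faiss

d, n = 128, 100000
rng = np.random.default_rng(0)
xb = rng.random((n, d), dtype=np.float32)
xq = rng.random((10, d), dtype=np.float32)

# Exact full scan: always correct, O(n) per query
flat = faiss.IndexFlatL2(d)
flat.add(xb)
_, exact_ids = flat.search(xq, 10)

# HNSW graph index: much faster, but can miss true neighbors
hnsw = faiss.IndexHNSWFlat(d, 32)
hnsw.add(xb)
_, approx_ids = hnsw.search(xq, 10)

# Recall@10 quantifies how much the faster index misses
recall = np.mean([len(set(a) & set(e)) / 10 for a, e in zip(approx_ids, exact_ids)])
print(recall)
```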
- Code Search with Vector Embeddings: A Transformer's Approach
As the size of the codebase grows, storing and searching through embeddings in memory becomes inefficient. This is where vector databases come into play. Tools like Milvus, Faiss, and others are designed to handle large-scale vector data and provide efficient similarity search capabilities. I've written about how to also use SQLite to store vector embeddings. By integrating a vector database, you can scale your code search tool to handle much larger codebases without compromising on search speed.
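As a minimal illustration of the SQLite route (the schema and row ids here are hypothetical, not taken from the post above):

```python
import sqlite3
import numpy as np

# Hypothetical schema: embeddings stored as raw float32 bytes
conn = sqlite3.connect("embeddings.db")
conn.execute("CREATE TABLE IF NOT EXISTS vectors (id TEXT PRIMARY KEY, vec BLOB)")

vec = np.random.rand(768).astype(np.float32)
conn.execute("INSERT OR REPLACE INTO vectors VALUES (?, ?)",
             ("file.py:12", vec.tobytes()))
conn.commit()

# Round-trip: read the blob back into a numpy array for in-memory search
row = conn.execute("SELECT vec FROM vectors WHERE id = ?", ("file.py:12",)).fetchone()
restored = np.frombuffer(row[0], dtype=np.float32)
```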
- Unum: Vector Search engine in a single file
But FAISS has their own version ("FastScan") https://github.com/facebookresearch/faiss/wiki/Fast-accumula...
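A small example of that FastScan variant; IndexPQFastScan takes the dimension, the number of PQ subquantizers, and the code size (4 bits, which is what enables the SIMD fast-scan kernels). Sizes here are arbitrary:

```python
import numpy as np
import faiss

d = 64
rng = np.random.default_rng(0)
xb = rng.random((20000, d), dtype=np.float32)

# 16 subquantizers (d must be divisible by M), 4-bit codes
index = faiss.IndexPQFastScan(d, 16, 4)
index.train(xb)     # PQ codebooks need training data
index.add(xb)
_, ids = index.search(xb[:5], 10)
```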
- Introduction to Vector Similarity Search
https://github.com/facebookresearch/faiss
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
onnx - Open standard for machine learning interoperability
Milvus - A cloud-native vector database, storage for next generation AI applications
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
hnswlib - Header-only C++/python library for fast approximate nearest neighbors
Top2Vec - Top2Vec learns jointly embedded topic, document and word vectors.
pgvector - Open-source vector similarity search for Postgres
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/