Fast_Sentence_Embeddings
magnitude
| | Fast_Sentence_Embeddings | magnitude |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 603 | 1,611 |
| Growth | - | -0.1% |
| Activity | 0.0 | 0.0 |
| Last commit | about 1 year ago | 9 months ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Fast_Sentence_Embeddings
-
The Illustrated Word2Vec
This is a great guide.
Also - despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.
With just a bit of tuning, they're generally as good on many sentence embedding tasks [2], and with good libraries [3] you get something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.
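The classic approach those fast libraries optimize - averaging word vectors into a sentence vector - can be sketched in plain NumPy. The toy vocabulary below is illustrative; in practice the vectors would come from a pretrained model such as GloVe or fastText:

```python
import numpy as np

# Hypothetical toy vocabulary; real vectors would be loaded from a
# pretrained model (GloVe, fastText, etc.).
vocab = {
    "the": np.array([0.1, 0.3, 0.0]),
    "cat": np.array([0.7, 0.1, 0.2]),
    "sat": np.array([0.2, 0.5, 0.4]),
}

def sentence_embedding(tokens, vocab):
    """Average the word vectors of in-vocabulary tokens."""
    vecs = [vocab[t] for t in tokens if t in vocab]
    if not vecs:
        # No known words: fall back to a zero vector of the right size.
        return np.zeros(next(iter(vocab.values())).shape)
    return np.mean(vecs, axis=0)

emb = sentence_embedding(["the", "cat", "sat"], vocab)
```

Libraries like fse add weighting schemes (SIF/uSIF) on top of this averaging, plus heavily optimized batching, which is where the throughput numbers come from.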
When you should use language model embeddings:
- Multilingual tasks. While some embedding models are multilingually aligned (e.g., MUSE [4]), you still need to route each sentence to the correct embedding model file (you need something like langdetect). It's also cumbersome, with one ~400 MB file per language.
Many LM embedding models, by contrast, are multilingually aligned out of the box.
- Tasks that are very context specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the embedding space works best when it separates seemingly related medical terms more sharply.
This calls for models with more embedding dimensions, and heavily favors LM models over classic embedding models.
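Whichever model produces the sentence vectors, retrieval in a RAG-style setup typically comes down to cosine similarity against the corpus embeddings. A minimal sketch, using made-up toy vectors in place of real model output:

```python
import numpy as np

# Toy corpus embeddings (one row per document) and a query embedding;
# in a real system these would come from an embedding model.
doc_embs = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0],
])
query = np.array([1.0, 0.05, 0.0])

def cosine_top_k(query, docs, k=2):
    """Return indices of the k rows most similar to query by cosine similarity."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity per document
    return np.argsort(-sims)[:k]      # indices of the k highest scores

top = cosine_top_k(query, doc_embs)
```

A sharper embedding space - one where related-but-distinct terms sit further apart - directly improves how well this ranking discriminates between near-duplicate candidates.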
1. sbert.net
2. https://collaborate.princeton.edu/en/publications/a-simple-b...
3. https://github.com/oborchers/Fast_Sentence_Embeddings
4. https://github.com/facebookresearch/MUSE
-
You probably shouldn't use OpenAI's embeddings
You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html
Generally MiniLM is a good baseline. For faster models you want this library:
https://github.com/oborchers/Fast_Sentence_Embeddings
For higher-quality results, just take the bigger/slower models in the SentenceTransformers library.
-
[D] Unsupervised document similarity state of the art
Links:
fse: https://github.com/oborchers/Fast_Sentence_Embeddings
Sentence-transformers: https://github.com/oborchers/sentence-transformers
magnitude
-
Text Classification Library for a Quick Baseline
(3) FastText now supports multiple languages [2].
[1] https://github.com/plasticityai/magnitude#pre-converted-magn...
-
Pgvector – vector similarity search for Postgres
Check out Magnitude, we built it to solve that problem: https://github.com/plasticityai/magnitude
It's still loaded from a file, but it makes heavy use of memory-mapping and caching to stay fast without immediately overloading your RAM. And in production scenarios, multiple worker processes can share that memory thanks to the memory mapping.
Disclaimer: I'm the author.
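The memory-mapping idea can be sketched with NumPy's own memmap support. The file name and layout here are illustrative, not Magnitude's actual on-disk format:

```python
import os
import tempfile
import numpy as np

# Write a toy vector file to disk (stand-in for a real embedding file).
path = os.path.join(tempfile.mkdtemp(), "vectors.npy")
np.save(path, np.random.rand(1000, 50).astype(np.float32))

# mmap_mode="r" maps the file read-only and pages it in lazily; because
# the OS page cache backs the mapping, multiple worker processes opening
# the same file share the same physical memory instead of each copying it.
vecs = np.load(path, mmap_mode="r")
row = np.asarray(vecs[42])  # only the touched pages are actually read
```

This is why memory-mapped stores start up almost instantly and keep resident memory proportional to what you actually query, not to the full file size.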
-
Build an Embeddings index from a data source
General language models from pymagnitude
-
Tutorial series on txtai
Backed by the pymagnitude library. Pre-trained word vectors can be installed from the referenced link.
What are some alternatives?
gensim - Topic Modelling for Humans
flashtext - Extract Keywords from sentence or Replace keywords in sentences.
smaller-labse - Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
faiss - A library for efficient similarity search and clustering of dense vectors.
cso-classifier - Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).
pgvector - Open-source vector similarity search for Postgres
kgtk - Knowledge Graph Toolkit
finalfusion-rust - finalfusion embeddings in Rust
RecSys_Course_AT_PoliMi - This is the official repository for the Recommender Systems course at Politecnico di Milano.
Milvus - A cloud-native vector database, storage for next generation AI applications
sentence-transformers - Sentence Embeddings with BERT & XLNet
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows