Fast_Sentence_Embeddings vs semantic-search-tweets

| | Fast_Sentence_Embeddings | semantic-search-tweets |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 603 | 38 |
| Growth | - | - |
| Activity | 0.0 | 1.1 |
| Latest commit | about 1 year ago | about 1 year ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Fast_Sentence_Embeddings
-
The Illustrated Word2Vec
This is a great guide.
Also - despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.
With just a bit of tuning, they're generally as good on many sentence embedding tasks [2], and with good libraries [3] you get something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.
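For a sense of what the classic route looks like, here's a minimal sketch using fse [3], based on its documented README usage; the choice of uSIF pooling and pre-trained GloVe vectors are assumptions, and exact imports may vary by fse version:

```python
import gensim.downloader as api
from fse import IndexedList
from fse.models import uSIF

# Pre-trained word vectors; each sentence embedding is a weighted
# average of word vectors, which is why this is so fast on CPU.
glove = api.load("glove-wiki-gigaword-100")

sentences = [["hello", "world"], ["fast", "sentence", "embeddings"]]
model = uSIF(glove, lang_freq="en")
model.train(IndexedList(sentences))

print(model.sv[0])                # vector for sentence 0
print(model.sv.similarity(0, 1))  # similarity between sentences 0 and 1
```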
When you should use language model embeddings:
- Multilingual tasks. While some classic embedding models are aligned across languages (e.g. MUSE [4]), you still need to route each sentence to the correct per-language embedding file (so you need something like langdetect), and it's cumbersome, with one ~400 MB file per language. Many LM embedding models, by contrast, are multilingual-aligned out of the box (see the sketch after the references below).
- Tasks that are very context-specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the best embedding space is one that strongly separates seemingly related medical terms. That calls for models with more embedding dimensions, which heavily favors LM models over classic embedding models.
1. sbert.net
2. https://collaborate.princeton.edu/en/publications/a-simple-b...
3. https://github.com/oborchers/Fast_Sentence_Embeddings
4. https://github.com/facebookresearch/MUSE
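For the multilingual case above, a hedged sketch of the LM route: the model name is one example from the SentenceTransformers pretrained collection, and the point is that one model covers many languages with no per-language routing:

```python
from sentence_transformers import SentenceTransformer, util

# One multilingual model instead of one ~400 MB file per language.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Same meaning in three languages, embedded into one shared space.
emb = model.encode(
    [
        "The cat sits on the mat",
        "Le chat est assis sur le tapis",
        "Die Katze sitzt auf der Matte",
    ],
    convert_to_tensor=True,
)
print(util.cos_sim(emb[0], emb[1]))  # high cross-lingual similarity
print(util.cos_sim(emb[0], emb[2]))
```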
-
You probably shouldn't use OpenAI's embeddings
You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html
Generally MiniLM is a good baseline. For faster models you want this library:
https://github.com/oborchers/Fast_Sentence_Embeddings
For higher-quality ones, just take the bigger/slower models in the SentenceTransformers library.
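A minimal sketch of the MiniLM baseline via SentenceTransformers (the model name is one of the pretrained models listed at the link above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, fast baseline

emb = model.encode(
    ["How do I bake bread?", "What is the recipe for bread?"],
    convert_to_tensor=True,
)
print(util.cos_sim(emb[0], emb[1]))  # semantic similarity of the pair
```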
-
[D] Unsupervised document similarity state of the art
Links:
fse: https://github.com/oborchers/Fast_Sentence_Embeddings
sentence-transformers: https://github.com/oborchers/sentence-transformers
semantic-search-tweets
-
You probably shouldn't use OpenAI's embeddings
It's in the repo:
You first create embeddings. What does that mean? You get an n-dimensional vector space with your tweets 'embedded' in it. Each word is an n-dimensional vector in this space, and the vectorization is supposed to preserve 'semantic distance': if two words are very close in meaning or related (say, by frequently appearing next to each other in the corpus), they should be 'close' along some of those n dimensions as well. The end result is the '.bin' file, the 'semantic model' of your corpus.
https://github.com/dbasch/semantic-search-tweets/blob/main/e...
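A rough sketch of what that one-time embedding step amounts to (hypothetical, not the repo's actual script; the model choice and file name are assumptions):

```python
# Hypothetical sketch of the one-time embedding step.
import numpy as np
from sentence_transformers import SentenceTransformer

tweets = ["first tweet text", "second tweet text"]  # your corpus

model = SentenceTransformer("all-MiniLM-L6-v2")     # assumed model choice
# One n-dimensional vector per tweet, stacked as rows of one matrix.
embeddings = model.encode(tweets, batch_size=64, normalize_embeddings=True)

np.save("tweet_embeddings.npy", embeddings)         # the saved 'semantic model'
```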
For semantic search, you run the same embedding algorithm on the query, take the resulting vector, and do a similarity search via matrix ops, which yields a set of results with similarity scores. These point back to the original sources, here the tweets, and you just print the tweet(s) you select from that result set.
https://github.com/dbasch/semantic-search-tweets/blob/main/s...
Experts can chime in here, but there are knobs such as 'batch size' and the similarity function you use to index (cosine was used here).
So the performance profile of the process should also be clear: there is a fixed, one-time cost to embed your data, and then a per-query cost to embed the query and run the similarity algorithm to find the result set.
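And a matching sketch of the query side (again hypothetical; it assumes the matrix and tweet list from the step above, with unit-normalized vectors so a dot product equals cosine similarity):

```python
# Hypothetical query-time sketch: embed the query, score via matrix ops.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = np.load("tweet_embeddings.npy")  # fixed cost, paid once

query = model.encode(["rockets landing"], normalize_embeddings=True)
scores = (embeddings @ query.T).ravel()       # one cosine score per tweet

top = np.argsort(-scores)[:5]                 # indices of the 5 best matches
for i in top:
    print(float(scores[i]), i)                # index points back to the tweet
```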
What are some alternatives?
gensim - Topic Modelling for Humans
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
smaller-labse - Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
cso-classifier - Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).
kgtk - Knowledge Graph Toolkit
RecSys_Course_AT_PoliMi - This is the official repository for the Recommender Systems course at Politecnico di Milano.
sentence-transformers - Sentence Embeddings with BERT & XLNet
wembedder - Wikidata embedding