Fast_Sentence_Embeddings vs gensim

| | Fast_Sentence_Embeddings | gensim |
|---|---|---|
| Mentions | 3 | 18 |
| Stars | 603 | 15,236 |
| Growth | - | 1.2% |
| Activity | 0.0 | 7.5 |
| Last commit | about 1 year ago | about 22 hours ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Fast_Sentence_Embeddings
-
The Illustrated Word2Vec
This is a great guide.
Also - despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.
With just a bit of tuning, they're generally as good on many sentence-embedding tasks [2], and with good libraries [3] you get something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.
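The "simple but tough-to-beat baseline" in [2] is a smooth-inverse-frequency (SIF) weighted average of word vectors. A minimal numpy sketch, with made-up toy vectors and word probabilities (the full method also removes the corpus-wide first principal component, omitted here):

```python
import numpy as np

# Toy word vectors and corpus frequencies (illustrative numbers only).
vectors = {
    "the": np.array([0.1, 0.2, 0.0]),
    "cat": np.array([0.9, 0.1, 0.3]),
    "sat": np.array([0.4, 0.8, 0.2]),
}
word_prob = {"the": 0.05, "cat": 0.001, "sat": 0.002}  # p(w) in the corpus

def sif_embedding(sentence, a=1e-3):
    """SIF-weighted average: frequent words like 'the' get down-weighted."""
    weights = [a / (a + word_prob[w]) for w in sentence]
    return np.average([vectors[w] for w in sentence], axis=0, weights=weights)

emb = sif_embedding(["the", "cat", "sat"])
```

Because the weighting is just a lookup plus a mean, this is why such models reach hundreds of thousands of sentences per second on a CPU.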
When you should use language model embeddings:
- Multilingual tasks. While some embedding models are multilingually aligned (e.g. MUSE [4]), you still need to route each sentence to the correct embedding model file (you need something like langdetect). It's also cumbersome, with one ~400 MB file per language.
Many LM embedding models, by contrast, are multilingually aligned out of the box.
- Tasks that are very context-specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the embedding space works best when it strongly separates seemingly related medical terms.
This calls for models with more embedding dimensions, and heavily favors LM models over classic embedding models.
1. sbert.net
2. https://collaborate.princeton.edu/en/publications/a-simple-b...
3. https://github.com/oborchers/Fast_Sentence_Embeddings
4. https://github.com/facebookresearch/MUSE
-
You probably shouldn't use OpenAI's embeddings
You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html
Generally, MiniLM is a good baseline. For faster models, you want this library:
https://github.com/oborchers/Fast_Sentence_Embeddings
For higher-quality ones, just take the bigger/slower models in the SentenceTransformers library.
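Whichever model you pick, downstream comparison of the vectors works the same way. A minimal numpy sketch (toy vectors, not real model output) of the cosine similarity these benchmark tasks rely on:

```python
import numpy as np

# Toy sentence embeddings; in practice these come from MiniLM, fse, etc.
a = np.array([0.2, 0.9, 0.1])
b = np.array([0.25, 0.8, 0.0])

def cosine(u, v):
    """Cosine similarity: dot product of the L2-normalized vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cosine(a, b)  # close to 1.0 for near-parallel vectors
```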
-
[D] Unsupervised document similarity state of the art
Links:
fse: https://github.com/oborchers/Fast_Sentence_Embeddings
Sentence-transformers: https://github.com/oborchers/sentence-transformers
gensim
- Aggregating news from different sources
-
Understanding How Dynamic node2vec Works on Streaming Data
This is our optimization problem. Now we hope you have an idea of what our goal is. Luckily for us, this is already implemented in a Python module called gensim. Yes, these folks are brilliant at natural language processing, and we will make use of that. 🤝
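The objective gensim optimizes here is skip-gram over node sequences. As a sketch (the toy graph and parameters are illustrative, not from the article), the walks that node2vec hands to gensim's Word2Vec can be generated like this:

```python
import random

# Toy undirected graph as adjacency lists (illustrative only).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

def random_walk(start, length, rng):
    """One unbiased random walk; node2vec adds p/q-biased transitions."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
walks = [random_walk(node, 5, rng) for node in graph for _ in range(10)]
# These walks are then passed as "sentences" to gensim's Word2Vec,
# e.g. Word2Vec(walks, vector_size=64, window=3, sg=1, min_count=0).
```

The streaming ("dynamic") variant simply regenerates walks around nodes whose edges changed and continues training on those.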
-
Topic modeling --- allow multiple topics per statement
Try LDA as implemented in gensim: https://github.com/RaRe-Technologies/gensim
-
Is it home bias or is data wrangling for machine learning in python much less intuitive and much more burdensome than in R?
Standout Python NLP libraries include spaCy and Gensim, along with the pre-trained models available on Hugging Face. These libraries have widespread use in, and support from, industry, and it shows. spaCy has best-in-class methods for pre-processing text for downstream applications. Gensim helps you manage your corpus of documents and contains many different tools for solving a common industry task: topic modeling.
- sentence transformer vector dimensionality reduction to 1
- Where to start for recommendation systems
-
GET STARTED WITH TOPIC MODELLING USING GENSIM IN NLP
Here we have to install the gensim library in a Jupyter notebook to be able to use it in our project; consider the code below:
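The snippet itself did not survive extraction; the standard notebook install command (run in a cell, prefixed with `!` so Jupyter executes it as a shell command) would be:

```shell
pip install gensim
```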
-
Show HN: I built a site that summarizes articles and PDFs using NLP
Nice work! I wonder if you're facing the same challenges that gensim had in keeping summarization generic.
For context:
> Despite its general-sounding name, the module will not satisfy the majority of use cases in production and is likely to waste people's time.
https://github.com/RaRe-Technologies/gensim/wiki/Migrating-f...
-
[Research] Text summarization using Python, that can run on Android devices?
TextRank will work without any problems. https://radimrehurek.com/gensim/
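TextRank itself is simple enough to sketch in pure Python, which matters for an Android target. This is an illustrative toy (naive sentence splitter and overlap similarity), not gensim's implementation:

```python
import math
import re
from itertools import combinations

def split_sentences(text):
    """Naive splitter; real systems use a proper sentence tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(sentence):
    return set(re.findall(r"[a-z0-9]+", sentence.lower()))

def similarity(s1, s2):
    """Word-overlap similarity, length-normalized as in the TextRank paper."""
    w1, w2 = words(s1), words(s2)
    if len(w1) < 2 or len(w2) < 2:
        return 0.0
    return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))

def textrank(text, top=1, damping=0.85, iterations=50):
    """Rank sentences with PageRank over a sentence-similarity graph."""
    sents = split_sentences(text)
    n = len(sents)
    sim = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        sim[i][j] = sim[j][i] = similarity(sents[i], sents[j])
    row_sum = [sum(row) for row in sim]
    scores = [1.0] * n
    for _ in range(iterations):
        scores = [(1 - damping) + damping * sum(
                      sim[j][i] * scores[j] / row_sum[j]
                      for j in range(n) if row_sum[j] > 0)
                  for i in range(n)]
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return [sents[i] for i in sorted(ranked[:top])]  # keep original order
```

With no model files to ship, a sketch like this runs comfortably on-device.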
-
Topic modelling with Gensim and SpaCy on startup news
For the topic modelling itself, I am going to use the Gensim library by Radim Rehurek, which is very developer-friendly and easy to use.
What are some alternatives?
smaller-labse - Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
cso-classifier - Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).
scikit-learn - scikit-learn: machine learning in Python
kgtk - Knowledge Graph Toolkit
MLflow - Open source platform for the machine learning lifecycle
RecSys_Course_AT_PoliMi - This is the official repository for the Recommender Systems course at Politecnico di Milano.
tensorflow - An Open Source Machine Learning Framework for Everyone
sentence-transformers - Sentence Embeddings with BERT & XLNet
Keras - Deep Learning for humans
wembedder - Wikidata embedding
flair - A very simple framework for state-of-the-art Natural Language Processing (NLP)