| | Research2Vec | Fast_Sentence_Embeddings |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 194 | 603 |
| Activity | - | - |
| Growth | 0.0 | 0.0 |
| Last commit | about 3 years ago | about 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Research2Vec
- [P] 20K+ Arxiv ML Papers Vectorised, Cluster Application and Projector
- 20k+ ML Research Papers Vectorised + Clustered + Visualised! [OC]
In recent years, the number of research papers has grown tremendously. New areas pop up every day, but it is not always clear which areas are emerging or which interesting new area has just surfaced. I decided to cluster together 20k+ interesting machine learning papers that surfaced recently.

Cluster Application: https://cloud.relevance.ai/dataset/research2vec/deploy/cluster/jacky-wong/M0FQOVdINEJZQTVzdWJmNHdQaXI6M1NIMVFncm9TNENZeU1vNUNHTUVWZw/60\_dWH4Bq8SHcPzXrEpF

Embeddings Projector: https://cloud.relevance.ai/dataset/research2vec/deploy/projector/jacky-wong/NXNzdjUzNEIxczVzVVpOdUpabXE6TE92enhOZ1VTN2labDlocVZNNDlMUQ/4zQk534BY7n37LD0yk4A/old-australia-east/

I created the vectors using a fine-tuned version of Sentence Transformers' roberta-base model.

What I scoped out from the problem:

- The training had to be unsupervised, because no one would know in advance what was in the dataset.
- An NLP embeddings-based approach with unsupervised clustering would be the simplest way to surface insights.

Interesting New Topics I Discovered: Federated Learning and Graph GANs were really interesting topics, along with the growth of Representation Learning.

Solution: To get some form of off-the-shelf domain adaptation, I used off-the-shelf BART for unsupervised query generation and then fine-tuned my roberta embeddings using multiple negatives ranking loss from SentenceTransformers. This seemed to work quite well, as the topics separated out nicely in my embeddings projector. I then trained my model on the titles and abstracts of the research papers so that it could better understand some of the data. Afterwards, I encoded the titles and clustered them using a simple K-Means algorithm.

Dataset: The dataset curation process was fairly straightforward. I used the arXiv API and scraped 20k papers from the query "machine learning" sometime in late 2020, before I began experimenting with the work.
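The fine-tuning step relies on multiple negatives ranking loss. A rough numpy sketch of the idea (not the SentenceTransformers implementation): each (query, document) pair in a batch is a positive, and every other document in the batch serves as a negative, so the loss is a softmax cross-entropy over the similarity matrix with the correct answers on the diagonal. The scale factor and toy vectors below are illustrative assumptions.

```python
import numpy as np

def mnr_loss(query_emb, doc_emb, scale=20.0):
    """Multiple negatives ranking loss with in-batch negatives (numpy sketch)."""
    # Cosine similarity matrix: row i = query i vs. every doc in the batch.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sims = scale * (q @ d.T)
    # Softmax cross-entropy where the positive doc for query i is doc i.
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

E = np.eye(4)
print(mnr_loss(E, E))                       # near 0: positives aligned
print(mnr_loss(E, np.roll(E, 1, axis=0)))   # large: positives misaligned
```

Minimising this loss pulls each query toward its paired document and pushes it away from the rest of the batch, which is why it works well for unsupervised query-generation setups like the one described above.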
I am looking to get feedback on what others would like to see in this application and would be curious to hear suggestions on where I could improve. From previous research, I did find this repository: https://github.com/Santosh-Gupta/Research2Vec However, as the dataset was different, I was unable to use the exact method provided. Disclaimer: I currently work for Relevance AI (the company behind the projector).
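The encode-then-cluster step could be sketched as follows. The random vectors stand in for real sentence embeddings (the post used a fine-tuned roberta-base model), and the cluster count of 8 is an arbitrary assumption since the post does not state one.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for 384-dim sentence embeddings of 200 paper titles.
embeddings = rng.normal(size=(200, 384)).astype("float32")

# L2-normalise so Euclidean K-Means approximates cosine-based clustering.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)  # one cluster id per title
```

Each `labels[i]` assigns title `i` to one of the 8 clusters, which is what the cluster application visualises.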
- 20k+ ML Research Papers Vectorised + Clustered + Visualised!
Fast_Sentence_Embeddings
- The Illustrated Word2Vec
This is a great guide.
Also - despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.
With just a bit of tuning, they're generally as good at many sentence embedding tasks [2], and with good libraries [3] you're getting something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.
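For context, the "good old" models in question build a sentence embedding by pooling static word vectors, which is the kind of computation fse accelerates. A minimal sketch of plain averaging; the tiny vocabulary and 4-dim vectors are made up for illustration:

```python
import numpy as np

# Toy word vectors; a real model would load ~300-dim vectors from e.g. gensim.
word_vectors = {
    "cats": np.array([1.0, 0.0, 0.0, 0.0]),
    "like": np.array([0.0, 1.0, 0.0, 0.0]),
    "milk": np.array([0.0, 0.0, 1.0, 0.0]),
}

def sentence_embedding(tokens, vectors, dim=4):
    """Average the vectors of in-vocabulary tokens; zeros if none match."""
    hits = [vectors[t] for t in tokens if t in vectors]
    if not hits:
        return np.zeros(dim)
    return np.mean(hits, axis=0)

emb = sentence_embedding(["cats", "like", "milk"], word_vectors)
# averages to [1/3, 1/3, 1/3, 0]
```

Because this is just a lookup plus a mean, it vectorises trivially, which is where the very high sentences/sec throughput on CPU comes from.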
When you should use language model embeddings:
- Multilingual tasks. While some embedding models are multilingually aligned (e.g. MUSE [4]), you still need to route each sentence to the correct embedding model file (you need something like langdetect). It's also cumbersome, with one ~400 MB file per language.
Many LM embedding models, by contrast, are multilingually aligned out of the box.
- Tasks that are very context specific or require fine-tuning. For instance, if you're making a RAG system for medical documents, the embedding space is best when it creates larger deviations for the difference between seemingly-related medical words.
This means models with more embedding dimensions, and heavily favors LM models over classic embedding models.
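The point about larger deviations can be illustrated with cosine similarity on toy vectors. All numbers here are made up: the idea is that a generic space may place two clinically distinct terms almost on top of each other, while a domain-tuned space separates them.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Generic space: two related-looking medical terms sit very close together...
generic_a = np.array([0.9, 0.1])
generic_b = np.array([0.88, 0.12])

# ...while a domain-tuned space pushes them apart when they differ clinically.
tuned_a = np.array([0.9, 0.1])
tuned_b = np.array([0.1, 0.9])

print(cosine(generic_a, generic_b))  # close to 1.0
print(cosine(tuned_a, tuned_b))      # much lower
```

Higher-dimensional spaces simply give the model more room to carve out these distinctions, which is the argument for fine-tuned LM embeddings in domain-specific retrieval.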
1. sbert.net
2. https://collaborate.princeton.edu/en/publications/a-simple-b...
3. https://github.com/oborchers/Fast_Sentence_Embeddings
4. https://github.com/facebookresearch/MUSE
- You probably shouldn't use OpenAI's embeddings
You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html
Generally MiniLM is a good baseline. For faster models you want this library:
https://github.com/oborchers/Fast_Sentence_Embeddings
For higher-quality ones, just take the bigger/slower models in the SentenceTransformers library.
- [D] Unsupervised document similarity state of the art
Links:
fse: https://github.com/oborchers/Fast_Sentence_Embeddings
Sentence-transformers: https://github.com/oborchers/sentence-transformers
What are some alternatives?
gensim - Topic Modelling for Humans
smaller-labse - Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
cso-classifier - Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).
kgtk - Knowledge Graph Toolkit
RecSys_Course_AT_PoliMi - This is the official repository for the Recommender Systems course at Politecnico di Milano.
sentence-transformers - Sentence Embeddings with BERT & XLNet
wembedder - Wikidata embedding
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs