contextualized-topic-models
Top2Vec
| | contextualized-topic-models | Top2Vec |
|---|---|---|
| Mentions | 7 | 13 |
| Stars | 1,163 | 2,843 |
| Growth | 1.7% | - |
| Latest commit | 3 months ago | 5 months ago |
| Activity | 5.0 | 7.0 |
| Language | Python | Python |
| License | MIT License | BSD 3-Clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
contextualized-topic-models
-
[Project]Topic modelling of tweets from the same user
In our experiments, CTM works well with tweets: https://github.com/MilaNLProc/contextualized-topic-models (I'm one of the authors)
-
Extract words from large data set of reviews by sentiment
Use CTM https://github.com/MilaNLProc/contextualized-topic-models with sentiment labels to build a distribution of words over labels
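As a toy illustration of the idea in that suggestion (before bringing in a topic model at all), here is a minimal stdlib sketch of building a distribution of words over sentiment labels; the reviews and labels are invented for the example.

```python
from collections import Counter, defaultdict

# Invented example data: (review text, sentiment label) pairs.
reviews = [
    ("great phone fast battery", "positive"),
    ("battery great screen", "positive"),
    ("slow battery bad screen", "negative"),
    ("bad slow phone", "negative"),
]

# Count word occurrences per label.
dist = defaultdict(Counter)
for text, label in reviews:
    dist[label].update(text.split())

# Normalize counts into per-label word probabilities.
probs = {
    label: {w: c / sum(counts.values()) for w, c in counts.items()}
    for label, counts in dist.items()
}
print(probs["positive"]["great"])  # 2/7 ≈ 0.2857
```

A real pipeline would replace the raw counts with the topic-word distributions CTM learns, but the label-conditioning step looks the same.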
-
Using Transformer for Topic Modeling - what are the options?
This library from MILA seems quite neat! I haven't had the chance to play with it though: https://github.com/MilaNLProc/contextualized-topic-models
-
Categorize the Data - Topic Modelling algorithm
a bit of shameless self-promotion, but we developed a topic model (https://github.com/MilaNLProc/contextualized-topic-models) that actually supports that use case!
-
(NLP) Best practices for topic modeling and generating interesting topics?
If you use CTM, you can provide the topic model two inputs: the preprocessed texts (used by the topic model to generate the topical words) and the unpreprocessed texts (used to generate the contextualized representations that are later concatenated to the document bag-of-words representation). We saw that this slightly improves performance compared to providing BERT with the already-preprocessed text. This feature is supported in the original implementation of CTM, not in OCTIS. See here: https://github.com/MilaNLProc/contextualized-topic-models#combined-topic-model
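The dual-input idea described above can be sketched in a few lines of plain Python. Everything here is a stand-in: the vocabulary, texts, and the "contextual" vector are invented, and in the real library a sentence-transformers model produces the contextual embedding from the raw text.

```python
# The raw text goes to the sentence encoder; the cleaned tokens build the BoW.
raw_text = "The movie was GREAT, I loved it!!!"   # unpreprocessed input
preprocessed = ["movie", "great", "loved"]        # after cleaning/stopword removal

vocab = ["movie", "great", "loved", "boring", "plot"]  # toy vocabulary
bow = [preprocessed.count(w) for w in vocab]           # bag-of-words vector

# Stand-in for an SBERT embedding of the *raw* text
# (4 dims instead of 768, values invented).
contextual = [0.12, -0.58, 0.33, 0.91]

# The combined model consumes the concatenation of both representations.
model_input = bow + contextual
print(len(model_input))  # 5 BoW dims + 4 contextual dims = 9
```

The point is only the shape of the pipeline: cleaning affects the BoW half, while the contextual half still sees the full, unpreprocessed sentence.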
-
Latest trends in topic modelling?
Cross-lingual Contextualized Topic Models with Zero-shot Learning, from a team at MilaNLP, uses bag-of-words representations in combination with multilingual embeddings from SBERT and works like a VAE (encode the input, then use the encoded representation to decode back to a bag of words as close to the input as possible). Using SBERT embeddings makes their model generalise to other languages, which may be useful. One major shortfall of this model, as I understand it, is that it can't deal with long documents very elegantly - only up to BERT's token limit (the workaround is to truncate and use the first words).
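The decoder step in that VAE-style description can be sketched numerically: a latent document vector is mapped through a weight matrix and softmax-ed into a probability distribution over the vocabulary (the reconstructed bag of words). The dimensions and random matrices below are made up for illustration; no training happens here.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, vocab_size = 10, 200

z = rng.normal(size=latent_dim)                   # latent document representation
beta = rng.normal(size=(latent_dim, vocab_size))  # decoder (topic-word) weights

# Softmax over the vocabulary: the "reconstructed" bag-of-words distribution.
logits = z @ beta
word_dist = np.exp(logits - logits.max())
word_dist /= word_dist.sum()

print(word_dist.shape)  # (200,) - one probability per vocabulary word
```

Training would push this distribution toward the document's actual bag of words; swapping the encoder's input for a multilingual SBERT embedding is what gives the model its cross-lingual behaviour.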
Top2Vec
-
[D] Is it better to create a different set of Doc2Vec embeddings for each group in my dataset, rather than generating embeddings for the entire dataset?
I'm using Top2Vec with Doc2Vec embeddings to find topics in a dataset of ~4000 social media posts. This dataset has three groups:
-
Tips for best Top2Vec (HDBSCAN) usage
I asked in a previous post for advice about how to find insight in unstructured text data. Almost everyone recommended BERTopic, but I wasn't able to run BERTopic on my machine locally (segmentation fault). Fortunately, I found Top2Vec, which uses HDBSCAN and UMAP to quickly find good topics in uncleaned(!) text data.
- How can I group domain specific keywords based on their word embeddings?
-
Introducing the Semantic Graph
A number of excellent topic modeling libraries exist in Python today. BERTopic and Top2Vec are two of the most popular. Both use sentence-transformers to encode data into vectors, UMAP for dimensionality reduction and HDBSCAN to cluster nodes.
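The encode → reduce → cluster pipeline described above can be sketched with common stand-ins so it runs anywhere: random vectors replace sentence-transformers embeddings, PCA replaces UMAP, and DBSCAN replaces HDBSCAN. The structure, not the specific components, is the point.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Pretend embeddings for 60 documents drawn from two "topics"
# (in the real pipeline these come from a sentence-transformers encoder).
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(30, 384)),
    rng.normal(loc=1.0, scale=0.1, size=(30, 384)),
])

# Step 2: dimensionality reduction (UMAP in BERTopic/Top2Vec; PCA here).
reduced = PCA(n_components=5).fit_transform(embeddings)

# Step 3: density-based clustering (HDBSCAN there; DBSCAN here).
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(reduced)

print(sorted(set(labels)))  # e.g. [0, 1] - two recovered topic clusters
```

Both libraries then add a final step this sketch omits: deriving topic words from each cluster (c-TF-IDF in BERTopic, nearest word vectors in Top2Vec).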
- Top2Vec: Embed topics, documents and word vectors
- How to cluster articles about software vulnerabilities?
- Data Science - Text Classification
-
Extracting topics from 250k facebook posts
Since you already have the facebook posts, you can use top2vec https://github.com/ddangelov/Top2Vec
- [D] Good algorithm for clustering big data (sentences represented as embeddings)?
-
SOTA for Topic Modeling
Here's an implementation that uses UMAP and HDBSCAN: https://github.com/ddangelov/Top2Vec but you could use a semi-supervised algorithm in the clustering step if you wanted specific topics.
What are some alternatives?
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
OCTIS - OCTIS: Comparing Topic Models is Simple! A python package to optimize and evaluate topic models (accepted at EACL2021 demo track)
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
PolyFuzz - Fuzzy string matching, grouping, and evaluation.
faiss - A library for efficient similarity search and clustering of dense vectors.
tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.
Milvus - A cloud-native vector database, storage for next generation AI applications
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
hdbscan - A high performance implementation of HDBSCAN clustering.
Sentimentanalysis - Language independent sentiment analysis
GuidedLDA - semi supervised guided topic model with custom guidedLDA