contextualized-topic-models
BERTopic
| | contextualized-topic-models | BERTopic |
|---|---|---|
| Mentions | 7 | 22 |
| Stars | 1,157 | 5,519 |
| Growth | 1.2% | - |
| Activity | 5.0 | 8.2 |
| Latest commit | 3 months ago | 13 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
contextualized-topic-models
-
[Project] Topic modelling of tweets from the same user
In our experiments, CTM works well with tweets: https://github.com/MilaNLProc/contextualized-topic-models (I'm one of the authors)
-
Extract words from large data set of reviews by sentiment
Use CTM https://github.com/MilaNLProc/contextualized-topic-models with sentiment labels to build a distribution of words over labels
-
Using Transformer for Topic Modeling - what are the options?
This library from MILA seems quite neat! I haven't had the chance to play with it though: https://github.com/MilaNLProc/contextualized-topic-models
-
Categorize the Data - Topic Modelling algorithm
a bit of shameless self-promotion, but we developed a topic model (https://github.com/MilaNLProc/contextualized-topic-models) that actually supports that use case!
-
(NLP) Best practices for topic modeling and generating interesting topics?
If you use CTM, you can provide the topic model with two inputs: the preprocessed texts (which the topic model uses to generate the topical words) and the unpreprocessed texts (used to generate the contextualized representations that are later concatenated to the document bag-of-words representation). We saw that this slightly improves performance compared to giving BERT the already-preprocessed text. This feature is supported in the original implementation of CTM, but not in OCTIS. See here: https://github.com/MilaNLProc/contextualized-topic-models#combined-topic-model
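The two-input idea can be sketched in plain Python. Everything below (the `bag_of_words` helper, the `embed()` stub, the toy vocabulary) is a hypothetical placeholder rather than CTM's actual API; the real library computes the contextual vector with a sentence encoder such as SBERT.

```python
# Sketch of CTM's combined representation: a bag-of-words vector built
# from the preprocessed text is concatenated with a contextual embedding
# computed from the raw, unpreprocessed text.

def bag_of_words(tokens, vocab):
    """Count vocabulary words in the preprocessed token list."""
    return [tokens.count(word) for word in vocab]

def embed(raw_text, dim=4):
    """Hypothetical stand-in for a contextual sentence encoder (e.g. SBERT)."""
    return [float(len(raw_text) % (i + 2)) for i in range(dim)]

vocab = ["topic", "model", "tweet"]
raw = "Topic models work surprisingly well on tweets!"
preprocessed = ["topic", "model", "tweet"]

bow = bag_of_words(preprocessed, vocab)  # from the preprocessed text
contextual = embed(raw)                  # from the unpreprocessed text
combined = bow + contextual              # CTM concatenates the two
print(combined)  # length = len(vocab) + embedding dim
```

The point of the sketch is only the data flow: the encoder never sees the preprocessed text, and the decoder's target vocabulary never includes noise stripped out during preprocessing.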
-
Latest trends in topic modelling?
Cross-lingual Contextualized Topic Models with Zero-shot Learning, from a team at MilaNLP, uses bag-of-words representations in combination with multilingual embeddings from SBERT, and works like a VAE (encode the input, then use the encoded representation to decode back to a bag of words as close to the input as possible). Using SBERT embeddings makes their model generalise to other languages, which may be useful. One major shortfall of this model, as I understand it, is that it can't deal with long documents very elegantly - only up to BERT's input limit (the workaround is to truncate and use the first words)
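The truncation workaround is easy to sketch. The `truncate()` helper below is a hypothetical illustration; real transformer limits are measured in subword tokens, not whitespace-separated words, so a production version would truncate with the model's own tokenizer.

```python
# Workaround for BERT's input-length limit: keep only the first
# max_tokens words of each document before computing embeddings.
# Approximation only: real limits count subword tokens, not words.

def truncate(document, max_tokens=512):
    words = document.split()
    return " ".join(words[:max_tokens])

long_doc = "word " * 1000
short_doc = truncate(long_doc)
print(len(short_doc.split()))  # 512
```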
BERTopic
-
how can a top2vec output be improved
Try experimenting with different hyperparameters, clustering algorithms and embedding representations. Try https://github.com/MaartenGr/BERTopic/tree/master/bertopic
-
SBERT Embeddings from Conversations
Try out this notebook which comes with the BERTopic repository.
-
Sentence transformers (BERTopic) on a Macbook Air
After some googling, I found this (but for the M1 chip Mac). I wonder if I'm stuck. Is this laptop just not up to the job of working with sentence transformers? Appreciate your advice
-
Comparing BERTopic to human raters
Most has already been said, and I am not sure how relevant this is, but since you are focusing on human raters it might be worthwhile to mention that there is a Pull Request in BERTopic that allows you to use models on top of the default pipeline to further fine-tune the topic representation. In theory, this would allow you to use even ChatGPT or any of the other OpenAI models to label the topics. From a human-rater perspective, this might be interesting to pursue.
-
text clustering with XLNET, ROBERTA, ELMO and other pretrained models
The BERTopic library allows you to plug and play any type of embedding.
- How can I group domain specific keywords based on their word embeddings?
-
Introducing the Semantic Graph
A number of excellent topic modeling libraries exist in Python today. BERTopic and Top2Vec are two of the most popular. Both use sentence-transformers to encode data into vectors, UMAP for dimensionality reduction and HDBSCAN to cluster nodes.
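The shared pipeline described above (encode, reduce dimensionality, cluster) can be sketched with toy stand-ins for each stage. Every function below is a hypothetical placeholder: in practice the stages are sentence-transformers, UMAP, and HDBSCAN.

```python
# Toy sketch of the embed -> reduce -> cluster pipeline shared by
# BERTopic and Top2Vec. Each stage is a placeholder for the real
# component (sentence-transformers, UMAP, HDBSCAN).

def encode(docs):
    # Placeholder encoder: 2-D "embeddings" from crude text statistics.
    return [(len(d), d.count("e")) for d in docs]

def reduce(vectors):
    # Placeholder for UMAP: simply keep the first component.
    return [(v[0],) for v in vectors]

def cluster(points, threshold=10):
    # Placeholder for HDBSCAN: bucket points by value range.
    return [0 if p[0] < threshold else 1 for p in points]

docs = ["short", "a much longer document about topics"]
labels = cluster(reduce(encode(docs)))
print(labels)  # one cluster label per document
```

Only the composition matters here: both libraries differ mainly in what they do *after* this stage (BERTopic extracts topic words with a class-based TF-IDF; Top2Vec works directly in the joint embedding space).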
-
Classifying unstructured text: sentences, phrases, lists of words
BERTopic is a library to consider if you want something that groups data by topic.
-
[D] How to best extract product benefits/problems from customer reviews using NLP?
I have experimented a bit with BERTopic but didn't find the results very useful. The issue is that what exactly people like or dislike about the products matters a great deal, not just the fact that they are talking about specific aspects.
- Classify texts using known categories, NLP
What are some alternatives?
OCTIS - OCTIS: Comparing Topic Models is Simple! A python package to optimize and evaluate topic models (accepted at EACL2021 demo track)
Top2Vec - Top2Vec learns jointly embedded topic, document and word vectors.
PolyFuzz - Fuzzy string matching, grouping, and evaluation.
gensim - Topic Modelling for Humans
tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
GuidedLDA - semi supervised guided topic model with custom guidedLDA
PyABSA - Sentiment analysis, text classification, text augmentation, text adversarial defense, etc.
Sentimentanalysis - Language independent sentiment analysis
scattertext - Beautiful visualizations of how language differs among document types.