scattertext
BERTopic
| | scattertext | BERTopic |
|---|---|---|
| Mentions | 3 | 22 |
| Stars | 2,197 | 5,543 |
| Growth | - | - |
| Activity | 4.7 | 8.2 |
| Latest commit | about 2 months ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scattertext
- Clustering of text - Where to start?
If what you want is to determine how similar two categories are, or to learn something about the structure or words that compose those categories, you might consider word shift graphs or Scattertext.
- [Data] Main words from the last (roughly) 200 posts on the sub
- Alternate approaches to TF-IDF?
Other suggestions: Take a look at Scattertext. Compare keywords to the problem of aspect extraction. I think an underutilized way to look at textual data when you have a single group of interest is the word-frequency-based odds ratio.
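The word-frequency-based odds ratio mentioned above can be sketched in a few lines. This is a minimal illustration, not code from Scattertext or any other library; the function name and the additive smoothing constant `k` are my own choices:

```python
import math
from collections import Counter

def log_odds_ratio(focus_tokens, background_tokens, k=0.5):
    """Smoothed log odds ratio of each word in a focus corpus versus a
    background corpus. Positive scores mark words over-represented in
    the focus group; k is an additive smoothing term to avoid log(0)."""
    f, b = Counter(focus_tokens), Counter(background_tokens)
    nf, nb = sum(f.values()), sum(b.values())
    scores = {}
    for w in set(f) | set(b):
        # Odds of drawing w from each corpus, smoothed by k.
        odds_f = (f[w] + k) / (nf - f[w] + k)
        odds_b = (b[w] + k) / (nb - b[w] + k)
        scores[w] = math.log(odds_f / odds_b)
    return scores

scores = log_odds_ratio(
    "great battery great screen poor port".split(),
    "poor battery poor port poor hinge".split(),
)
```

Here "great" gets a positive score (over-represented in the focus group) and "poor" a negative one, which is the single-group-of-interest comparison the comment describes.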
BERTopic
- How can a Top2Vec output be improved?
Try experimenting with different hyperparameters, clustering algorithms and embedding representations. Try https://github.com/MaartenGr/BERTopic/tree/master/bertopic
- SBERT Embeddings from Conversations
Try out this notebook which comes with the BERTopic repository.
- Sentence transformers (BERTopic) on a MacBook Air
After some googling, I found this (but for an M1-chip Mac). I wonder if I'm stuck: is this laptop simply not up to the job of working with sentence transformers? I'd appreciate your advice.
- Comparing BERTopic to human raters
Most has already been said, and I am not sure how relevant this is, but since you are focusing on human raters it may be worth mentioning that there is a pull request in BERTopic that lets you add models on top of the default pipeline to further fine-tune the topic representation. In theory, this would even let you use ChatGPT or any of the other OpenAI models to label the topics. From a human-annotator perspective, this might be interesting to pursue.
- Text clustering with XLNet, RoBERTa, ELMo and other pretrained models
The BERTopic library allows you to plug and play any type of embedding.
- How can I group domain specific keywords based on their word embeddings?
- Introducing the Semantic Graph
A number of excellent topic modeling libraries exist in Python today. BERTopic and Top2Vec are two of the most popular. Both use sentence-transformers to encode data into vectors, UMAP for dimensionality reduction, and HDBSCAN to cluster the results.
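The encode → reduce → cluster pipeline described above can be sketched with scikit-learn stand-ins (TF-IDF in place of sentence-transformers, TruncatedSVD in place of UMAP, DBSCAN in place of HDBSCAN). This is only a structural illustration of the pipeline shape, not how BERTopic or Top2Vec are actually implemented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import DBSCAN

docs = [
    "the cat sat on the mat",
    "cats and kittens love mats",
    "a kitten chased the cat",
    "stock prices rose sharply today",
    "the market rallied and prices climbed",
    "investors watched the stock market",
]

# 1. Encode documents as vectors (BERTopic uses sentence-transformers here).
vectors = TfidfVectorizer().fit_transform(docs)

# 2. Reduce dimensionality (BERTopic uses UMAP here).
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(vectors)

# 3. Density-based clustering (BERTopic uses HDBSCAN here); -1 marks noise.
labels = DBSCAN(eps=0.7, min_samples=2).fit_predict(reduced)
print(labels)  # one cluster label per document
```

The point of the density-based final step is that documents that fall in no dense region are left as outliers rather than forced into a topic.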
- Classifying unstructured text: sentences, phrases, lists of words
BERTopic is a library to consider if you want something that groups data by topic.
- [D] How to best extract product benefits/problems from customer reviews using NLP?
I have experimented a bit with BERTopic but didn't find the results very useful. The issue is that what matters is exactly what people like or dislike about the products, not just the fact that they are talking about specific aspects.
- Classify texts using known categories, NLP
What are some alternatives?
KeyBERT - Minimal keyword extraction with BERT
Top2Vec - Top2Vec learns jointly embedded topic, document and word vectors.
stopwords-it - Italian stopwords collection
gensim - Topic Modelling for Humans
word_cloud - A little word cloud generator in Python
OCTIS - OCTIS: Comparing Topic Models is Simple! A python package to optimize and evaluate topic models (accepted at EACL2021 demo track)
shifterator - Interpretable data visualizations for understanding how texts differ at the word level
GuidedLDA - semi supervised guided topic model with custom guidedLDA
lit - The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible and framework agnostic interface.
contextualized-topic-models - A python package to run contextualized topic modeling. CTMs combine contextualized embeddings (e.g., BERT) with topic models to get coherent topics. Published at EACL and ACL 2021.
yake - Single-document unsupervised keyword extraction
PyABSA - Sentiment analysis, text classification, text augmentation, text adversarial defense, etc.