scattertext vs lit
| | scattertext | lit |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 2,196 | 3,374 |
| Growth | - | 1.5% |
| Activity | 4.7 | 9.3 |
| Latest commit | about 1 month ago | 8 days ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scattertext
Clustering of text - Where to start?
If what you want is to determine how similar two categories are, or to learn something about the structure or words that compose those categories, you might consider word shift graphs or Scattertext.
- [Data] Main words of the last (roughly) 200 posts on the sub
Alternate approaches to TF-IDF?
Other suggestions: Take a look at Scattertext. Compare keywords to the problem of aspect extraction. I think an underutilized way to look at textual data when you have a single group of interest is the word-frequency-based odds ratio.
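The word-frequency-based odds ratio mentioned above can be sketched in a few lines of plain Python. This is an illustrative helper (the function name, smoothing constant, and tokenization are assumptions, not taken from Scattertext or any other library): it ranks words by a smoothed log odds ratio of appearing in a target group of texts versus a background group.

```python
from collections import Counter
import math

def log_odds_keywords(target_texts, background_texts, smoothing=0.5):
    """Rank words by smoothed log odds ratio: high scores mark words
    characteristic of the target group relative to the background."""
    target = Counter(w for t in target_texts for w in t.lower().split())
    background = Counter(w for t in background_texts for w in t.lower().split())
    n_t, n_b = sum(target.values()), sum(background.values())
    scores = {}
    for word in set(target) | set(background):
        # additive smoothing keeps words absent from one group finite
        p_t = (target[word] + smoothing) / (n_t + smoothing)
        p_b = (background[word] + smoothing) / (n_b + smoothing)
        scores[word] = math.log((p_t / (1 - p_t)) / (p_b / (1 - p_b)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

reviews_about_battery = ["great battery life", "battery lasts long"]
reviews_about_screen = ["screen is dim", "poor screen quality"]
ranked = log_odds_keywords(reviews_about_battery, reviews_about_screen)
```

With the toy corpora above, "battery" (the only word occurring twice in the target group and never in the background) comes out on top. Naive whitespace tokenization is used for brevity; real data would warrant proper tokenization and stopword handling.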
lit
How to create a broad/representative sample from millions of records?
I'd also suggest looking at your data sample, and how your model handles it, with some kind of exploratory analysis tool. Google's Language Interpretability Tool might work for your scenario. This can give you a lot of ideas about how to prepare the data better.
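One common way to draw a broad, representative sample from a large record set is stratified sampling: group records by some attribute and sample each group in proportion to its size. A minimal stdlib-only sketch (the helper name and proportional-allocation policy are assumptions for illustration, not from any particular library):

```python
import random
from collections import defaultdict

def stratified_sample(records, key, n, seed=0):
    """Draw roughly n records while preserving the proportion of each
    stratum (e.g. label, source, or length bucket) given by key(record)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in records:
        strata[key(r)].append(r)
    total = len(records)
    sample = []
    for group in strata.values():
        # allocate slots proportionally, keeping at least one per stratum
        k = max(1, round(n * len(group) / total))
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample

records = [{"label": "a"}] * 90 + [{"label": "b"}] * 10
sample = stratified_sample(records, key=lambda r: r["label"], n=20)
```

Here a 90/10 class split yields an 18/2 split in the sample. Forcing at least one record per stratum slightly over-represents rare groups, which is often desirable when the goal is exploratory coverage rather than an unbiased estimate.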
AWS - NLP newsletter November 2021
Visualize and understand NLP models with the Language Interpretability Tool. The Language Interpretability Tool (LIT) is for researchers and practitioners looking to understand NLP model behavior through a visual, interactive, and extensible tool. Use LIT to ask and answer questions like: What kind of examples does my model perform poorly on? Why did my model make this prediction, and can the prediction be attributed to adversarial behavior or to undesirable priors in the training set? Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender? LIT contains many built-in capabilities but is also customizable, with the ability to add custom interpretability techniques, metric calculations, counterfactual generators, visualizations, and more.
Are there any tools for seeing / understanding what a fine-tuned BERT model is looking at for a downstream task?
Use LIT: https://github.com/PAIR-code/lit
What are some alternatives?
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
bertviz - BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
KeyBERT - Minimal keyword extraction with BERT
amazon-sagemaker-examples - Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠Amazon SageMaker.
word_cloud - A little word cloud generator in Python
stopwords-it - Italian stopwords collection
shifterator - Interpretable data visualizations for understanding how texts differ at the word level
yake - Single-document unsupervised keyword extraction
faiss - A library for efficient similarity search and clustering of dense vectors.
dutch-word-embeddings - Dutch word embeddings, trained on a large collection of Dutch social media messages and news/blog/forum posts.
texthero - Text preprocessing, representation and visualization from zero to hero.
gensim - Topic Modelling for Humans