contextualized-topic-models
transformers
| | contextualized-topic-models | transformers |
|---|---|---|
| Mentions | 7 | 171 |
| Stars | 1,151 | 122,577 |
| Stars growth | 1.0% | 2.7% |
| Activity | 5.0 | 10.0 |
| Latest commit | 2 months ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
contextualized-topic-models
-
[Project] Topic modelling of tweets from the same user
In our experiments, CTM works well with tweets: https://github.com/MilaNLProc/contextualized-topic-models (I'm one of the authors)
-
Using Transformer for Topic Modeling - what are the options?
This library from MILA seems quite neat! I haven't had the chance to play with it yet, though: https://github.com/MilaNLProc/contextualized-topic-models
-
(NLP) Best practices for topic modeling and generating interesting topics?
If you use CTM, you can provide the topic model with two inputs: the preprocessed texts (used by the topic model to generate the topical words) and the unpreprocessed texts (used to generate the contextualized representations that are later concatenated to the document's bag-of-words representation). We saw that this slightly improves performance compared to feeding BERT the already-preprocessed text. This feature is supported in the original implementation of CTM, not in OCTIS. See here: https://github.com/MilaNLProc/contextualized-topic-models#combined-topic-model
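For reference, a minimal sketch of this two-input setup, adapted from the CTM README (the toy corpus, encoder name, and hyperparameters here are illustrative, not recommendations):

```python
from contextualized_topic_models.models.ctm import CombinedTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

# two parallel views of the same corpus
unpreprocessed_docs = ["I really love pizza with extra cheese!!",
                       "The stock market fell sharply again today."]
preprocessed_docs = ["love pizza extra cheese",
                     "stock market fell sharply today"]

# the raw texts feed the SBERT encoder; the cleaned texts build the bag of words
tp = TopicModelDataPreparation("all-mpnet-base-v2")
training_dataset = tp.fit(text_for_contextual=unpreprocessed_docs,
                          text_for_bow=preprocessed_docs)

ctm = CombinedTM(bow_size=len(tp.vocab), contextual_size=768, n_components=2)
ctm.fit(training_dataset)
print(ctm.get_topics())
```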
My team and I recently released a Python library called OCTIS (https://github.com/mind-Lab/octis) that lets you automatically optimize the hyperparameters of a topic model according to a given evaluation metric (not log-likelihood). In your case, you are probably interested in topic coherence, so you can get good-quality topics with little effort spent on hyperparameter choices. We also included some state-of-the-art topic models, e.g. contextualized topic models (https://github.com/MilaNLProc/contextualized-topic-models).
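A rough sketch of that optimization loop, adapted from the OCTIS README (the dataset, model, and search space are illustrative; check the README for the current API):

```python
from octis.dataset.dataset import Dataset
from octis.models.LDA import LDA
from octis.evaluation_metrics.coherence_metrics import Coherence
from octis.optimization.optimizer import Optimizer
from skopt.space.space import Real

dataset = Dataset()
dataset.fetch_dataset("20NewsGroup")          # one of the bundled benchmark corpora

model = LDA(num_topics=25)
npmi = Coherence(texts=dataset.get_corpus())  # optimize coherence, not log-likelihood

search_space = {"alpha": Real(low=0.001, high=5.0),
                "eta": Real(low=0.001, high=5.0)}

optimizer = Optimizer()
result = optimizer.optimize(model, dataset, npmi, search_space,
                            save_path="results", number_of_call=30)
```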
-
Latest trends in topic modelling?
Cross-lingual Contextualized Topic Models with Zero-shot Learning, from a team at MilaNLP, uses bag-of-words representations in combination with multilingual embeddings from SBERT and works like a VAE (encode the input, then use the encoded representation to decode back to a bag of words as close to the input as possible). Using SBERT embeddings makes the model generalise to other languages, which may be useful. One major shortfall, as I understand it, is that it can't deal with long documents very elegantly: only up to BERT's token limit (the workaround is to truncate and use the first words).
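A rough sketch of that zero-shot workflow, adapted from the CTM README (corpus, encoder, and parameters are illustrative): train on one language with a multilingual encoder, then infer topics for documents in a language the model never saw.

```python
from contextualized_topic_models.models.ctm import ZeroShotTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

english_docs = ["the cat sat on the mat", "stocks fell sharply today"]
english_bow = ["cat sat mat", "stocks fell sharply today"]

# a multilingual SBERT encoder is what makes the zero-shot transfer possible
tp = TopicModelDataPreparation("paraphrase-multilingual-mpnet-base-v2")
training_dataset = tp.fit(text_for_contextual=english_docs, text_for_bow=english_bow)

ctm = ZeroShotTM(bow_size=len(tp.vocab), contextual_size=768, n_components=2)
ctm.fit(training_dataset)

# documents in a language never seen at training time
italian_docs = ["il gatto si è seduto sul tappeto"]
testing_dataset = tp.transform(italian_docs)
print(ctm.get_doc_topic_distribution(testing_dataset, n_samples=20))
```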
transformers
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
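For example, a rough sketch of instantiating just the architecture with random weights (the config tweak is illustrative, shown only to keep the random-init model laptop-sized):

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
config.num_hidden_layers = 2                      # shrink for quick tinkering
model = AutoModelForCausalLM.from_config(config)  # architecture only, random weights
print(model)
```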
-
[D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method handle self- and cross-attention, masking, positional and relative encodings, interpolation, etc. While that lets a user call the same function/method for any model, it has led to severe parameter bloat. Just compare the original FAIR implementation of Llama with the HF implementation to get an idea.
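To illustrate the pattern (a hypothetical signature, not actual transformers code), each supported variant tends to add yet another optional argument:

```python
from typing import Optional, Tuple
import torch

def attention_forward(
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,         # padding / causal masking
    encoder_hidden_states: Optional[torch.Tensor] = None,  # if set, keys/values come from the encoder (cross-attention)
    position_ids: Optional[torch.Tensor] = None,           # absolute / rotary positional encodings
    relative_position_bias: Optional[torch.Tensor] = None, # T5-style relative encodings
    past_key_value: Optional[Tuple[torch.Tensor, ...]] = None,  # KV cache for incremental decoding
    output_attentions: bool = False,
) -> torch.Tensor:
    """Every model family adds another optional knob; callers of any one
    model still have to read past all of them."""
    raise NotImplementedError  # signature shown for illustration only
```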
-
Self train a super tiny model recommendations
You can train it with the code provided in the transformers repo: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py
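For a tiny model trained from scratch, one possible invocation looks roughly like this (the --config_overrides values and dataset are illustrative; check the script's --help for the full flag list):

```bash
python run_clm.py \
  --model_type gpt2 \
  --tokenizer_name gpt2 \
  --config_overrides "n_embd=128,n_layer=4,n_head=4" \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --do_train --do_eval \
  --output_dir ./tiny-gpt2
```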
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
transformers uses accelerate if you call it with device_map='auto'
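A minimal sketch (the model name is just an example; accelerate must be installed):

```python
from transformers import AutoModelForCausalLM

# device_map="auto" hands placement over to accelerate, which inspects the
# available GPUs and CPU RAM and shards the model across them
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")
print(model.hf_device_map)  # which device each submodule ended up on
```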
-
Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
Too much money being thrown around on BS in the LLM space, hardly any of it is going to places where it matters.
For example, the researchers working hard on better text sampling techniques, better constraint techniques (e.g. https://arxiv.org/abs/2306.03081), or actual negative prompting/CFG in LLMs (e.g. https://github.com/huggingface/transformers/issues/24536) are doing far, FAR more to advance the state of AI than the dozens of VC-backed LLM "prompt engineering" companies operating today.
HN and the NLP community have some serious blind spots when it comes to exploiting their own technology. At least someone at Andreessen Horowitz got a clue and gave some funding to Oobabooga; still waiting for Automatic1111 to get any funding.
-
23 issues to grow yourself as an exceptional open-source Python expert
Repo : https://github.com/huggingface/transformers
-
Whisper prompt tuning
From what I know, Whisper already supports prompting (https://github.com/huggingface/transformers/pull/22496). Can I somehow freeze the whole model and tune exclusively the prompt or would I need to write an implementation from scratch?
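For reference, the prompting added in that PR conditions decoding on prior text without updating any weights; a rough sketch (model name and prompt are illustrative, and input_features would come from the processor's feature extractor). Gradient-based prompt tuning, where only a soft prompt is trained, would likely need something like PEFT or a custom implementation.

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# biases decoding toward the prompt's vocabulary; no weights are updated
prompt_ids = processor.get_prompt_ids("glossary: Kubernetes, PyTorch",
                                      return_tensors="pt")
# input_features would come from:
#   processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# predicted_ids = model.generate(input_features, prompt_ids=prompt_ids)
```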
-
A look at Apple's new Transformer-powered predictive text model
https://github.com/huggingface/transformers/blob/0a55d9f7376...
To summarize how they work: you keep some number of previously generated tokens, and once you get logits that you want to sample a new token from, you find the logits for existing tokens and multiply them by a penalty, thus lowering the probability of the corresponding tokens.
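A minimal sketch of that idea (for reference, transformers' RepetitionPenaltyLogitsProcessor divides positive logits and multiplies negative ones, so the token's probability drops in both cases):

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             penalty: float = 1.2) -> torch.Tensor:
    """Penalize tokens that already appeared in the generated sequence."""
    scores = logits.clone()
    prev = torch.unique(generated_ids)  # tokens seen so far
    selected = scores[prev]
    # dividing a positive logit or multiplying a negative one both
    # shrink that token's probability after the softmax
    scores[prev] = torch.where(selected > 0, selected / penalty, selected * penalty)
    return scores
```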
-
Can LLMs learn from a single example?
Very cool. This came up in a huggingface transformers issue a while ago and we also determined memorization to be the likely reason. It's nice to see someone else reach the same conclusion.
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
llama - Inference code for Llama models
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
huggingface_hub - The official Python client for the Huggingface Hub.
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.
Swin-Transformer-Tensorflow - Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
faiss - A library for efficient similarity search and clustering of dense vectors.
KoboldAI-Client