FARM vs bertviz

| | FARM | bertviz |
|---|---|---|
| Mentions | 3 | 15 |
| Stars | 1,723 | 6,398 |
| Growth | 0.3% | - |
| Activity | 0.0 | 3.9 |
| Latest commit | 4 months ago | 8 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
FARM
-
Can someone please explain to me the differences between train, dev and test datasets?
I'm also trying to solve this task in a Python notebook (.ipynb), using the FARM framework (https://farm.deepset.ai/) and the BERT model from Hugging Face (https://huggingface.co/bert-base-uncased).
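(In the usual convention: the train split fits the model's weights, the dev/validation split guides hyperparameter choices and early stopping, and the test split is held out for one final, unbiased evaluation. Below is a minimal sketch of such a split, assuming a hypothetical reviews.csv and scikit-learn; FARM's data processors similarly read separate train/dev/test files from a data_dir.)

```python
# Illustrative sketch only: a conventional 80/10/10 train/dev/test split.
# The file name "reviews.csv" and its columns are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("reviews.csv")

# Carve out 20% first, then split that portion half-and-half.
train_df, rest = train_test_split(df, test_size=0.2, random_state=42)
dev_df, test_df = train_test_split(rest, test_size=0.5, random_state=42)

# train: fit the model's weights
# dev (validation): tune hyperparameters, decide when to stop training
# test: touch only once, at the end, for an unbiased performance estimate
print(len(train_df), len(dev_df), len(test_df))
```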
-
Fine-Tuning Transformers for NLP
For anyone looking to fine-tune transformers with less work, there is the FARM project (https://github.com/deepset-ai/FARM), which has more or less ready-to-go configurations (classification, question answering, NER, and a couple of others). It's really almost "plug in a CSV and run" (see the sketch below).
By the way, a pet peeve is sentiment detection. It's a useful method, but please be aware that it does not measure "sentiment" in the way one would normally think, and that what it measures varies strongly across methods (https://www.tandfonline.com/doi/abs/10.1080/19312458.2020.18...).
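To make "plug in a CSV and run" concrete, here is a rough sketch following FARM's documented text-classification recipe. The data directory, label names, and hyperparameters are placeholders, and argument names can differ between FARM releases, so treat this as an outline rather than copy-paste code.

```python
# Sketch based on FARM's doc_classification example; placeholder paths/labels.
import torch
from farm.modeling.tokenization import Tokenizer
from farm.data_handler.processor import TextClassificationProcessor
from farm.data_handler.data_silo import DataSilo
from farm.modeling.language_model import LanguageModel
from farm.modeling.prediction_head import TextClassificationHead
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.optimization import initialize_optimizer
from farm.train import Trainer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = Tokenizer.load(pretrained_model_name_or_path="bert-base-uncased")

# Reads train.tsv / dev.tsv / test.tsv from data_dir (placeholder path/labels).
processor = TextClassificationProcessor(
    tokenizer=tokenizer,
    max_seq_len=128,
    data_dir="data/my_task",
    label_list=["negative", "positive"],
    label_column_name="label",
    metric="acc",
)
data_silo = DataSilo(processor=processor, batch_size=32)

language_model = LanguageModel.load("bert-base-uncased")
prediction_head = TextClassificationHead(num_labels=2)
model = AdaptiveModel(
    language_model=language_model,
    prediction_heads=[prediction_head],
    embeds_dropout_prob=0.1,
    lm_output_types=["per_sequence"],  # one pooled vector per document
    device=device,
)

model, optimizer, lr_schedule = initialize_optimizer(
    model=model,
    learning_rate=2e-5,
    n_batches=len(data_silo.loaders["train"]),
    n_epochs=1,
    device=device,
)
trainer = Trainer(
    model=model,
    optimizer=optimizer,
    data_silo=data_silo,
    epochs=1,
    n_gpu=1 if torch.cuda.is_available() else 0,
    lr_schedule=lr_schedule,
    device=device,
)
trainer.train()
```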
-
Has anyone deployed a BERT like model across multiple tasks (Multi-class, NER, outlier detection)? Seeking advice.
You can use https://github.com/deepset-ai/FARM or https://github.com/nyu-mll/jiant for multitask learning. The latter is more general.
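For the FARM route, multitask learning works by attaching several prediction heads to one shared encoder. A minimal sketch, assuming FARM's documented head classes (TextClassificationHead, TokenClassificationHead); the head choices and label counts here are illustrative and may need adjusting for your FARM version.

```python
# Sketch only: one shared BERT encoder, two task-specific heads.
import torch
from farm.modeling.language_model import LanguageModel
from farm.modeling.prediction_head import (
    TextClassificationHead,
    TokenClassificationHead,
)
from farm.modeling.adaptive_model import AdaptiveModel

language_model = LanguageModel.load("bert-base-uncased")
topic_head = TextClassificationHead(num_labels=3)  # e.g. a 3-way classifier
ner_head = TokenClassificationHead(num_labels=9)   # e.g. CoNLL-style NER tags

# One lm_output_type per head: "per_sequence" feeds the pooled document vector
# to the classifier; "per_token" feeds every token vector to the NER head.
model = AdaptiveModel(
    language_model=language_model,
    prediction_heads=[topic_head, ner_head],
    embeds_dropout_prob=0.1,
    lm_output_types=["per_sequence", "per_token"],
    device=torch.device("cpu"),
)
```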
bertviz
-
StreamingLLM: tiny tweak to KV LRU improves long conversations
This seems to work only because large GPTs have redundant, under-complex attention heads. See this issue in BertViz about attention in Llama: https://github.com/jessevig/bertviz/issues/128
-
[D] Is there a tool that indicates which parts of the input prompt impact the LLM's output the most?
https://github.com/jessevig/bertviz could be helpful... I was playing around with it a while ago to see how attention weights are distributed across prompts.
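For reference, the minimal usage is close to bertviz's README (run it inside a Jupyter notebook; the example sentence is a placeholder):

```python
# Per-head attention visualization with bertviz (README-style usage).
from transformers import AutoTokenizer, AutoModel
from bertviz import head_view

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer.encode("The cat sat because it was tired", return_tensors="pt")
attention = model(inputs)[-1]  # tuple: one attention tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs[0])

head_view(attention, tokens)  # interactive view of each head's weights
```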
-
Show HN: Fully client-side GPT2 prediction visualizer
It would be interesting to have attention visualized as well, similar to how it's done in BertViz:
https://github.com/jessevig/bertviz
-
How to visualise LLMs?
Link for the lazy: https://github.com/jessevig/bertviz
-
Ask HN: Can someone ELI5 Transformers and the “Attention Is All You Need” paper
The Illustrated Transformer (https://jalammar.github.io/illustrated-transformer/) and Visualizing Attention (https://towardsdatascience.com/deconstructing-bert-part-2-vi...) are both really good resources. For a more ELI5 approach, this non-technical explainer (https://www.parand.com/a-non-technical-explanation-of-chatgp...) covers it at a high level.
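The single equation at the heart of those explainers is scaled dot-product attention from the paper itself: each token's query is scored against every key, the softmax turns the scores into weights, and those weights mix the value vectors ($d_k$ is the key dimension; dividing by $\sqrt{d_k}$ keeps the dot products from saturating the softmax):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$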
-
Perplexity.ai Prompt Leakage
-
[Discussion] is attention an explanation?
You can get some information this way, but not everything you would want to know. You can try it yourself with BertViz.
-
using bert for relation extraction
2) BERT learns a lot in its embeddings: the BERTology paper (https://arxiv.org/abs/2002.12327) provides a great in-depth look at some of the broader linguistic traits that BERT learns. Different layers often learn different patterns, so the embeddings aren't directly interpretable, but you can use something like bertviz (https://github.com/jessevig/bertviz) to explore attention weights across layers for predetermined examples.
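A quick way to poke at those layer-wise representations yourself, using the transformers library (the example sentence is made up):

```python
# Inspect BERT's per-layer hidden states; different layers capture different patterns.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("Marie Curie worked in Paris.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: 13 tensors (input embeddings + 12 layers), each [batch, seq_len, 768]
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(layer.shape)}")
```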
-
Maintaining context vs. overloading your Replika
I messed up a few things and mixed up a couple of others; anyway, this site has a lot of decent information about it: https://towardsdatascience.com/deconstructing-bert-part-2-visualizing-the-inner-workings-of-attention-60a16d86b5c1
-
[D] code to visualize attention heads
Big fan of BertViz for this, widely used in research for this very purpose: https://github.com/jessevig/bertviz
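Alongside head_view, bertviz's model_view renders a grid of every layer and head at once, which is handy for spotting where the interesting heads live (README-style usage; run in a Jupyter notebook):

```python
# All layers x heads at a glance with bertviz's model_view.
from transformers import AutoTokenizer, AutoModel
from bertviz import model_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer.encode("Time flies like an arrow", return_tensors="pt")
attention = model(inputs)[-1]  # one attention tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
model_view(attention, tokens)
```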
What are some alternatives?
Giveme5W1H - Extraction of the journalistic five W and one H questions (5W1H) from news articles: who did what, when, where, why, and how?
ecco - Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTa, T5, and T0).
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
BERT-pytorch - Google AI 2018 BERT pytorch implementation
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
happy-transformer - Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models.
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
BERT-NER - Pytorch-Named-Entity-Recognition-with-BERT
DeBERTa - The implementation of DeBERTa
tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
tf-transformers - State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, Computer Vision, Audio).