| | FARM | happy-transformer |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 1,723 | 500 |
| Growth | 0.3% | - |
| Activity | 0.0 | 9.0 |
| Last commit | 4 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FARM
-
Can someone please explain to me the differences between train, dev, and test datasets?
I'm also trying to solve this task in a Python notebook (.ipynb) using the FARM framework (https://farm.deepset.ai/) and the BERT model from Hugging Face (https://huggingface.co/bert-base-uncased).
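In short: the train set is what the model's weights are fit on, the dev (validation) set is held out to tune hyperparameters and decide when to stop training, and the test set is touched only once at the end to estimate real-world performance. A minimal sketch of producing such a split with scikit-learn; the file names, column names, and ratios below are illustrative assumptions, not part of the original question:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical CSV with "text" and "label" columns; adjust to your data.
df = pd.read_csv("data.csv")

# First carve off 20% as a temporary holdout, stratified by label
# so class proportions stay comparable across splits.
train_df, holdout_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)

# Split the holdout in half: 10% dev (for hyperparameter tuning /
# early stopping) and 10% test (used once, for the final estimate).
dev_df, test_df = train_test_split(
    holdout_df, test_size=0.5, stratify=holdout_df["label"], random_state=42
)

# FARM's example processors read tab-separated train/dev/test files
# (an assumption based on its doc-classification examples).
train_df.to_csv("train.tsv", sep="\t", index=False)
dev_df.to_csv("dev.tsv", sep="\t", index=False)
test_df.to_csv("test.tsv", sep="\t", index=False)
```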
-
Fine-Tuning Transformers for NLP
For anyone looking to fine-tune transformers with less work, there is the FARM project (https://github.com/deepset-ai/FARM), which has some more or less ready-to-go configurations (classification, question answering, NER, and a couple of others). It's really almost "plug in a CSV and run" (see the sketch after this post).
By the way, a pet peeve is sentiment detection. It's a useful method, but please be aware that it does not measure "sentiment" in the way one would normally think, and that what it measures varies strongly across methods (https://www.tandfonline.com/doi/abs/10.1080/19312458.2020.18...).
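To make the "plug in a CSV and run" claim concrete, here is a hedged sketch of FARM's text-classification workflow, loosely following the doc_classification example in the FARM repository. Exact class names and argument lists have shifted between FARM releases (e.g. how TextClassificationHead is sized), so treat the signatures below as assumptions to check against the version you install:

```python
import torch
from farm.data_handler.data_silo import DataSilo
from farm.data_handler.processor import TextClassificationProcessor
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.optimization import initialize_optimizer
from farm.modeling.prediction_head import TextClassificationHead
from farm.modeling.tokenization import Tokenizer
from farm.train import Trainer

device = "cuda" if torch.cuda.is_available() else "cpu"

# The processor reads train/dev/test files from data_dir and converts
# rows into model-ready PyTorch datasets.
tokenizer = Tokenizer.load(pretrained_model_name_or_path="bert-base-uncased")
processor = TextClassificationProcessor(
    tokenizer=tokenizer,
    max_seq_len=128,
    data_dir="data",                      # assumed layout: data/train.tsv etc.
    label_list=["negative", "positive"],  # illustrative two-class setup
    label_column_name="label",
    metric="acc",
)
data_silo = DataSilo(processor=processor, batch_size=32)

# Language-model body plus a task-specific head, glued by AdaptiveModel.
language_model = LanguageModel.load("bert-base-uncased")
prediction_head = TextClassificationHead(num_labels=2)
model = AdaptiveModel(
    language_model=language_model,
    prediction_heads=[prediction_head],
    embeds_dropout_prob=0.1,
    lm_output_types=["per_sequence"],  # sentence-level task
    device=device,
)

model, optimizer, lr_schedule = initialize_optimizer(
    model=model,
    learning_rate=2e-5,
    device=device,
    n_batches=len(data_silo.loaders["train"]),
    n_epochs=1,
)
trainer = Trainer(
    model=model,
    optimizer=optimizer,
    data_silo=data_silo,
    epochs=1,
    n_gpu=1,
    lr_schedule=lr_schedule,
    device=device,
)
trainer.train()
```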
-
Has anyone deployed a BERT like model across multiple tasks (Multi-class, NER, outlier detection)? Seeking advice.
You can use https://github.com/deepset-ai/FARM or https://github.com/nyu-mll/jiant for multitask learning. The second is more general.
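FARM's angle on multitask learning is that a single AdaptiveModel can carry several prediction heads on one shared language model, one head per task. A rough sketch of that idea, reusing names from the classification example above; the head classes and lm_output_types values are assumptions based on FARM's design, not verified signatures:

```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.prediction_head import (
    TextClassificationHead,
    TokenClassificationHead,
)

# One shared encoder...
language_model = LanguageModel.load("bert-base-uncased")

# ...with one head per task: sentence-level classification plus
# token-level NER. Each head is trained on its own task's loss.
clf_head = TextClassificationHead(num_labels=3)
ner_head = TokenClassificationHead(num_labels=9)  # e.g. CoNLL-style BIO tags

model = AdaptiveModel(
    language_model=language_model,
    prediction_heads=[clf_head, ner_head],
    embeds_dropout_prob=0.1,
    # One output type per head: pooled output for classification,
    # per-token output for NER.
    lm_output_types=["per_sequence", "per_token"],
    device="cpu",
)
```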
happy-transformer
-
GPT-Neo-125M-AID (Mia) oversight + retrained
This appears to be an actual issue with Happy Transformer, judging by a GitHub issue I found describing the same problem.
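For context, loading a GPT-Neo checkpoint through Happy Transformer is only a few lines. A minimal sketch following the happy-transformer README; the prompt and the GENSettings values are illustrative assumptions to verify against your installed version:

```python
from happytransformer import HappyGeneration, GENSettings

# Load GPT-Neo 125M through Happy Transformer's generation wrapper.
happy_gen = HappyGeneration("GPT-NEO", "EleutherAI/gpt-neo-125M")

# Sampling settings; all values here are illustrative.
args = GENSettings(do_sample=True, top_k=50, temperature=0.7, max_length=50)

result = happy_gen.generate_text("You enter the dungeon and", args=args)
print(result.text)
```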
What are some alternatives?
Giveme5W1H - Extraction of the journalistic five W and one H questions (5W1H) from news articles: who did what, when, where, why, and how?
transformers-interpret - Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
bertviz - BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
FinBERT-QA - Financial Domain Question Answering with pre-trained BERT Language Model
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
small-text - Active Learning for Text Classification in Python
BERT-NER - Pytorch-Named-Entity-Recognition-with-BERT
gector - Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
quickai - QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art Machine Learning models.