electra vs iSarcasmEval
| | electra | iSarcasmEval |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 2,296 | 19 |
| Growth | 0.7% | - |
| Activity | 0.0 | 10.0 |
| Latest commit | about 1 month ago | over 1 year ago |
| Language | Python | - |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
electra
-
Fine-tuned model consistently producing Precision and Recall scores of 0 from start of training, any suggestions on how to improve?
If this is your own implementation of ELECTRA, hopefully you have earlier versions that you've demonstrated working: revert to one of those, then re-apply your changes one by one until you find the one that breaks it. If it's open-source code you are using, such as this one, find a working example, run it yourself, and keep it in a working (high-performance) state while you modify it piece by piece until it works on your problem.
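One thing worth checking before bisecting code: precision and recall stuck at 0 from the very start of training usually means the model has collapsed to predicting a single class. A minimal sanity check with scikit-learn (a sketch; the `preds` and `labels` arrays below are placeholders, not data from the thread):

```python
import numpy as np
from collections import Counter
from sklearn.metrics import precision_recall_fscore_support

# Placeholder arrays standing in for the model's predictions and gold labels.
preds = np.array([0, 0, 0, 0, 0])
labels = np.array([0, 1, 0, 1, 1])

# A single key here means the model never predicts the positive class,
# which forces its precision and recall on that class to 0.
print(Counter(preds.tolist()))

# zero_division=0 silences the warning sklearn raises for never-predicted classes.
p, r, f1, support = precision_recall_fscore_support(labels, preds, zero_division=0)
print(f"precision={p} recall={r} support={support}")
```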
-
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators Web Demo
github: https://github.com/google-research/electra
-
Help with aligned word embeddings
If you have at least a decent gaming GPU, or don't mind working in Colab, you could get a relevant dataset and use ELECTRA: https://github.com/google-research/electra
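If you go this route, the Hugging Face `transformers` port of ELECTRA is an easy way to get contextual embeddings. A minimal sketch (the checkpoint name is an assumption; any ELECTRA discriminator checkpoint should work the same way):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = AutoModel.from_pretrained("google/electra-small-discriminator")

inputs = tokenizer("aligned word embeddings", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, sequence_length, hidden_size):
# one contextual vector per subword token.
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)
```

Note that ELECTRA, like BERT, produces one vector per subword token, so aligning embeddings to whole words means pooling the subword vectors belonging to each word.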
iSarcasmEval
-
Fine-tuned model consistently producing Precision and Recall scores of 0 from start of training, any suggestions on how to improve?
The labels are extracted into their own DataFrame, which is fed alongside the text data as tensors to the model. The number of observations per class is fairly low, since it is a small but thorough dataset defined and labelled specifically for these tasks, so I can't really change that. However, I have been wondering whether I should first train the model on general sarcasm detection using a Kaggle dataset or something similar, then fine-tune again for this subtask (B in the link).
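For what it's worth, that two-stage idea can be sketched with Hugging Face `transformers`; everything here (the checkpoint name, the `finetune` helper, and the commented-out dataset variables) is an illustrative assumption rather than the poster's code:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINT = "google/electra-small-discriminator"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

def finetune(train_dataset, output_dir, epochs):
    # train_dataset: any torch Dataset yielding input_ids/attention_mask/labels.
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=epochs,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()

# Stage 1: generic sarcasm detection on a large corpus (e.g. a Kaggle dataset).
# finetune(generic_sarcasm_dataset, "stage1", epochs=2)
# Stage 2: continue from the same weights on the small subtask-B data.
# finetune(isarcasm_subtask_b_dataset, "stage2", epochs=4)
```

Since `model` is reused across both calls, stage 2 starts from the stage-1 weights, which is the point of training on the generic sarcasm dataset first.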
What are some alternatives?
clip-as-service - 🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
pragmatapro - PragmataPro font is designed to help pros to work better
stanford-tensorflow-tutorials - This repository contains code examples for Stanford's course: TensorFlow for Deep Learning Research.
BERTweet - BERTweet: A pre-trained language model for English Tweets (EMNLP-2020)
LASER - Language-Agnostic SEntence Representations
Arabic-Handwritten-Images-Recognition - A deep learning model to classify the Arabic letters and digits easily.
MUSE - A library for Multilingual Unsupervised or Supervised word Embeddings
anees-dataset - The dataset used to fine-tune the GPT-2 model used in Anees for the multi-turn dialogue generation.
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.