| | simpleT5 | reformer-pytorch |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 381 | 2,052 |
| Growth | - | - |
| Activity | 2.5 | 1.8 |
| Latest commit | 12 months ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleT5
Transformers: How to compare performance to base model?
Currently I just took ~42,000 samples and trained a translation task directly on CodeT5 with https://github.com/Shivanandroy/simpleT5. Validation loss and at least the qualitative results are not too bad. I'm now going to try to compare it to the base CodeT5 model with the *.loss function as suggested above.
[P] SimpleT5 : Train T5 models in just 3 lines of code
🌟 GitHub: https://github.com/Shivanandroy/simpleT5
🌟 Medium: https://snrspeaks.medium.com/simplet5-train-t5-models-in-just-3-lines-of-code-by-shivanand-roy-2021-354df5ae46ba
🌟 Colab Notebook: https://colab.research.google.com/drive/1JZ8v9L0w0Ai3WbibTeuvYlytn0uHMP6O?usp=sharing
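As a rough sketch of the "3 lines of code" claim (based on the API shown in the simpleT5 README — `from_pretrained` plus `train`; the DataFrames here are illustrative placeholders, and real use needs `source_text`/`target_text` columns with actual data):

```python
# Sketch of simpleT5's advertised workflow. Requires: pip install simplet5
# The toy DataFrames below are placeholders, not real training data.
import pandas as pd
from simplet5 import SimpleT5

train_df = pd.DataFrame({
    "source_text": ["translate English to German: Hello"],
    "target_text": ["Hallo"],
})
eval_df = train_df.copy()

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")  # load a base T5
model.train(train_df=train_df, eval_df=eval_df, max_epochs=1)  # fine-tune
```

After training, `model.predict("translate English to German: ...")` returns the generated target text, which is how the CodeT5 experiment quoted above would be evaluated qualitatively.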
reformer-pytorch
[D] How to do Long Text ( > 10k tokens) Summarization?
The lucidrains implementation of Reformer can handle tens of thousands of tokens on Google Colab (with batch size 1).
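A minimal sketch of what that looks like with lucidrains' `reformer-pytorch` (constructor arguments follow the repo README; the specific hyperparameters here are illustrative assumptions, and `max_seq_len` is what lets a single forward pass cover tens of thousands of tokens):

```python
# Sketch of a long-context Reformer language model.
# Requires: pip install torch reformer-pytorch
import torch
from reformer_pytorch import ReformerLM

model = ReformerLM(
    num_tokens=20000,    # vocabulary size (illustrative)
    dim=512,
    depth=6,
    heads=8,
    max_seq_len=16384,   # a single 16k-token context window
    causal=True,         # autoregressive LM, as used for summarization-style decoding
)

x = torch.randint(0, 20000, (1, 16384))  # batch size 1, as on free Colab
logits = model(x)                        # per-token vocabulary logits
```

LSH attention keeps memory roughly O(n log n) instead of O(n^2), which is why batch size 1 at 16k+ tokens fits on a Colab GPU.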
[R] How to go about non-reproducible research?
This is what I call great code : https://github.com/lucidrains/reformer-pytorch
What are some alternatives?
datatap-python - Focus on Algorithm Design, Not on Data Wrangling
vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
ModelZoo.pytorch - Hands on Imagenet training. Unofficial ModelZoo project on Pytorch. MobileNetV3 Top1 75.64🌟 GhostNet1.3x 75.78🌟
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
frame-semantic-transformer - Frame Semantic Parser based on T5 and FrameNet
Fast-Transformer - An implementation of Fastformer: Additive Attention Can Be All You Need, a Transformer Variant in TensorFlow
KeyPhraseTransformer - KeyPhraseTransformer lets you quickly extract key phrases, topics, themes from your text data with T5 transformer | Keyphrase extraction | Keyword extraction
LSTM-FCN - Codebase for the paper LSTM Fully Convolutional Networks for Time Series Classification
TencentPretrain - Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Conformer - An implementation of Conformer: Convolution-augmented Transformer for Speech Recognition, a Transformer Variant in TensorFlow/Keras
fastT5 - ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
DeepPoseKit - a toolkit for pose estimation using deep learning