simpleT5
simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗, letting you quickly train your T5 models. (by Shivanandroy)
fastT5
⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. (by Ki6an)
| | simpleT5 | fastT5 |
|---|---|---|
| Mentions | 2 | 5 |
| Stars | 381 | 540 |
| Growth | - | - |
| Activity | 2.5 | 0.0 |
| Last Commit | 12 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleT5
Posts with mentions or reviews of simpleT5. We have used some of these posts to build our list of alternatives and similar projects.
- Transformers: How to compare performance to base model?
Currently I just took ~42000 samples and trained a translation task directly on codeT5 with https://github.com/Shivanandroy/simpleT5. Validation loss and at least the qualitative results are not too bad. I'm now going to try to compare it to the base codeT5 model with the *.loss function as suggested above.
- [P] SimpleT5: Train T5 models in just 3 lines of code
🌟GitHub: https://github.com/Shivanandroy/simpleT5 🌟Medium: https://snrspeaks.medium.com/simplet5-train-t5-models-in-just-3-lines-of-code-by-shivanand-roy-2021-354df5ae46ba 🌟Colab Notebook: https://colab.research.google.com/drive/1JZ8v9L0w0Ai3WbibTeuvYlytn0uHMP6O?usp=sharing
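The "3 lines" refer to simpleT5's high-level wrapper API: load a base model, call `train`, then `predict`. A minimal sketch of that pattern, based on the project's README conventions (`source_text`/`target_text` DataFrame columns); treat the exact argument names as an assumption, not a verified signature:

```python
# Hedged sketch of the simpleT5 workflow described above.
# Assumes `train_df` / `eval_df` are pandas DataFrames with
# "source_text" and "target_text" columns, per the project's README.
from simplet5 import SimpleT5

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
model.train(
    train_df=train_df,
    eval_df=eval_df,
    source_max_token_len=128,
    target_max_token_len=50,
    batch_size=8,
    max_epochs=3,
)

# After training, load the best checkpoint and run inference:
# model.load_model("t5", "path/to/best-checkpoint", use_gpu=False)
# model.predict("summarize: ...")
```

The wrapper hides the PyTorch Lightning trainer and tokenization details, which is what makes the three-line claim possible.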
fastT5
Posts with mentions or reviews of fastT5. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-22.
- Speeding up T5
I've tried https://github.com/Ki6an/fastT5 but it works with CPU only.
- Convert Pegasus model to ONNX
I am working on a project where I fine-tuned a Pegasus model on the Reddit dataset. Now, I need to convert the fine-tuned model to ONNX for the deployment stage. I have followed this guide from Huggingface to convert to the ONNX model for unsupported architectures. I got it done, but the ONNX model can't generate text. It turned out that Pegasus is an encoder-decoder model, and most guides cover either encoder-only models (e.g. BERT) or decoder-only models (e.g. GPT-2). The only example I found of converting an encoder-decoder model to ONNX is here: https://github.com/Ki6an/fastT5.
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Microsoft Onnx Runtime T5 export tool / FastT5: to support caching, they export the decoder twice, once with cache and once without (for the first generated token). This doubles the memory footprint, which makes the solution difficult to use for large transformer models.
- Conceptually, what are the "Past key values" in the T5 Decoder?
Here is the fastT5 model code for reference: https://github.com/Ki6an/fastT5/blob/master/fastT5/onnx_models.py
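To make the "past key values" idea concrete: they are the cached key/value projections of already-generated positions, so each decoding step only projects the newest token instead of recomputing attention inputs for the whole prefix. A toy single-head numpy sketch of the caching idea (all names are hypothetical; this illustrates the mechanism, not fastT5's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy head dimension

def attend(q, K, V):
    # Scaled dot-product attention: one query over all cached positions.
    scores = (K @ q) / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

steps = 5
xs = rng.normal(size=(steps, d))             # toy decoder inputs, one per step
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# Decoding WITH a cache: each step appends one new key/value pair.
# The growing cached_K / cached_V lists play the role of "past key values".
cached_K, cached_V, cached_out = [], [], []
for x in xs:
    cached_K.append(x @ Wk)
    cached_V.append(x @ Wv)
    cached_out.append(attend(x @ Wq, np.array(cached_K), np.array(cached_V)))

# Decoding WITHOUT a cache: recompute every key/value at every step.
nocache_out = []
for t in range(steps):
    K = np.array([x @ Wk for x in xs[: t + 1]])
    V = np.array([x @ Wv for x in xs[: t + 1]])
    nocache_out.append(attend(xs[t] @ Wq, K, V))
```

Both loops produce identical outputs; the cached version just avoids the repeated `Wk`/`Wv` projections, which is why exporting an ONNX decoder that accepts past key values speeds up generation.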
- [P] Boost T5 models' speed up to 5x & reduce the model size by 3x using fastT5.
For more information on the project, refer to the repository here.
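For orientation, fastT5's README centres on a single entry point that exports the encoder and decoder to quantized ONNX and returns a `generate()`-compatible wrapper. A hedged sketch of that usage (function name and flow as recalled from the repository; verify against the README before relying on it):

```python
# Hedged sketch of fastT5 usage, assuming the export_and_get_onnx_model
# entry point from the project's README.
from transformers import AutoTokenizer
from fastT5 import export_and_get_onnx_model

model_name = "t5-small"
# Exports encoder/decoder to ONNX, quantizes them, and returns a
# wrapper that supports HuggingFace-style generation.
model = export_and_get_onnx_model(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

tokens = tokenizer("translate English to French: Hello", return_tensors="pt")
out = model.generate(
    input_ids=tokens["input_ids"],
    attention_mask=tokens["attention_mask"],
    num_beams=2,
)
print(tokenizer.decode(out.squeeze(), skip_special_tokens=True))
```

Note the CPU-only caveat mentioned in the "Speeding up T5" post above: the exported ONNX models target CPU inference.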