parallelformers vs fastT5
| | parallelformers | fastT5 |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 748 | 540 |
| Growth | 0.8% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
parallelformers
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
  What about integrating with https://github.com/tunib-ai/parallelformers?
- How to speed up inference of your Transformer-based NLP models?
  Check out the parallelformers library too; it's in active development, and I've used it successfully. https://github.com/tunib-ai/parallelformers
- [P] Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
  Hello, I am writing to inform you about the release of Parallelformers (https://github.com/tunib-ai/parallelformers), a model parallelization library from TUNiB. Parallelformers is a toolkit that supports inference parallelism for 68 models in Huggingface Transformers with one line of code.
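As a rough illustration of the one-line usage described in the announcement above, here is a minimal sketch following the parallelformers README; exact arguments such as num_gpus, fp16, and verbose may differ between versions.

```python
# Minimal sketch of parallelformers' one-line usage, following the
# project's README; exact arguments may vary between versions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from parallelformers import parallelize

model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
tokenizer = AutoTokenizer.from_pretrained("t5-large")

# Split the model across 2 GPUs for inference; afterwards the usual
# Transformers API (generate, forward) is used unchanged.
parallelize(model, num_gpus=2, fp16=True, verbose="detail")

inputs = tokenizer("translate English to German: Hello, world!",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```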
fastT5
- Speeding up T5
  I've tried https://github.com/Ki6an/fastT5, but it works with CPU only.
- Convert Pegasus model to ONNX
  I am working on a project where I fine-tuned a Pegasus model on the Reddit dataset. Now I need to convert the fine-tuned model to ONNX for the deployment stage. I followed the Huggingface guide for converting unsupported architectures to ONNX. I got it done, but the ONNX model can't generate text. It turned out that Pegasus is an encoder-decoder model, and most guides cover either encoder-only models (e.g. BERT) or decoder-only models (e.g. GPT2). The only example I found of converting an encoder-decoder model to ONNX is here: https://github.com/Ki6an/fastT5.
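For readers hitting the same problem, a minimal sketch of the separate-export idea referenced above: the encoder is exported to ONNX on its own, wrapped so the exporter sees a plain tensor output. fastT5 applies the same approach to the decoder, with extra handling for cached key/values; the EncoderWrapper name here is illustrative, not part of fastT5.

```python
# Illustrative sketch: exporting the encoder of an encoder-decoder model
# to ONNX on its own. fastT5 follows the same idea, exporting the decoder
# separately with extra handling for cached key/values.
# EncoderWrapper is a hypothetical helper, not part of fastT5.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

class EncoderWrapper(torch.nn.Module):
    """Return a plain tensor so torch.onnx.export sees a simple output."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, input_ids, attention_mask):
        return self.encoder(input_ids=input_ids,
                            attention_mask=attention_mask).last_hidden_state

model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()
tokenizer = T5Tokenizer.from_pretrained("t5-small")
sample = tokenizer("translate English to German: hello", return_tensors="pt")

torch.onnx.export(
    EncoderWrapper(model.encoder),
    (sample["input_ids"], sample["attention_mask"]),
    "t5_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["hidden_states"],
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"},
                  "attention_mask": {0: "batch", 1: "sequence"},
                  "hidden_states": {0: "batch", 1: "sequence"}},
    opset_version=13,
)
```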
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
  Microsoft ONNX Runtime T5 export tool / fastT5: to support caching, it exports the decoder twice, once with cache and once without (for the first generated token). So the memory footprint is doubled, which makes the solution difficult to use for these large transformer models.
- Conceptually, what are the "Past key values" in the T5 Decoder?
  Here is the fastT5 model code for reference: https://github.com/Ki6an/fastT5/blob/master/fastT5/onnx_models.py
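For intuition, here is a minimal greedy-decoding sketch in plain Transformers (not fastT5's ONNX graph) showing what the past key values are: cached decoder keys/values from earlier positions, which let each step feed only the single newest token to the decoder.

```python
# Minimal greedy-decoding sketch (plain Transformers, not fastT5's ONNX
# graph) showing what past_key_values are: cached decoder keys/values,
# so each step only needs to process the single newest token.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()
tokenizer = T5Tokenizer.from_pretrained("t5-small")

enc = tokenizer("translate English to German: the house is big.",
                return_tensors="pt")
encoder_outputs = model.encoder(**enc)

decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
past_key_values = None
generated = []

with torch.no_grad():
    for _ in range(32):
        out = model(encoder_outputs=encoder_outputs,
                    attention_mask=enc["attention_mask"],
                    decoder_input_ids=decoder_input_ids,
                    past_key_values=past_key_values,
                    use_cache=True)
        past_key_values = out.past_key_values  # reuse cached K/V next step
        next_token = out.logits[:, -1, :].argmax(-1, keepdim=True)
        if next_token.item() == model.config.eos_token_id:
            break
        generated.append(next_token.item())
        decoder_input_ids = next_token         # feed only the new token

print(tokenizer.decode(generated, skip_special_tokens=True))
```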
- [P] Boost T5 models' speed up to 5x & reduce the model size by 3x using fastT5.
  For more information on the project, refer to the repository: https://github.com/Ki6an/fastT5
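The usage pattern from the fastT5 README looks roughly like this: export_and_get_onnx_model exports the model to ONNX and quantizes it, after which generation works through the familiar Transformers interface. Details may vary across fastT5 versions.

```python
# Usage sketch based on the fastT5 README: export T5 to quantized ONNX,
# then generate with the same interface as a regular Transformers model.
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model_name = "t5-small"
model = export_and_get_onnx_model(model_name)  # exports + quantizes ONNX
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "translate English to French: The universe is a dark forest."
inputs = tokenizer(text, return_tensors="pt")
tokens = model.generate(input_ids=inputs["input_ids"],
                        attention_mask=inputs["attention_mask"],
                        num_beams=2)
print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
```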
What are some alternatives?
FasterTransformer - Transformer-related optimization, including BERT and GPT
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
mt5-M2M-comparison - Comparing M2M and mT5 on rare language pairs, blog post: https://medium.com/@abdessalemboukil/comparing-facebooks-m2m-to-mt5-in-low-resources-translation-english-yoruba-ef56624d2b75
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
json-translate - Translate json files with DeepL or AWS
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
frame-semantic-transformer - Frame Semantic Parser based on T5 and FrameNet
OpenSeeFace - Robust realtime face and facial landmark tracking on CPU with Unity integration
sparktorch - Train and run Pytorch models on Apache Spark.