fastT5 Alternatives
Similar projects and alternatives to fastT5
- TensorRT: NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
- transformer-deploy: Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
- mt5-M2M-comparison: Comparing M2M and mT5 on rare language pairs; blog post: https://medium.com/@abdessalemboukil/comparing-facebooks-m2m-to-mt5-in-low-resources-translation-english-yoruba-ef56624d2b75
fastT5 reviews and mentions
- Speeding up T5: I've tried https://github.com/Ki6an/fastT5, but it works on CPU only.
- Convert Pegasus model to ONNX: I am working on a project where I fine-tuned a Pegasus model on the Reddit dataset. Now I need to convert the fine-tuned model to ONNX for deployment. I followed the Hugging Face guide for converting unsupported architectures to ONNX. I got the export done, but the ONNX model couldn't generate text. It turned out that Pegasus is an encoder-decoder model, while most guides cover either encoder-only models (e.g. BERT) or decoder-only models (e.g. GPT-2). The only example I found of converting an encoder-decoder model to ONNX is here: https://github.com/Ki6an/fastT5.
- [P] What we learned by making T5-large 2X faster than PyTorch (and any autoregressive transformer): Microsoft's ONNX Runtime T5 export tool and fastT5 both support caching by exporting the decoder twice, once with cache and once without (for the first generated token). This doubles the memory footprint, which makes the approach difficult to use for large transformer models.
- Conceptually, what are the "past key values" in the T5 decoder? Here is the fastT5 model code for reference: https://github.com/Ki6an/fastT5/blob/master/fastT5/onnx_models.py
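The "past key values" are the cached self-attention keys and values for tokens already generated, so that each decoding step only computes the key/value projection for the new token instead of re-running attention over the whole prefix. A toy single-head NumPy sketch of the idea (the names `attend`, `K_cache`, etc. are illustrative, not fastT5's API):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # head dimension

def attend(q, K, V):
    # Scaled dot-product attention for one query over the cached keys/values.
    scores = q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

# Projection weights for a toy single-head self-attention layer.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

tokens = rng.normal(size=(5, d))  # embeddings of 5 decoder steps

# Incremental decoding: keep "past key values" and append one step at a time.
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
incremental = []
for x in tokens:
    K_cache = np.vstack([K_cache, x @ Wk])  # only the NEW key is computed
    V_cache = np.vstack([V_cache, x @ Wv])  # only the NEW value is computed
    incremental.append(attend(x @ Wq, K_cache, V_cache))

# Recomputing keys/values for the full prefix gives the same result
# for the last position, just with more work per step.
K_full, V_full = tokens @ Wk, tokens @ Wv
full_last = attend(tokens[-1] @ Wq, K_full, V_full)
assert np.allclose(incremental[-1], full_last)
```

This is why fastT5 (and the ONNX Runtime export tool, as noted above) exports a cache-aware decoder: the cached path takes the past keys/values as extra inputs and returns the updated cache as extra outputs.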
- [P] Boost T5 models' speed up to 5x and reduce model size by 3x using fastT5: For more information on the project, refer to the repository at https://github.com/Ki6an/fastT5.
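Per the project's README, the exported model is meant to drop into the usual transformers `generate` flow. A minimal sketch (the checkpoint name and prompt are placeholders; exporting downloads the model and writes ONNX files locally):

```python
from transformers import AutoTokenizer
from fastT5 import export_and_get_onnx_model

model_name = "t5-small"  # any T5-family checkpoint
# Exports the encoder and the (cached + uncached) decoder to ONNX
# and returns a generate()-compatible wrapper.
model = export_and_get_onnx_model(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer("translate English to French: hello", return_tensors="pt")
tokens = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=2,
)
print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
```

The claimed speed and size gains come from running the quantized ONNX graphs through ONNX Runtime rather than eager PyTorch.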
Stats
Ki6an/fastT5 is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of fastT5 is Python.