fastT5 vs onnxruntime

| | fastT5 | onnxruntime |
|---|---|---|
| Mentions | 5 | 58 |
| Stars | 540 | 12,960 |
| Growth | - | 4.4% |
| Activity | 0.0 | 10.0 |
| Latest commit | about 1 year ago | 4 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fastT5 mentions
- Speeding up T5
  I've tried https://github.com/Ki6an/fastT5, but it only works on CPU.
- Convert Pegasus model to ONNX
  I am working on a project where I fine-tuned a Pegasus model on the Reddit dataset. Now I need to convert the fine-tuned model to ONNX for deployment. I followed the Hugging Face guide for converting unsupported architectures to ONNX and got it done, but the ONNX model couldn't generate text. It turned out that Pegasus is an encoder-decoder model, and most guides cover either encoder-only models (e.g. BERT) or decoder-only models (e.g. GPT-2). The only example I found of converting an encoder-decoder model to ONNX is https://github.com/Ki6an/fastT5; a sketch of the same idea follows.
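  As a rough illustration of what fastT5 does, here is a hedged sketch of exporting an encoder-decoder model such as Pegasus as two ONNX graphs. The checkpoint name is a placeholder, the wrapper modules are hypothetical helpers (not fastT5 code), and the decoder is exported without the key/value cache for brevity.

  ```python
  # Sketch: export an encoder-decoder (Pegasus) as two ONNX graphs.
  # Assumptions: placeholder checkpoint; no past-key-value cache;
  # Pegasus's final_logits_bias is omitted for brevity.
  import torch
  from transformers import PegasusForConditionalGeneration

  model = PegasusForConditionalGeneration.from_pretrained(
      "google/pegasus-xsum").eval()  # substitute your fine-tuned model

  class Encoder(torch.nn.Module):
      """Return a plain tensor so ONNX tracing stays simple."""
      def __init__(self, m):
          super().__init__()
          self.m = m

      def forward(self, input_ids, attention_mask):
          return self.m(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state

  class Decoder(torch.nn.Module):
      """Decoder plus LM head; emits next-token logits."""
      def __init__(self, m):
          super().__init__()
          self.decoder, self.lm_head = m.get_decoder(), m.lm_head

      def forward(self, decoder_input_ids, encoder_hidden, attention_mask):
          h = self.decoder(input_ids=decoder_input_ids,
                           encoder_hidden_states=encoder_hidden,
                           encoder_attention_mask=attention_mask,
                           use_cache=False).last_hidden_state
          return self.lm_head(h)

  ids = torch.ones(1, 8, dtype=torch.long)   # dummy inputs for tracing
  mask = torch.ones(1, 8, dtype=torch.long)
  dyn = {0: "batch", 1: "seq"}

  torch.onnx.export(
      Encoder(model.get_encoder()), (ids, mask), "pegasus_encoder.onnx",
      input_names=["input_ids", "attention_mask"], output_names=["hidden"],
      dynamic_axes={"input_ids": dyn, "attention_mask": dyn, "hidden": dyn},
      opset_version=14)

  hidden = torch.zeros(1, 8, model.config.d_model)
  torch.onnx.export(
      Decoder(model), (ids[:, :1], hidden, mask), "pegasus_decoder.onnx",
      input_names=["decoder_input_ids", "encoder_hidden", "attention_mask"],
      output_names=["logits"],
      dynamic_axes={"decoder_input_ids": dyn, "encoder_hidden": dyn,
                    "attention_mask": dyn, "logits": dyn},
      opset_version=14)
  ```

  Generation then loops: run the encoder once, then call the decoder graph with the growing `decoder_input_ids` (or, as fastT5 does, with cached key/values) until an end-of-sequence token appears.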
- [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
  Microsoft's ONNX Runtime T5 export tool / fastT5: to support caching, the decoder part is exported twice, once with cache and once without (for the first generated token). So the memory footprint is doubled, which makes the solution difficult to use for these large transformer models.
- Conceptually, what are the "Past key values" in the T5 Decoder?
  Here is the fastT5 model code for reference: https://github.com/Ki6an/fastT5/blob/master/fastT5/onnx_models.py. A short demonstration follows.
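  To make that concrete, here is a hedged sketch using plain transformers (not fastT5). It assumes the legacy tuple cache format: each decoder layer caches four tensors, the self-attention keys/values for everything decoded so far plus the cross-attention keys/values computed once from the encoder output.

  ```python
  # Sketch: inspect T5's past key values with plain transformers.
  # Assumes the legacy tuple cache format, not newer Cache objects.
  import torch
  from transformers import T5ForConditionalGeneration, T5Tokenizer

  tok = T5Tokenizer.from_pretrained("t5-small")
  model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

  enc = tok("translate English to German: Hello", return_tensors="pt")
  start = torch.tensor([[model.config.decoder_start_token_id]])

  out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"],
              decoder_input_ids=start, use_cache=True)

  # One 4-tuple per decoder layer:
  # (self_attn_key, self_attn_value, cross_attn_key, cross_attn_value)
  for name, t in zip(["self_k", "self_v", "cross_k", "cross_v"],
                     out.past_key_values[0]):
      print(name, tuple(t.shape))  # (batch, heads, seq_len, head_dim)

  # The next step feeds only the newest token; the cache stands in for the rest.
  next_token = out.logits[:, -1].argmax(-1, keepdim=True)
  out2 = model(encoder_outputs=(out.encoder_last_hidden_state,),
               attention_mask=enc["attention_mask"],
               decoder_input_ids=next_token,
               past_key_values=out.past_key_values,
               use_cache=True)
  ```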
- [P] Boost T5 models' speed up to 5x & reduce the model size by 3x using fastT5
  For more information on the project, refer to the repository here. A usage sketch follows.
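  A minimal sketch of typical usage, assuming fastT5's `export_and_get_onnx_model` entry point (the model name and prompt are illustrative):

  ```python
  # Sketch: export a T5 model to quantized ONNX with fastT5 and generate text.
  from fastT5 import export_and_get_onnx_model
  from transformers import AutoTokenizer

  model_name = "t5-small"
  model = export_and_get_onnx_model(model_name)  # export, quantize, load

  tokenizer = AutoTokenizer.from_pretrained(model_name)
  batch = tokenizer("translate English to French: The universe is vast.",
                    return_tensors="pt")
  tokens = model.generate(input_ids=batch["input_ids"],
                          attention_mask=batch["attention_mask"],
                          num_beams=2)
  print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
  ```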
onnxruntime mentions
- Machine Learning with PHP
  ONNX Runtime: a cross-platform, high-performance ML inferencing and training accelerator.
- AI Inference now available in Supabase Edge Functions
  Embedding generation uses the ONNX runtime under the hood. This is a cross-platform inferencing library that supports multiple execution providers, from CPU to specialized GPUs; a sketch of provider selection follows.
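  In the Python API, provider selection looks roughly like this ("model.onnx" is a placeholder path):

  ```python
  # Sketch: choose execution providers in onnxruntime.
  import onnxruntime as ort

  session = ort.InferenceSession(
      "model.onnx",
      # Tried in order; falls back to CPU if the CUDA provider is unavailable.
      providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
  )
  print(session.get_providers())  # providers actually selected
  ```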
- Deep Learning in JavaScript
  tfjs is dead, judging by the commit history. The standard now is to convert PyTorch models to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser.
- FLaNK Stack 05 Feb 2024
- Vcc – The Vulkan Clang Compiler
  slang [2] has the potential, but its metaprogramming is not as strong as C++'s, and existing libraries cannot be used.
  The above conclusion is drawn from my work on https://github.com/microsoft/onnxruntime/tree/dev/opencl; working with those drivers and JIT compilers was a pure nightmare. Hopefully Vcc takes compute shaders more seriously.
  [1]: https://www.circle-lang.org/
- Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
  I did. It depends what you want: for an overview of how ONNX Runtime works, Microsoft has a bunch of material on https://onnxruntime.ai, but the Java content there is a bit lacking, as I've not had time to write much. Eventually I'll probably write something similar to the C# Stable Diffusion tutorial they have on there, but for the Java API.
  For writing ONNX models from Java, we added an ONNX export system to Tribuo in 2022, which anything on the JVM can use to export ONNX models more easily than writing the protobuf directly. Tribuo doesn't have full coverage of the ONNX spec, but we're happy to accept PRs to expand it; otherwise it'll fill out as we need it.
- Mamba-Chat: A Chat LLM based on State Space Models
- VectorDB: Vector Database Built by Kagi Search
  What about models besides GPT? Most of the popular vector-encoding models don't use this architecture.
  If you really don't want PyTorch/Transformers, you could consider exporting your models to ONNX (https://github.com/microsoft/onnxruntime); a sketch follows.
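  A hedged sketch of that route: export a Hugging Face encoder to ONNX once, then embed text with onnxruntime alone. The checkpoint name is illustrative, and the unmasked mean pooling is a simplification.

  ```python
  # Sketch: export a sentence encoder to ONNX and embed text with onnxruntime.
  import torch
  import onnxruntime as ort
  from transformers import AutoModel, AutoTokenizer

  name = "sentence-transformers/all-MiniLM-L6-v2"  # assumed checkpoint
  tok = AutoTokenizer.from_pretrained(name)
  base = AutoModel.from_pretrained(name).eval()

  class Encoder(torch.nn.Module):
      """Return a plain tensor so ONNX tracing stays simple."""
      def __init__(self, m):
          super().__init__()
          self.m = m

      def forward(self, input_ids, attention_mask):
          return self.m(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state

  ex = tok("vector databases", return_tensors="pt")
  dyn = {0: "batch", 1: "seq"}
  torch.onnx.export(
      Encoder(base), (ex["input_ids"], ex["attention_mask"]), "encoder.onnx",
      input_names=["input_ids", "attention_mask"],
      output_names=["last_hidden_state"],
      dynamic_axes={"input_ids": dyn, "attention_mask": dyn,
                    "last_hidden_state": dyn},
      opset_version=14)

  sess = ort.InferenceSession("encoder.onnx", providers=["CPUExecutionProvider"])
  hidden = sess.run(None, {"input_ids": ex["input_ids"].numpy(),
                           "attention_mask": ex["attention_mask"].numpy()})[0]
  print(hidden.mean(axis=1).shape)  # naive mean-pooled sentence embedding
  ```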
- ONNX runtime: Cross-platform accelerated machine learning
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
What are some alternatives?
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
onnx - Open standard for machine learning interoperability
mt5-M2M-comparison - Comparing M2M and mT5 on rare language pairs, blog post: https://medium.com/@abdessalemboukil/comparing-facebooks-m2m-to-mt5-in-low-resources-translation-english-yoruba-ef56624d2b75
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
json-translate - Translate json files with DeepL or AWS
onnx-simplifier - Simplify your onnx model
frame-semantic-transformer - Frame Semantic Parser based on T5 and FrameNet
ONNX-YOLOv7-Object-Detection - Python scripts performing object detection using the YOLOv7 model in ONNX.
OpenSeeFace - Robust realtime face and facial landmark tracking on CPU with Unity integration
onnx-tensorflow - Tensorflow Backend for ONNX
FasterTransformer - Transformer related optimization, including BERT, GPT
MLflow - Open source platform for the machine learning lifecycle