fastT5 vs OpenSeeFace

| | fastT5 | OpenSeeFace |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 540 | 1,318 |
| Growth | - | - |
| Activity | 0.0 | 4.2 |
| Latest commit | about 1 year ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fastT5
-
Speeding up T5
I've tried https://github.com/Ki6an/fastT5, but it works on CPU only.
-
Convert Pegasus model to ONNX
I am working on a project where I fine-tuned a Pegasus model on the Reddit dataset. Now I need to convert the fine-tuned model to ONNX for the deployment stage. I followed the Hugging Face guide for converting unsupported architectures to ONNX. The export succeeded, but the ONNX model couldn't generate text. It turned out that Pegasus is an encoder-decoder model, while most guides cover either encoder-only models (e.g. BERT) or decoder-only models (e.g. GPT-2). The only example I found of converting an encoder-decoder model to ONNX is here: https://github.com/Ki6an/fastT5.
-
[P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Microsoft's ONNX Runtime T5 export tool / fastT5: to support caching, they export the decoder part twice, once with cache and once without (for the first generated token). This doubles the memory footprint, which makes the approach difficult to use for large transformer models.
-
Conceptually, what are the "Past key values" in the T5 Decoder?
Here is the fastT5 model code for reference: https://github.com/Ki6an/fastT5/blob/master/fastT5/onnx_models.py
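The idea behind past key values can be shown with a toy, framework-free sketch. Everything here (the "projection" and "attention" math, all function names) is a simplified stand-in, not fastT5's real implementation: at each decoding step, the keys and values already computed for earlier tokens are cached and reused, so only the newest token needs projecting. The two decoder graphs mentioned above correspond to the first call (no cache yet) and every later call (cache passed in).

```python
# Toy illustration of past key/value caching in autoregressive decoding.
# The arithmetic is a stand-in for real attention, chosen only so the
# cached and uncached paths can be compared exactly.

def project(token):
    # Stand-in for a layer's key/value projections of one token.
    return token * 2, token + 1

def attend(query, keys, values):
    # Stand-in for attention over all cached positions.
    return sum(k * query + v for k, v in zip(keys, values))

def decode_no_cache(tokens):
    # Recompute every key/value at every step (the "init" decoder graph).
    outputs = []
    for step in range(1, len(tokens) + 1):
        keys, values = zip(*(project(t) for t in tokens[:step]))
        outputs.append(attend(tokens[step - 1], keys, values))
    return outputs

def decode_with_cache(tokens):
    # Keep past keys/values and extend them by one entry per step
    # (the "with cache" decoder graph).
    past_keys, past_values, outputs = [], [], []
    for t in tokens:
        k, v = project(t)  # only the newest token is projected
        past_keys.append(k)
        past_values.append(v)
        outputs.append(attend(t, past_keys, past_values))
    return outputs

assert decode_no_cache([3, 1, 4, 1, 5]) == decode_with_cache([3, 1, 4, 1, 5])
```

Both paths produce identical outputs; the cached path just avoids re-projecting earlier tokens, which is why an ONNX export needs a graph variant whose inputs include the past key values.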
-
[P] boost T5 models speed up to 5x & reduce the model size by 3x using fastT5.
For more information on the project, refer to the repository here.
OpenSeeFace
-
Getting face feature pose statistics
I got something working by modifying OpenSeeFace, so that's one option; I might try to rewrite it in a compiled language, but I'd like to look at the other options first.
-
This may be a silly question but can I hire someone to make me a customized avatar for vr chat?
Lastly, face tracking is either built in or uses a plugin device. You would also use OSC to manipulate blendshapes. I'd take a look at OpenSeeFace.
-
Any recommendations for VTuber setup on Linux? Ideally something that's completely open source.
The best/only options I found were basically the old standby of running closed-source applications under Wine. Even then the end result was rather incomplete... because many of the best tracking options are simply closed source. (Note: VSeeFace does offer an open-source library.)
-
Running OpenSeeFace on Linux with Python 3.10
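OpenSeeFace's tracker publishes its results over UDP, which is how other applications (such as VSeeFace) consume it. A minimal, stdlib-only sketch of a receiver is below; the launch command in the comment, the port number, and the flag names are assumptions for illustration (check the repository's `--help` output), and the actual binary packet layout is documented in the OpenSeeFace repo — a real consumer would `struct.unpack` it rather than just measure it.

```python
# Minimal UDP listener for tracker output, e.g. from something like:
#   python facetracker.py --ip 127.0.0.1 --port 11573
# (command, flags, and port are illustrative assumptions; see the repo.)
import socket

def receive_packets(host="127.0.0.1", port=11573, max_packets=1):
    """Receive raw UDP datagrams and return their sizes in bytes."""
    sizes = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        for _ in range(max_packets):
            data, _addr = sock.recvfrom(65535)
            # A real consumer would struct.unpack the tracking fields
            # (timestamp, face id, landmarks, ...) per the repo's format.
            sizes.append(len(data))
    return sizes
```

Because the transport is plain UDP, this works the same on Linux as anywhere else, which is part of why OpenSeeFace keeps coming up in Linux VTubing threads.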
-
Open map with gaze tracking for someone with paralysis
There are only a few libraries that come to mind, and they take a bit of work to get started. The MediaPipe Unity Plugin has eye tracking along with many other kinds of tracking (head, hands, body). OpenSeeFace has models that do head and eye tracking. This repo uses Unity's neural net inference library, Barracuda, to run a MediaPipe iris landmark model (I haven't personally tested it). I'm not sure how to translate eye landmarks into the position a player is looking at on a screen, though. Hopefully this list of libraries gets you on the right path!
-
I'm going to try to use VSeeFace again, is there a way to change how lip syncing works?
OpenSeeFace is open source. If you are using it with VSeeFace, you can just replace the Binary folder with your own build.
-
I'm making a renderer for facetracking data
It uses OpenSeeFace for facetracking and engine patches/vrm code from the V-Sekai team.
What are some alternatives?
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
openseeface-gd - A GUI for running OpenSeeFace.
mt5-M2M-comparison - Comparing M2M and mT5 on rare language pairs, blog post: https://medium.com/@abdessalemboukil/comparing-facebooks-m2m-to-mt5-in-low-resources-translation-english-yoruba-ef56624d2b75
kalidokit - Blendshape and kinematics calculator for Mediapipe/Tensorflow.js Face, Eyes, Pose, and Finger tracking models.
json-translate - Translate json files with DeepL or AWS
UniVRM - UniVRM is a glTF-based VRM format implementation for Unity. English: https://vrm.dev/en/ . Japanese: https://vrm.dev/
frame-semantic-transformer - Frame Semantic Parser based on T5 and FrameNet
vpuppr - VTuber application made with Godot 4
FasterTransformer - Transformer related optimization, including BERT, GPT
VTuber_Unity - Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!
sparktorch - Train and run Pytorch models on Apache Spark.
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀