A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents
1 project | reddit.com/r/MediaSynthesis | 25 Feb 2021
Any recommendations for VTuber setup on Linux? Ideally something that's completely open source.
1 project | reddit.com/r/opensource | 4 Jul 2022
The best/only options I found were basically the old standby of Wine running closed-source applications. Even then the end result was rather incomplete... because many of the best tracking options are simply closed source. (Note: VSeeFace does offer an open source library.)
Running OpenSeeFace on Linux with python 3.10
3 projects | reddit.com/r/VirtualYoutubers | 22 Jun 2022
Open map with gaze tracking for someone with paralysis
3 projects | reddit.com/r/Unity3D | 11 Jun 2022
There are only a few libraries that come to mind, but they take a bit of work to get started. MediaPipe Unity Plugin has eye tracking along with several other kinds of tracking (head, hands, body). OpenSeeFace has models that do head and eye tracking. This repo uses Unity's neural-net inference library, Barracuda, to run a MediaPipe iris landmark model (I haven't personally tested this library). I'm not sure how to translate eye landmarks into the position a player is looking at on a screen, though. Hopefully this list of libraries gets you on the right path!
I'm going to try to use VSeeFace again, is there a way to change how lip syncing works?
1 project | reddit.com/r/VirtualYoutubers | 1 Mar 2022
OpenSeeFace is open source. If you are using it with VSeeFace, you can just replace the Binary folder with your own build.
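OpenSeeFace's tracker normally streams its tracking data to a client (such as VSeeFace) as UDP datagrams. As a minimal sketch of consuming that stream yourself, the snippet below just waits for one datagram; the default address 127.0.0.1:11573 and the existence of a binary packet are assumptions here, so check the OpenSeeFace README for the actual defaults and packet layout before parsing any fields.

```python
import socket

def recv_tracking_packet(host="127.0.0.1", port=11573, timeout=5.0):
    """Wait for a single UDP datagram from a running tracker.

    Returns the raw packet bytes, or None if nothing arrives before
    the timeout (e.g. facetracker.py isn't running).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(timeout)
    try:
        data, _ = sock.recvfrom(65535)  # one tracking packet
        return data
    except socket.timeout:
        return None
    finally:
        sock.close()

if __name__ == "__main__":
    packet = recv_tracking_packet()
    if packet is None:
        print("no tracking data received; is the tracker running?")
    else:
        print(f"received {len(packet)} bytes of tracking data")
```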
I'm making a renderer for facetracking data
3 projects | reddit.com/r/godot | 23 Feb 2021
It uses OpenSeeFace for facetracking and engine patches/vrm code from the V-Sekai team.
What are some alternatives?
openseeface-gd - A Godot 3.x addon for OpenSeeFace
UniVRM - UniVRM is a glTF-based VRM format implementation for Unity. English: https://vrm.dev/en/ . Japanese: https://vrm.dev/
kalidokit - Blendshape and kinematics calculator for Mediapipe/Tensorflow.js Face, Eyes, Pose, and Finger tracking models.
fastT5 - ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
Towards-Explainable-AI-System-for-Traffic-Sign-Recognition-and-Deployment-in-a-Simulated-Environment - This project is part of the CS course 'Systems Engineering Meets Life Sciences I' at Goethe University Frankfurt. In this Computer Vision project, we present our first attempt at tackling the problem of traffic sign recognition using a systems engineering approach.
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
Insta-DM - Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)
facenet-pytorch - Pretrained Pytorch face detection (MTCNN) and facial recognition (InceptionResnet) models
VTuber_Unity - Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!
merged_depth - Monocular Depth Estimation - Weighted-average prediction from multiple pre-trained depth estimation models
vpuppr - VTuber application made with Godot 3.4