OpenSeeFace
kalidokit
| | OpenSeeFace | kalidokit |
|---|---|---|
| Mentions | 7 | 14 |
| Stars | 1,312 | 5,197 |
| Growth | - | - |
| Activity | 4.2 | 1.5 |
| Latest commit | 2 months ago | 9 months ago |
| Language | Python | TypeScript |
| License | BSD 2-clause "Simplified" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenSeeFace
- Getting face feature pose statistics
I got something working by modifying OpenSeeFace, so that's an option, and I might try to rewrite it in something compiled, but I'd like to look at the other options first.
- This may be a silly question, but can I hire someone to make me a customized avatar for VRChat?
Lastly, face tracking is either built in or uses a plugin device. You would also use OSC to manipulate blendshapes. I'd take a look at OpenSeeFace.
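To make the OSC idea concrete, here is a minimal sketch of encoding and sending an OSC message carrying one float, using only the standard library (in practice you'd likely use the `python-osc` package instead). The parameter path `/avatar/parameters/JawOpen` and port 9000 are illustrative assumptions; real paths and ports depend on your avatar and software settings.

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message with a single float32 argument.

    OSC pads the address and type-tag strings with NULs to a multiple
    of 4 bytes, then appends arguments in big-endian binary form.
    """
    def pad(s: bytes) -> bytes:
        return s + b"\x00" * (4 - len(s) % 4)  # always at least one NUL terminator
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical blendshape parameter name -- check your avatar's actual parameters.
msg = osc_message("/avatar/parameters/JawOpen", 0.5)

# Fire-and-forget UDP send to a local receiver; 9000 is a commonly used
# default OSC port, but verify it against your software's settings.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

Sending a stream of these messages at your tracker's frame rate is enough to drive a blendshape continuously.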
- Any recommendations for a VTuber setup on Linux? Ideally something that's completely open source.
The best/only options I found were basically the old standby of Wine running closed-source applications. Even then the end result was rather incomplete... because many of the best tracking options are simply closed source. (Note: VSeeFace does offer an open source library.)
- Running OpenSeeFace on Linux with python 3.10
- Open map with gaze tracking for someone with paralysis
There are only a few libraries that come to mind, and they take a bit of work to get started. The MediaPipe Unity Plugin has eye tracking along with many other kinds of tracking (head, hands, body). OpenSeeFace has models that do head and eye tracking. This repo uses Unity's neural net inference library, Barracuda, to run a MediaPipe iris landmark model (I haven't personally tested this library). I'm not sure how to translate eye landmarks to the position a player is looking at on a screen, though. Hopefully this list of libraries gets you on the right path!
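One simple way to bridge that last gap (landmarks to screen position) is a linear calibration: record the normalized iris coordinates while the user looks at the screen edges, then interpolate. This is only a sketch under that assumption; the calibration numbers below are made up, and real gaze estimation usually needs head-pose compensation on top of this.

```python
def gaze_to_screen(ix, iy, calib, screen_w=1920, screen_h=1080):
    """Map a normalized iris position to screen pixels by linear
    interpolation between calibrated extremes.

    `calib` holds the iris coordinates observed while the user looked
    at the screen edges: (left_x, right_x, top_y, bottom_y).
    """
    lx, rx, ty, by = calib
    # Normalize into [0, 1] relative to the calibrated range, clamped.
    u = min(max((ix - lx) / (rx - lx), 0.0), 1.0)
    v = min(max((iy - ty) / (by - ty), 0.0), 1.0)
    return u * screen_w, v * screen_h

# Example calibration (made-up numbers): iris x spans 0.40..0.60 across
# the screen, iris y spans 0.45..0.55 from top to bottom.
calib = (0.40, 0.60, 0.45, 0.55)
print(gaze_to_screen(0.50, 0.50, calib))  # roughly the screen center
```

Averaging the iris position over a few frames before mapping helps a lot, since raw landmark output jitters.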
- I'm going to try to use VSeeFace again; is there a way to change how lip syncing works?
OpenSeeFace is open source. If you are using it with VSeeFace, you can just replace the Binary folder with your own build.
- I'm making a renderer for face-tracking data
It uses OpenSeeFace for face tracking, and engine patches/VRM code from the V-Sekai team.
kalidokit
- Avatar Puppeteering with ThreeJS, ReadyPlayerMe, Kalidokit and MediaPipe
I'm trying to animate a ReadyPlayerMe avatar using ThreeJS and Kalidokit (or something else) with MediaPipe Holistic pose. Here is a working JSFiddle:
- Any idea how to stream Kalidoface data to Unreal Engine Live Link so we can use it to drive Metahuman? Or an alternative app for full body + face + fingers markerless mocap that works with UE Live Link?
Kalidoface is made by /u/YeeMachineDev and seems to be fully open source: https://github.com/yeemachine/kalidokit https://github.com/yeemachine/kalidoface-3d
- cute live2d
- How To Make VTuber Software?
In terms of software, what you linked uses optical tracking. The underlying tech is commonly called machine/computer vision. You can develop your own tracking software, but there are some open source solutions, like OpenCV, that make it faster to build apps. (You can use OpenCV with Python bindings to learn the basics somewhat quickly.) What you linked uses Kalidokit, an open source solution built on TensorFlow.js and Google's MediaPipe.
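The core of what a kit like Kalidokit does is turn raw face landmarks into rig parameters. Here is a toy Python sketch of that idea (Kalidokit itself is TypeScript): compute a mouth-open value from lip landmarks, normalized by inter-eye distance so the result doesn't change with camera distance. The landmark coordinates and the 0.5 max ratio below are illustrative assumptions, not Kalidokit's actual constants.

```python
import math

def dist(a, b):
    # Euclidean distance between two 2D landmark points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_open(upper_lip, lower_lip, left_eye, right_eye):
    """Scale-invariant mouth-open value in [0, 1].

    Divides the lip gap by the inter-eye distance so the value stays
    stable as the face moves toward or away from the camera, then
    rescales by a rough maximum ratio and clamps.
    """
    gap = dist(upper_lip, lower_lip)
    scale = dist(left_eye, right_eye)
    return min(gap / scale / 0.5, 1.0)

# Made-up normalized landmark coordinates for a closed and an open mouth:
closed = mouth_open((0.5, 0.60), (0.5, 0.61), (0.4, 0.4), (0.6, 0.4))
opened = mouth_open((0.5, 0.58), (0.5, 0.68), (0.4, 0.4), (0.6, 0.4))
print(closed, opened)
```

Feeding values like this into a model's blendshape weights each frame is essentially what the optical-tracking pipeline boils down to.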
- VTube Studio Pro question
Oh, my apologies! May I suggest Kalidoface then? https://kalidoface.com/ Free face and arm tracking! Plus they have a very friendly Discord for troubleshooting.
- I want to mod Kalidokit but I don't know where to start
[JS newbie here] I just discovered Kalidokit, a VTuber JS app by yeemachine, and I'm trying to figure out how I could play with its look and feel. Do you have any recommendations on how to proceed? In short, I don't know which files in the repo are relevant, how they are connected, or how to run the app on a local machine. Any help would be appreciated! Thanks.
- Cool (online) places for 2022
kalidokit
- [P] KalidoKit – Face, Pose, and Hand Tracking Kinematics
- "KalidoKit - Face, Pose, and Hand Tracking Kinematics" (in-browser animation of streamers)
What are some alternatives?
openseeface-gd - A GUI for running OpenSeeFace.
face-api.js - JavaScript API for face detection and face recognition in the browser and nodejs with tensorflow.js
UniVRM - UniVRM is a gltf-based VRM format implementation for Unity. English is here https://vrm.dev/en/ . 日本語 はこちら https://vrm.dev/
Hand-Gesture-Recognition
vpuppr - VTuber application made with Godot 4
ProsePainter
VTuber_Unity - Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!
gaze-detection - 👀 Use machine learning in JavaScript to detect eye movements and build gaze-controlled experiences.
fastT5 - ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
holo-schedule - One browser extension COVERs all scheduled and guerrilla livestreams.
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
whereami.js - Node.js module to predict indoor location using machine learning and WiFi information 📶