| | OpenSeeFace | mediapipe |
|---|---|---|
| Mentions | 7 | 49 |
| Stars | 1,327 | 25,688 |
| Growth | - | 1.9% |
| Activity | 4.2 | 9.9 |
| Latest Commit | 3 months ago | 5 days ago |
| Language | Python | C++ |
| License | BSD 2-clause "Simplified" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenSeeFace
- Getting face feature pose statistics
I got something working by modifying OpenSeeFace, so that's an option, and I might try to rewrite it in something compiled, but I'd like to look at the other options first.
- This may be a silly question, but can I hire someone to make me a customized avatar for VRChat?
Lastly, face tracking is either built in or uses a plugin device. You would also use OSC to manipulate blendshapes. I'd take a look at OpenSeeFace.
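As a rough sketch of the OSC-plus-blendshapes idea mentioned above: blendshape weights (and VRChat-style OSC avatar parameters) expect values in [0, 1], so raw tracking output usually needs clamping and rescaling first. The parameter path and the range below are made-up examples, not part of any quoted setup.

```python
# Hedged sketch: turning a raw tracking value into a blendshape weight.
# The OSC path "/avatar/parameters/MouthOpen" is a hypothetical example.

def to_blendshape_weight(value, lo, hi):
    """Clamp and rescale a raw tracking value into [0, 1], the range
    blendshape weights (and OSC avatar parameters) typically expect."""
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

# Sending the weight over OSC would look roughly like this
# (requires the third-party python-osc package, not shown running here):
#   from pythonosc.udp_client import SimpleUDPClient
#   client = SimpleUDPClient("127.0.0.1", 9000)
#   client.send_message("/avatar/parameters/MouthOpen",
#                       to_blendshape_weight(raw_mouth, 0.0, 10.0))
```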
- Any recommendations for a VTuber setup on Linux? Ideally something that's completely open source.
The best/only options I found were basically the old standby of Wine running closed-source applications. Even then, the end result was rather incomplete... because many of the best tracking options are simply closed source. (Note: VSeeFace does offer an open-source library.)
- Running OpenSeeFace on Linux with Python 3.10
- Open map with gaze tracking for someone with paralysis
There are only a few libraries that come to mind, but they take a bit of work to get started. MediaPipe Unity Plugin has eye tracking along with many other kinds of tracking (head, hands, body). OpenSeeFace has models that do head and eye tracking. This repo uses Unity's neural net inference library, Barracuda, to run a MediaPipe iris landmark model (I haven't personally tested this library). I'm not sure how to translate eye landmarks to the position a player is looking at on a screen, though. Hopefully this list of libraries gets you on the right path!
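As a rough illustration of the last point (turning eye landmarks into a screen position), here is a minimal sketch. It assumes you already have normalized iris landmarks, e.g. indices 468-477 from MediaPipe Face Mesh with `refine_landmarks=True`, and uses a naive linear projection; a usable gaze tracker would add per-user calibration and head-pose compensation.

```python
# Hedged sketch: mapping normalized iris landmarks to a screen position.
# The linear projection here is a naive assumption, not a real gaze model.

def iris_center(landmarks):
    """Average a list of (x, y) normalized landmark coordinates."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def gaze_to_screen(left_iris, right_iris, width, height):
    """Naively project the mean iris position onto screen pixels.
    A real gaze estimator needs calibration and head-pose compensation."""
    cx = (left_iris[0] + right_iris[0]) / 2
    cy = (left_iris[1] + right_iris[1]) / 2
    return (int(cx * width), int(cy * height))
```

Feeding it live data would mean pulling the iris vertices out of each Face Mesh result frame and calling `gaze_to_screen` with your display resolution.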
- I'm going to try to use VSeeFace again; is there a way to change how lip syncing works?
OpenSeeFace is open source. If you are using it with VSeeFace, you can just replace the Binary folder with your own build.
- I'm making a renderer for face-tracking data
It uses OpenSeeFace for face tracking, and engine patches/VRM code from the V-Sekai team.
mediapipe
- MediaPipe OpenPose ControlNet model for SD
mediapipe/docs/solutions/pose.md at master · google/mediapipe · GitHub
- MediaPipe on-device diffusion plugins for conditioned text-to-image generation
Today, we announce MediaPipe diffusion plugins, which enable controllable text-to-image generation to be run on-device. Expanding upon our prior work on GPU inference for on-device large generative models, we introduce new low-cost solutions for controllable text-to-image generation that can be plugged into existing diffusion models and their Low-Rank Adaptation (LoRA) variants.
- Running a TensorFlow object detector model and drawing boxes around objects at 60 FPS - all in React Native / JavaScript!
You can just grab the TFLite version! https://github.com/google/mediapipe/blob/master/docs/solutions/models.md
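For context on the "drawing boxes" half of that pipeline, here is a hedged sketch of the usual post-processing step. It assumes the common TFLite SSD detector output format (normalized `[ymin, xmin, ymax, xmax]` boxes plus per-box scores); actually running the model would use the TFLite interpreter, which is not shown here.

```python
# Hedged sketch: post-processing TFLite-style detector output.
# Assumes normalized [ymin, xmin, ymax, xmax] boxes, a common SSD layout.

def filter_and_scale_boxes(boxes, scores, threshold, width, height):
    """Keep detections scoring at or above `threshold` and convert
    normalized boxes to (left, top, right, bottom) pixel rectangles
    ready to draw on a `width` x `height` frame."""
    out = []
    for box, score in zip(boxes, scores):
        if score >= threshold:
            ymin, xmin, ymax, xmax = box
            out.append((int(xmin * width), int(ymin * height),
                        int(xmax * width), int(ymax * height)))
    return out
```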
- OpenAI came after our domain because we use GPT in it
I believe Google already released transformers under an apache 2 license with a patent grant:
https://github.com/google/mediapipe/blob/master/mediapipe/mo...
- Open source Background Remover: Remove Background from images and video using AI
I was going to say that I like the MediaPipe Selfie Segmentation model for doing this sort of thing in a web page, but I've just noticed (when getting the GitHub link[1]) that Google have marked the code as legacy[2] ... no idea if the new solution is better/easier to use[3].
For what it's worth, my CodePen using the old model is here: https://codepen.io/kaliedarik/pen/PopBxBM
[1] - https://github.com/google/mediapipe/blob/master/docs/solutio...
[2] - "Attention: Thank you for your interest in MediaPipe Solutions. As of April 4, 2023, this solution was upgraded to a new MediaPipe Solution."
[3] - https://developers.google.com/mediapipe/solutions/vision/ima...
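For context on how a segmentation model like this is used: it returns a per-pixel foreground mask, and background removal is just a threshold-and-composite step. Below is a minimal pure-Python sketch with pixels and mask as flat lists; a real pipeline would operate on the image/mask arrays MediaPipe returns directly.

```python
# Hedged sketch: compositing with a segmentation mask.
# `pixels` and `mask` are flat, same-length lists standing in for
# the per-pixel image and foreground-confidence arrays a model returns.

def remove_background(pixels, mask, bg_color=(0, 255, 0), threshold=0.5):
    """Keep a pixel where the mask says 'foreground' (confidence at or
    above `threshold`); otherwise replace it with `bg_color`."""
    return [px if m >= threshold else bg_color
            for px, m in zip(pixels, mask)]
```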
- [P] Pattern recognition
I have used MediaPipe very successfully in multiple projects, and it's very easy to get running. You can choose from many different vision tasks, including hand landmarks (https://github.com/google/mediapipe/blob/master/docs/solutions/hands.md).
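Once the hand landmarks are available, simple gestures can be read straight off the 21 landmark points. A hedged sketch, assuming MediaPipe Hands' published landmark indexing (8 = index fingertip, 6 = index PIP joint) and normalized image coordinates where y grows downward:

```python
# Hedged sketch: a toy gesture check on MediaPipe-style hand landmarks.
# Indices 8 and 6 are assumed from the MediaPipe Hands landmark diagram.
INDEX_TIP, INDEX_PIP = 8, 6

def index_finger_extended(landmarks):
    """`landmarks`: list of 21 (x, y) normalized points for one hand.
    In image coordinates y grows downward, so an upright, extended
    index finger has its tip above (smaller y than) its PIP joint."""
    return landmarks[INDEX_TIP][1] < landmarks[INDEX_PIP][1]
```

The same comparison pattern extends to the other fingers, which is enough for a small gesture vocabulary without any extra model.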
- Getting face feature pose statistics
I found MediaPipe's Face Mesh and was impressed with how simple it was to get going, but it just gives you the landmark points and I've not gone any further yet.
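Going one step further than the raw landmark points, per-frame statistics can be computed directly from them. A minimal sketch, assuming MediaPipe Face Mesh's inner-lip vertex indices (13 = upper, 14 = lower; an assumption based on the published mesh topology):

```python
# Hedged sketch: a face statistic computed from raw mesh landmarks.
# Indices 13/14 are assumed MediaPipe Face Mesh inner-lip vertices.
UPPER_LIP, LOWER_LIP = 13, 14

def mouth_open_ratio(landmarks):
    """Vertical gap between the inner lips in normalized coordinates.
    `landmarks` is a list of (x, y) tuples, one per mesh vertex."""
    return abs(landmarks[LOWER_LIP][1] - landmarks[UPPER_LIP][1])
```

Tracked over time, a value like this gives exactly the kind of pose/feature statistic the thread title asks about.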
- New ControlNet Face Model
We've trained ControlNet on a subset of the LAION-Face dataset using modified output from MediaPipe's face mesh annotator to provide a new level of control when generating images of faces.
- Trained an ML model using TensorFlow.js to classify American Sign Language (ASL) alphabets in the browser. We are creating an open-source platform and would love to receive your feedback on our project.
MediaPipe library link: https://mediapipe.dev/
- mediapipe VS daisykit - a user-suggested alternative
2 projects | 24 Mar 2023
What are some alternatives?
openseeface-gd - A GUI for running OpenSeeFace.
openpose - OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
kalidokit - Blendshape and kinematics calculator for Mediapipe/Tensorflow.js Face, Eyes, Pose, and Finger tracking models.
ue4-mediapipe-plugin - UE4 MediaPipe plugin
UniVRM - UniVRM is a glTF-based VRM format implementation for Unity. English: https://vrm.dev/en/ ; Japanese: https://vrm.dev/
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
vpuppr - VTuber application made with Godot 4
AlphaPose - Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
VTuber_Unity - Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!
BlazePose-tensorflow - A third-party Tensorflow Implementation for paper "BlazePose: On-device Real-time Body Pose tracking".
fastT5 - ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
jeelizFaceFilter - Javascript/WebGL lightweight face tracking library designed for augmented reality webcam filters. Features : multiple faces detection, rotation, mouth opening. Various integration examples are provided (Three.js, Babylon.js, FaceSwap, Canvas2D, CSS3D...).