facenet-pytorch
OpenSeeFace
| | facenet-pytorch | OpenSeeFace |
|---|---|---|
| Mentions | 4 | 7 |
| Stars | 4,144 | 1,312 |
| Growth | - | - |
| Activity | 3.8 | 4.2 |
| Latest commit | 19 days ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
facenet-pytorch
-
[D] Fast face recognition over video
Hijacking this comment because I've been working nonstop on my project thanks to your suggestion. I'm now using https://github.com/derronqi/yolov8-face for face detection and still the old face_recognition for encodings. I'm clustering with DBSCAN and extracting frames with ffmpeg with -hwaccel on. I'm planning to try https://github.com/timesler/facenet-pytorch, as it looks like it would be the fastest thing available for processing videos. Keep in mind I need to compute encodings, not just run detection, because I want to use DBSCAN (and later also facial recognition, but that might be done separately just by saving the encodings). Let me know if you have any other suggestions, and thanks again for your help.
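The clustering step described above can be sketched with scikit-learn's DBSCAN. The `eps` of 0.5 is only a common starting point for face_recognition's 128-dimensional Euclidean encodings, and `min_samples` is an assumption to tune per dataset:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_encodings(encodings, eps=0.5, min_samples=3):
    """Group face encodings into identities with DBSCAN.

    Returns one integer label per encoding; -1 marks noise,
    i.e. faces that did not fall into any dense cluster.
    """
    encodings = np.asarray(encodings)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="euclidean").fit_predict(encodings)
    return labels
```

Each distinct non-negative label corresponds to one (presumed) person; saving the per-cluster mean encoding gives you a simple gallery for later recognition.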
-
Random but unrepeated combinations?
For now, I am trying to evaluate and get the accuracy of the FaceNet module. As in this example from facenet-pytorch, computing the accuracy relies on a file (pairs.txt) provided by the official site. Format description below:
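The pairs.txt file follows the LFW protocol: after a header line, a matched pair is written as `name idx1 idx2` (two images of the same person) and a mismatched pair as `name1 idx1 name2 idx2`. A minimal parser, assuming whitespace-separated fields:

```python
def parse_pairs(lines):
    """Parse LFW-style pairs.txt lines.

    Returns tuples ((name1, idx1), (name2, idx2), is_same).
    Lines with other field counts (e.g. the fold header) are skipped.
    """
    pairs = []
    for line in lines:
        parts = line.strip().split()
        if len(parts) == 3:          # same person: name idx1 idx2
            name, i, j = parts
            pairs.append(((name, int(i)), (name, int(j)), True))
        elif len(parts) == 4:        # different people: name1 idx1 name2 idx2
            n1, i, n2, j = parts
            pairs.append(((n1, int(i)), (n2, int(j)), False))
    return pairs
```

Accuracy is then the fraction of pairs where your distance threshold agrees with the `is_same` flag.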
-
Need to watch through 100s of hours of surveillance footage - AI solution?
With some Python knowledge you can try a two-step procedure: 1) extract a number of frames per second, for example five still frames per second, using OpenCV or ffmpeg; 2) using facenet, detect faces in the frames and then classify them by comparing each face to a known image of the person you are looking for.
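Step 1 can be driven from Python by shelling out to ffmpeg: `-vf fps=5` samples five still frames per second, and `-hwaccel auto` (an optional addition here) lets ffmpeg pick a hardware decoder if one is available. A minimal command builder, with assumed output naming:

```python
def ffmpeg_frame_cmd(video_path, out_dir, fps=5):
    """Build an ffmpeg command that dumps `fps` still frames per second
    of the input video as numbered JPEGs into out_dir."""
    return [
        "ffmpeg",
        "-hwaccel", "auto",        # use hardware decoding when available
        "-i", video_path,
        "-vf", f"fps={fps}",       # sample N frames per second
        f"{out_dir}/frame_%06d.jpg",
    ]
```

Run it with `subprocess.run(ffmpeg_frame_cmd("footage.mp4", "frames"), check=True)`, then feed the resulting JPEGs to the face detector in step 2.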
-
Query regarding multiple face recognition system
It's generally better to split the task into multiple tasks. First I'd want to detect and extract faces. There are a number of pretrained models that you could use for that, e.g. https://github.com/timesler/facenet-pytorch, https://github.com/opencv/opencv/tree/master/data/haarcascades. Once you've extracted faces, you can train a facial recognition model, using something like a siamese network, as you normally would.
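Once faces are extracted and embedded, the simplest verification check compares embeddings directly; a trained siamese network would learn this comparison, but a fixed cosine-similarity threshold (the 0.7 below is an illustrative assumption to calibrate on a validation set) is a reasonable baseline:

```python
import numpy as np

def same_person(emb1, emb2, threshold=0.7):
    """Return True if two face embeddings are similar enough to be
    the same identity, using cosine similarity.

    The threshold is an assumed example value; calibrate it on
    labelled pairs for your particular embedding model.
    """
    sim = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
    return bool(sim >= threshold)
```

The same function doubles as an evaluation primitive: sweep `threshold` over labelled same/different pairs and keep the value that maximises accuracy.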
OpenSeeFace
-
Getting face feature pose statistics
I got something working modifying OpenSeeFace and it's an option and I might try to rewrite it in something compiled, but I'd like to look at the other options first.
-
This may be a silly question but can I hire someone to make me a customized avatar for vr chat?
Lastly, face tracking is either built in or uses a plugin device. You would also use OSC to manipulate blendshapes. I'd take a look at OpenSeeFace.
-
Any recommendations for VTuber setup on Linux? Ideally something that's completely open source.
The best/only options I found were basically the old standby of Wine running closed-source applications. Even then the end result was rather incomplete... because many of the best tracking options are simply closed source. (Note: VSeeFace does offer an open source library)
- Running OpenSeeFace on Linux with python 3.10
-
Open map with gaze tracking for someone with paralysis
There are only a few libraries that come to mind, and they take a bit of work to get started. MediaPipe Unity Plugin has eye tracking along with many other kinds of tracking (head, hands, body). OpenSeeFace has models that do head and eye tracking. This repo uses Unity's neural net inference library, Barracuda, to run a MediaPipe iris landmark model (I haven't personally tested it). I'm not sure how to translate eye landmarks to the on-screen position a player is looking at, though. Hopefully this list of libraries gets you on the right path!
-
I'm going to try to use VSeeFace again, is there a way to change how lip syncing works?
OpenSeeFace is open source. If you are using it with VSeeFace, you can just replace the Binary folder with your own build.
-
I'm making a renderer for facetracking data
It uses OpenSeeFace for facetracking and engine patches/vrm code from the V-Sekai team.
What are some alternatives?
anime-face-detector - Anime Face Detector using mmdet and mmpose
openseeface-gd - A GUI for running OpenSeeFace.
CompreFace - Leading free and open-source face recognition system
kalidokit - Blendshape and kinematics calculator for Mediapipe/Tensorflow.js Face, Eyes, Pose, and Finger tracking models.
OpenCV - Open Source Computer Vision Library
UniVRM - UniVRM is a glTF-based VRM format implementation for Unity. English is here https://vrm.dev/en/ . Japanese is here https://vrm.dev/
pytorch2keras - PyTorch to Keras model converter
vpuppr - VTuber application made with Godot 4
facenet - Face recognition using Tensorflow
VTuber_Unity - Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!
DeepFake-Detection - Towards deepfake detection that actually works
fastT5 - ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.