CVPR2022-DaGAN
Face-Depth-Network
| | CVPR2022-DaGAN | Face-Depth-Network |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 936 | 20 |
| Latest version | - | - |
| Activity | 5.8 | 10.0 |
| Last commit | 5 months ago | almost 2 years ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CVPR2022-DaGAN
DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Predominant techniques for talking head generation largely depend on 2D information, including facial appearances and motions from input face images. Nevertheless, dense 3D facial geometry, such as pixel-wise depth, plays a critical role in constructing accurate 3D facial structures and suppressing complex background noise during generation. However, dense 3D annotations for facial videos are prohibitively costly to obtain. In this work, we first present a novel self-supervised method for learning dense 3D facial geometry (i.e., depth) from face videos, without requiring camera parameters or 3D geometry annotations in training. We further propose a strategy to learn pixel-level uncertainties so as to identify more reliable rigid-motion pixels for geometry learning. Second, we design an effective geometry-guided facial keypoint estimation module, providing accurate keypoints for generating motion fields. Lastly, we develop a 3D-aware cross-modal (i.e., appearance and depth) attention mechanism, which can be applied to each generation layer, to capture facial geometries in a coarse-to-fine manner. Extensive experiments are conducted on three challenging benchmarks (VoxCeleb1, VoxCeleb2, and HDTF). The results demonstrate that our proposed framework can generate highly realistic reenacted talking videos, establishing new state-of-the-art performance on these benchmarks. The code and trained models are publicly available on the GitHub project page at https://github.com/harlanhong/CVPR2022-DaGAN
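The 3D-aware cross-modal attention described in the abstract can be sketched roughly as follows. This is an illustrative reimplementation under stated assumptions, not the authors' code: the choice of queries from depth features and keys/values from appearance features, the 1x1-conv projections, the channel sizes, and all names are guesses made for the sketch.

```python
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Hedged sketch of depth/appearance cross-modal attention.

    Queries are derived from depth features and keys/values from
    appearance features, so the attention map is geometry-guided.
    Layer shapes and the residual connection are assumptions.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)  # from depth features
        self.k = nn.Conv2d(channels, channels, kernel_size=1)  # from appearance features
        self.v = nn.Conv2d(channels, channels, kernel_size=1)  # from appearance features
        self.scale = channels ** -0.5

    def forward(self, depth_feat: torch.Tensor, app_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = app_feat.shape
        q = self.q(depth_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.k(app_feat).flatten(2)                    # (B, C, HW)
        v = self.v(app_feat).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)   # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + app_feat  # residual, so attention refines appearance


# Usage sketch: one attention step at a single generator resolution.
attention = CrossModalAttention(channels=8)
depth_features = torch.randn(2, 8, 4, 4)
appearance_features = torch.randn(2, 8, 4, 4)
refined = attention(depth_features, appearance_features)
```

Applying a block like this at each generation layer, as the abstract suggests, is what would give the coarse-to-fine geometry guidance, since the spatial resolution (and hence the granularity of the attention map) grows layer by layer.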
Animating generated face test
I use https://github.com/harlanhong/CVPR2022-DaGAN; it's supposedly faster than TPSMM.
Using SD to make 'deepfakes' demo
Picture to Animation: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022), https://github.com/harlanhong/CVPR2022-DaGAN. This gave me picture-to-animation.
Waifu diffusion - reanimation with DaGAN
Thanks, I will take a look. Do you have any more info on the image segmentation part? I was looking through the GitHub repo and could not find anything except face alignment: https://github.com/harlanhong/CVPR2022-DaGAN/tree/master/face-alignment
Face-Depth-Network
What are some alternatives?
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
HIPT - Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral)
GeneFace - GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; Official code
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
sd-wav2lip-uhq - Wav2Lip UHQ extension for Automatic1111
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer
PaddleGAN - PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.
wunjo.wladradchenko.ru - Wunjo AI: Synthesize & clone voices in English, Russian & Chinese, real-time speech recognition, deepfake face & lips animation, face swap with one photo, change video by text prompts, segmentation, and retouching. Open-source, local & free.
awesome-talking-head-generation
ControllableTalkNet - This is a modified version of NVIDIA's TalkNet. It is a controllable network that can be used for both CPU and GPU inference.