CVPR2022-DaGAN
ControllableTalkNet
| | CVPR2022-DaGAN | ControllableTalkNet |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 936 | 46 |
| Growth | - | - |
| Activity | 5.8 | 1.8 |
| Last commit | 5 months ago | 9 months ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CVPR2022-DaGAN
-
DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Predominant techniques for talking head generation largely depend on 2D information, including facial appearances and motions from input face images. Nevertheless, dense 3D facial geometry, such as pixel-wise depth, plays a critical role in constructing accurate 3D facial structures and suppressing complex background noise during generation. However, dense 3D annotations for facial videos are prohibitively costly to obtain. In this work, firstly, we present a novel self-supervised method for learning dense 3D facial geometry (i.e., depth) from face videos, without requiring camera parameters or 3D geometry annotations in training. We further propose a strategy to learn pixel-level uncertainties to perceive more reliable rigid-motion pixels for geometry learning. Secondly, we design an effective geometry-guided facial keypoint estimation module, providing accurate keypoints for generating motion fields. Lastly, we develop a 3D-aware cross-modal (i.e., appearance and depth) attention mechanism, which can be applied to each generation layer, to capture facial geometries in a coarse-to-fine manner. Extensive experiments are conducted on three challenging benchmarks (i.e., VoxCeleb1, VoxCeleb2, and HDTF). The results demonstrate that our proposed framework can generate highly realistic-looking reenacted talking videos, with new state-of-the-art performance established on these benchmarks. The code and trained models are publicly available on the GitHub project page at https://github.com/harlanhong/CVPR2022-DaGAN
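The cross-modal attention the abstract describes fuses appearance and depth features at each generation layer. As a rough illustration only (not the paper's actual implementation), a minimal sketch of scaled dot-product attention where queries come from a depth feature map and keys/values from an appearance feature map might look like this; all shapes, names, and projection matrices here are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(appearance, depth, w_q, w_k, w_v):
    """Attend from depth-derived queries to appearance keys/values.

    appearance, depth: (N, C) arrays of flattened spatial features
    w_q, w_k, w_v: (C, D) learned projection matrices (random here)
    Returns a (N, D) geometry-guided feature map.
    """
    q = depth @ w_q          # queries from the depth features
    k = appearance @ w_k     # keys from the appearance features
    v = appearance @ w_v     # values from the appearance features
    scores = (q @ k.T) / np.sqrt(q.shape[-1])  # scaled dot-product
    return softmax(scores, axis=-1) @ v

# Toy example: a 4x4 spatial grid flattened to 16 positions.
rng = np.random.default_rng(0)
N, C, D = 16, 8, 8
app = rng.normal(size=(N, C))
dep = rng.normal(size=(N, C))
w_q, w_k, w_v = (rng.normal(size=(C, D)) for _ in range(3))
out = cross_modal_attention(app, dep, w_q, w_k, w_v)
print(out.shape)  # (16, 8)
```

In the real model the projections are learned convolutions and the attention is applied per generation layer; this sketch only shows the depth-queries-appearance pattern the abstract names.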
-
Animating generated face test
I use https://github.com/harlanhong/CVPR2022-DaGAN; it's supposedly faster than TPSMM.
-
Using SD to make 'deepfakes' demo
Picture to Animation: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022) https://github.com/harlanhong/CVPR2022-DaGAN This gave me picture-to-animation.
-
Waifu diffusion - reanimation with DaGAN
Thanks, I will take a look. Do you have any more info on the image segmentation part? I was looking through the GitHub repo and could not find anything except face alignment: https://github.com/harlanhong/CVPR2022-DaGAN/tree/master/face-alignment
ControllableTalkNet
-
Do you know where I can download training data and finished models for controllable talknet?
I want to make sure my setup works before going through the struggle of training a model myself. I'm using this for reference: https://github.com/justinjohn0306/ControllableTalkNet
-
Using SD to make 'deepfakes' demo
https://github.com/justinjohn0306/ControllableTalkNet Voice synthesis trained on 1 hr of audiobooks. Don't be fooled by the quality: an audio reference was used; otherwise the audio is usually average-sounding.
What are some alternatives?
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
GeneFace - GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; Official code
sd-wav2lip-uhq - Wav2Lip UHQ extension for Automatic1111
Face-Depth-Network - The component of DaGAN (CVPR 2022)
PaddleGAN - PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.
wunjo.wladradchenko.ru - Wunjo AI: Synthesize & clone voices in English, Russian & Chinese, real-time speech recognition, deepfake face & lips animation, face swap with one photo, change video by text prompts, segmentation, and retouching. Open-source, local & free.
awesome-talking-head-generation