CVPR2022-DaGAN VS Thin-Plate-Spline-Motion-Model

Compare CVPR2022-DaGAN vs Thin-Plate-Spline-Motion-Model and see how they differ.

CVPR2022-DaGAN

Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (by harlanhong)
                 CVPR2022-DaGAN                             Thin-Plate-Spline-Motion-Model
Mentions         5                                          28
Stars            936                                        3,289
Growth           -                                          -
Activity         5.8                                        1.9
Latest Commit    5 months ago                               3 months ago
Language         Python                                     Jupyter Notebook
License          GNU General Public License v3.0 or later   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

CVPR2022-DaGAN

Posts with mentions or reviews of CVPR2022-DaGAN. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-11.
  • DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
    1 project | /r/BotNewsPreprints | 11 May 2023
    Predominant techniques on talking head generation largely depend on 2D information, including facial appearances and motions from input face images. Nevertheless, dense 3D facial geometry, such as pixel-wise depth, plays a critical role in constructing accurate 3D facial structures and suppressing complex background noise for generation. However, dense 3D annotations for facial videos are prohibitively costly to obtain. In this work, firstly, we present a novel self-supervised method for learning dense 3D facial geometry (i.e., depth) from face videos, without requiring camera parameters and 3D geometry annotations in training. We further propose a strategy to learn pixel-level uncertainties to perceive more reliable rigid-motion pixels for geometry learning. Secondly, we design an effective geometry-guided facial keypoint estimation module, providing accurate keypoints for generating motion fields. Lastly, we develop a 3D-aware cross-modal (i.e., appearance and depth) attention mechanism, which can be applied to each generation layer, to capture facial geometries in a coarse-to-fine manner. Extensive experiments are conducted on three challenging benchmarks (i.e., VoxCeleb1, VoxCeleb2, and HDTF). The results demonstrate that our proposed framework can generate highly realistic-looking reenacted talking videos, with new state-of-the-art performances established on these benchmarks. The code and trained models are publicly available on the GitHub project page at https://github.com/harlanhong/CVPR2022-DaGAN
  • Animating generated face test
    2 projects | /r/StableDiffusion | 11 Nov 2022
    I use https://github.com/harlanhong/CVPR2022-DaGAN; it's supposedly faster than TPSMM.
  • Using SD to make 'deepfakes' demo
    2 projects | /r/StableDiffusion | 13 Oct 2022
    Picture to Animation: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022), https://github.com/harlanhong/CVPR2022-DaGAN. This gave me picture-to-animation.
  • Waifu diffusion - reanimation with DaGAN
    3 projects | /r/StableDiffusion | 15 Sep 2022
    Thanks, I will take a look. Do you have any more info on the image segmentation part? I was looking through the GitHub repo and could not find anything, only face alignment: https://github.com/harlanhong/CVPR2022-DaGAN/tree/master/face-alignment
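The DaGAN abstract quoted above describes a 3D-aware cross-modal attention mechanism that mixes appearance features with the predicted dense depth at each generation layer. The snippet below is a minimal, hypothetical PyTorch sketch of that idea only: the choice of depth-derived queries with appearance-derived keys/values, the layer names, and the residual connection are illustrative assumptions, not the authors' actual implementation (see their repository for that).

import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Hypothetical sketch of a depth/appearance cross-modal attention block.

    Illustrates the idea from the DaGAN abstract (using a dense depth map to
    attend over appearance features at one generation layer); not the paper's
    exact design.
    """

    def __init__(self, appearance_channels: int, depth_channels: int = 1, dim: int = 64):
        super().__init__()
        # Assumption: queries come from depth, keys/values from appearance.
        self.to_q = nn.Conv2d(depth_channels, dim, kernel_size=1)
        self.to_k = nn.Conv2d(appearance_channels, dim, kernel_size=1)
        self.to_v = nn.Conv2d(appearance_channels, dim, kernel_size=1)
        self.proj = nn.Conv2d(dim, appearance_channels, kernel_size=1)

    def forward(self, appearance: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        b, _, h, w = appearance.shape
        q = self.to_q(depth).flatten(2)       # (b, dim, h*w)
        k = self.to_k(appearance).flatten(2)  # (b, dim, h*w)
        v = self.to_v(appearance).flatten(2)  # (b, dim, h*w)
        # Scaled dot-product attention over all spatial positions.
        attn = torch.softmax(q.transpose(1, 2) @ k / q.shape[1] ** 0.5, dim=-1)  # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).reshape(b, -1, h, w)
        # Residual connection so the block refines, rather than replaces, the features.
        return appearance + self.proj(out)


if __name__ == "__main__":
    feats = torch.randn(1, 256, 32, 32)   # appearance features at one generation layer
    depth = torch.randn(1, 1, 32, 32)     # predicted dense depth, resized to match
    block = CrossModalAttention(appearance_channels=256)
    print(block(feats, depth).shape)      # torch.Size([1, 256, 32, 32])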

Thin-Plate-Spline-Motion-Model

Posts with mentions or reviews of Thin-Plate-Spline-Motion-Model. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-12.

What are some alternatives?

When comparing CVPR2022-DaGAN and Thin-Plate-Spline-Motion-Model you can also consider the following projects:

GeneFace - GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; Official code

first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation

sd-wav2lip-uhq - Wav2Lip UHQ extension for Automatic1111

Wav2Lip - This repository contains the code of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs

Face-Depth-Network - The face depth estimation component of DaGAN (CVPR 2022)

DFL-Colab - DeepFaceLab fork which provides IPython Notebook to use DFL with Google Colab

PaddleGAN - PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.

SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

wunjo.wladradchenko.ru - Wunjo AI: Synthesize & clone voices in English, Russian & Chinese, real-time speech recognition, deepfake face & lips animation, face swap with one photo, change video by text prompts, segmentation, and retouching. Open-source, local & free.

articulated-animation - Code for Motion Representations for Articulated Animation paper

awesome-talking-head-generation

stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI