Thin-Plate-Spline-Motion-Model
CVPR2022-DaGAN
| | Thin-Plate-Spline-Motion-Model | CVPR2022-DaGAN |
|---|---|---|
| Mentions | 28 | 5 |
| Stars | 3,289 | 936 |
| Growth | - | - |
| Activity | 1.9 | 5.8 |
| Latest commit | 3 months ago | 5 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Thin-Plate-Spline-Motion-Model
- Okay, that's AI, but how?
Exactly what I was thinking; it looks a lot like thin-plate spline motion, maybe with some extra layers/compositing for the wings and hair.
- Is it possible to sync a lip and facial expression animation with audio in real time?
- GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation. (Question: How do I increase the resolution of the output?)
- Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
First Order Motion Model / Thin Plate Spline (animate single images realistically using a driving video):
- https://github.com/AliaksandrSiarohin/first-order-model (FOMM: animate still images using driving videos)
- https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model (Thin Plate Spline: likely just a repost of FOMM but with better documentation and tutorials on YouTube)
- https://drive.google.com/drive/folders/1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH (FOMM/Thin Plate checkpoints)
- https://disk.yandex.com/d/lEw8uRm140L_eQ (FOMM/Thin Plate checkpoints mirror)
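The model's name refers to the classic thin-plate spline interpolant: a warp that maps a set of source keypoints onto driving keypoints while minimizing bending energy, using the radial kernel U(r) = r² log r². A minimal NumPy sketch of that underlying math (an illustration of the general technique only, not code from either repository):

```python
import numpy as np

def tps_kernel(r2):
    # U(r) = r^2 * log(r^2), with U(0) defined as 0
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def fit_tps(src, dst):
    """Fit a 2D thin-plate spline that maps src control points onto dst."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)                       # (n, n) radial terms
    P = np.hstack([np.ones((n, 1)), src])    # (n, 3) affine terms
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)             # (n+3, 2) spline weights

def tps_transform(pts, src, w):
    """Apply the fitted spline to arbitrary 2D points."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = tps_kernel(d2)                                   # (m, n)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])     # (m, 3)
    return U @ w[:src.shape[0]] + P @ w[src.shape[0]:]

# Toy keypoints: four fixed corners, one point nudged by the "driving" frame
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src.copy()
dst[4] += [0.1, 0.2]

w = fit_tps(src, dst)
moved = tps_transform(src, src, w)
# an interpolating TPS maps each control point (almost) exactly onto its target
```

In the motion-model setting, the same warp fitted on sparse keypoints is applied densely to every pixel coordinate of the source image to produce the animated frame.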
- Help from Community [Development]
GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
- Elvis & James Blunt singing together - Elvis voice synthesis plus the Thin-Plate-Spline model for a cheap, fast deepfake video to sync
- Does Anyone Know What Tool is used to Make These TikTok Videos?
- How did he do this?
Probably using this: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
- Animate your stable diffusion portraits
Use https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
Hugging Face demo: https://huggingface.co/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model
Google Colab: https://colab.research.google.com/drive/1DREfdpnaBhqISg0fuQlAAIwyGVn1loH_?usp=sharing
- SD + thin-plate-spline-motion-model
CVPR2022-DaGAN
- DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Predominant techniques for talking head generation largely depend on 2D information, including facial appearance and motion from input face images. Nevertheless, dense 3D facial geometry, such as pixel-wise depth, plays a critical role in constructing accurate 3D facial structures and suppressing complex background noise during generation. However, dense 3D annotations for facial videos are prohibitively costly to obtain. In this work, we first present a novel self-supervised method for learning dense 3D facial geometry (i.e., depth) from face videos, without requiring camera parameters or 3D geometry annotations during training. We further propose a strategy to learn pixel-level uncertainties so as to identify more reliable rigid-motion pixels for geometry learning. Second, we design an effective geometry-guided facial keypoint estimation module, providing accurate keypoints for generating motion fields. Lastly, we develop a 3D-aware cross-modal (i.e., appearance and depth) attention mechanism, applicable at each generation layer, to capture facial geometry in a coarse-to-fine manner. Extensive experiments are conducted on three challenging benchmarks (VoxCeleb1, VoxCeleb2, and HDTF). The results demonstrate that the proposed framework generates highly realistic reenacted talking videos, establishing new state-of-the-art performance on these benchmarks. The code and trained models are publicly available on the GitHub project page at https://github.com/harlanhong/CVPR2022-DaGAN
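The "3D-aware cross-modal attention" described in the abstract can be illustrated with a toy NumPy sketch (this is not the paper's implementation; the projection matrices here are random stand-ins for learned weights): appearance features supply the queries and values, while depth-derived features supply the keys, so the attention weights are guided by geometry.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(appearance, depth, d_k=16, seed=0):
    """Toy cross-modal attention over a flattened feature map.

    appearance: (n, c) appearance features -> queries and values
    depth:      (n, d) depth features      -> keys
    """
    rng = np.random.default_rng(seed)
    c, d = appearance.shape[1], depth.shape[1]
    # hypothetical "learned" projections, fixed random matrices here
    Wq = rng.standard_normal((c, d_k)) / np.sqrt(c)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q = appearance @ Wq                        # (n, d_k)
    K = depth @ Wk                             # (n, d_k)
    attn = softmax(Q @ K.T / np.sqrt(d_k))     # (n, n) depth-guided weights
    return attn @ appearance                   # geometry-reweighted features

# 64 spatial positions with 32-dim appearance and 8-dim depth embeddings
feats = np.random.default_rng(1).standard_normal((64, 32))
depth = np.random.default_rng(2).standard_normal((64, 8))
out = cross_modal_attention(feats, depth)
```

In the actual model this operation is applied at multiple generator layers with learned projections, which is what gives the coarse-to-fine geometry capture the abstract describes.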
- Animating generated face test
I used https://github.com/harlanhong/CVPR2022-DaGAN; it's supposedly faster than TPSMM.
- Using SD to make 'deepfakes' demo
Picture to animation: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022), https://github.com/harlanhong/CVPR2022-DaGAN. This is what gave me picture-to-animation.
- Waifu diffusion - reanimation with DaGAN
Thanks, I will take a look. Do you have any more info on the image-segmentation part? I was looking through the GitHub repo and could not find anything, only face alignment: https://github.com/harlanhong/CVPR2022-DaGAN/tree/master/face-alignment
What are some alternatives?
- first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
- GeneFace - GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; official code
- Wav2Lip - This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs
- sd-wav2lip-uhq - Wav2Lip UHQ extension for Automatic1111
- DFL-Colab - DeepFaceLab fork which provides an IPython Notebook to use DFL with Google Colab
- Face-Depth-Network - A component of DaGAN (CVPR 2022)
- SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
- PaddleGAN - PaddlePaddle GAN library, including many interesting applications such as First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on
- articulated-animation - Code for the Motion Representations for Articulated Animation paper
- wunjo.wladradchenko.ru - Wunjo AI: synthesize & clone voices in English, Russian & Chinese, real-time speech recognition, deepfake face & lip animation, face swap from one photo, video editing by text prompts, segmentation, and retouching. Open-source, local & free
- stable-diffusion-webui-depthmap-script - High-resolution depth maps for Stable Diffusion WebUI
- awesome-talking-head-generation