Thin-Plate-Spline-Motion-Model vs SadTalker
| | Thin-Plate-Spline-Motion-Model | SadTalker |
|---|---|---|
| Mentions | 28 | 16 |
| Stars | 3,289 | 10,394 |
| Growth | - | 12.5% |
| Activity | 1.9 | 6.9 |
| Latest commit | 3 months ago | 10 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Thin-Plate-Spline-Motion-Model
- Okay, that's AI, but how?
Exactly what I was thinking; it looks a lot like thin-plate-spline motion, maybe with some layers/composition for the wings and hair.
- Is it possible to sync lip and facial-expression animation with audio in real time?
- GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation. (Question: How do I increase the resolution of the output?)
- Tools for AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
First Order Motion Model / Thin Plate Spline (animate single images realistically using a driving video):
https://github.com/AliaksandrSiarohin/first-order-model (FOMM - animate still images using driving videos)
https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model (Thin Plate Spline - likely just a repost of FOMM, but with better documentation and tutorials on YouTube)
https://drive.google.com/drive/folders/1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH (FOMM/Thin Plate checkpoints)
https://disk.yandex.com/d/lEw8uRm140L_eQ (FOMM/Thin Plate checkpoints mirror)
- Help from Community [Development]
GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
- Elvis & James Blunt singing together - doing Elvis voice synthesis & using the Thin-Plate-Spline model for a cheap, fast deepfake video to sync
- Does Anyone Know What Tool is used to Make These TikTok Videos?
- How did he do this?
Probably by using this: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
- Animate your Stable Diffusion portraits
Use https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
Hugging Face demo: https://huggingface.co/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model
Google Colab: https://colab.research.google.com/drive/1DREfdpnaBhqISg0fuQlAAIwyGVn1loH_?usp=sharing
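If you would rather run it locally than through the Hugging Face demo or Colab above, a minimal sketch of calling the repo's demo script is below. The flag names follow the repo's README for demo.py and may differ between versions; the checkpoint and file names are placeholders.

```python
# Minimal sketch: animate a still portrait with the Thin-Plate-Spline demo script.
# Assumes the repo is cloned and a pretrained checkpoint has been downloaded
# (see the Drive/Yandex links in the FAQ entry above); flag names may vary by version.
import subprocess

subprocess.run(
    [
        "python", "demo.py",
        "--config", "config/vox-256.yaml",          # model config shipped with the repo
        "--checkpoint", "checkpoints/vox.pth.tar",   # placeholder path to the pretrained checkpoint
        "--source_image", "portrait.png",            # still image to animate (e.g. a Stable Diffusion portrait)
        "--driving_video", "driving.mp4",            # video that supplies the motion
        "--result_video", "result.mp4",              # where to write the animated output
    ],
    check=True,
    cwd="Thin-Plate-Spline-Motion-Model",            # run from the cloned repo directory
)
```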
- SD + thin-plate-spline-motion-model
SadTalker
- Can some expert analyze a github repo and tell us if it's really safe or not?
- Does the SadTalker repo contain a virus/trojan, yes or no?
Trojan detected when uncompressing facevid2vid_00189-model.pth · Issue #75 · OpenTalker/SadTalker (github.com)
- Lip Sync API Service?
I am using SadTalker to create a lip sync of a still image with an audio file. The still image comes from Stable Diffusion, and the audio is ChatGPT text run through AWS Polly for voice synthesis. My problem is that, even though I like the results, it takes a minute and a half to generate the video; with the enhancer it is more like five minutes. I am using an NVIDIA A10 GPU.
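For context, a rough sketch of that pipeline (AWS Polly for the voice, then SadTalker's inference script for the lip sync) could look like the following. The boto3 Polly call is standard; the SadTalker flags follow its README and may change between versions, and every file name here is a placeholder.

```python
# Rough sketch of the described pipeline: ChatGPT-written text -> AWS Polly speech
# -> SadTalker lip sync on a Stable Diffusion portrait. Assumes AWS credentials are
# configured and the SadTalker repo is cloned with its checkpoints in place.
import subprocess
import boto3

# 1. Synthesize the ChatGPT-generated script with AWS Polly.
polly = boto3.client("polly")
speech = polly.synthesize_speech(
    Text="Hello, this line was written by ChatGPT.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(speech["AudioStream"].read())

# 2. Drive the still portrait with SadTalker (flag names from its README; may vary).
subprocess.run(
    [
        "python", "inference.py",
        "--source_image", "sd_portrait.png",   # still image from Stable Diffusion
        "--driven_audio", "speech.mp3",
        "--result_dir", "./results",
        # "--enhancer", "gfpgan",              # optional face enhancer; much slower, as noted above
    ],
    check=True,
    cwd="SadTalker",
)
```

As the comment notes, the enhancer pass is what stretches generation from roughly a minute and a half to about five minutes, so it is left commented out here.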
- SD + Augmented Reality
Stable Diffusion A1111 + SadTalker extension - https://github.com/OpenTalker/SadTalker.git
- Are there any plugins that allow you to lip-sync/move faces?
- Judy Collins animation generated with HeyGen
Isn't this just SadTalker?
- [D] Better alternatives to Wav2Lip?
- 😋 AGI (bark 🐶) Smart waitress 🎙️
🎥 OpenTalker/SadTalker
- I just got into SD, and discovering all the different extensions has been a lot of fun. Yesterday, I stumbled across SadTalker...audio source in comments.
- Testing a new prompt-speech-to-video extension for the A1111 stable-diffusion-webui from a single image
What are some alternatives?
first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
bark - 🔊 Text-Prompted Generative Audio Model
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
sd-wav2lip-uhq - Wav2Lip UHQ extension for Automatic1111
DFL-Colab - DeepFaceLab fork which provides IPython Notebook to use DFL with Google Colab
GeneFace - GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; Official code
articulated-animation - Code for Motion Representations for Articulated Animation paper
openscene - [CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
Painter - Painter & SegGPT Series: Vision Foundation Models from BAAI
CVPR2022-DaGAN - Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
bark-speaker-directory - Site for sharing Bark voices