Thin-Plate-Spline-Motion-Model
DFL-Colab
| | Thin-Plate-Spline-Motion-Model | DFL-Colab |
|---|---|---|
| Mentions | 28 | 4 |
| Stars | 3,289 | 1,038 |
| Growth | - | - |
| Activity | 1.9 | 2.0 |
| Latest commit | 3 months ago | 12 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Thin-Plate-Spline-Motion-Model
- Okay, that's AI, but how?
  Exactly what I was thinking. It looks a lot like thin-plate spline motion, maybe with some extra layers/compositing for the wings and hair.
- Is it possible to sync a lip and facial expression animation with audio in real time?
- GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation. (Question: How do I increase the resolution of the output?)
- Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
  First Order Motion Model / Thin Plate Spline (animate single images realistically using a driving video):
  - https://github.com/AliaksandrSiarohin/first-order-model (FOMM: animate still images using driving videos)
  - https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model (Thin Plate Spline: likely just a repost of FOMM, but with better documentation and tutorials on YouTube)
  - https://drive.google.com/drive/folders/1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH (FOMM/Thin Plate checkpoints)
  - https://disk.yandex.com/d/lEw8uRm140L_eQ (FOMM/Thin Plate checkpoints mirror)
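The core idea behind the thin-plate spline in these models can be sketched independently of the repos: a handful of keypoints defines a smooth warp of the entire image plane, so moving the keypoints (as the driving video does) drags every other pixel along. The sketch below is a minimal illustration using SciPy's `RBFInterpolator`, not the repo's code; the point coordinates and the uniform shift are made up for demonstration (the actual model predicts keypoints with a neural network).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points (keypoints) in the source image, normalized coords.
source_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
# Where the driving video moved them: here, a uniform shift to the right.
driven_pts = source_pts + np.array([0.1, 0.0])

# The thin-plate spline is the smoothest 2D warp matching the control points.
warp = RBFInterpolator(source_pts, driven_pts, kernel="thin_plate_spline")

# Any pixel coordinate can now be mapped through the fitted warp;
# points between the keypoints are carried along by the interpolated motion.
grid = np.array([[0.25, 0.25], [0.75, 0.75]])
print(warp(grid))
```

Because the thin-plate spline includes an affine term, a uniform shift of the keypoints reproduces that shift exactly everywhere; more interesting keypoint motion (e.g. a mouth opening) bends the warp locally while staying smooth.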
- Help from Community [Development]
  GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
- Elvis & James Blunt singing together - doing Elvis voice synthesis & using Thin-Plate-Spline model for a cheap fast deepfake video to sync
- Does Anyone Know What Tool is used to Make These TikTok Videos?
- How did he do this?
  Probably by using this: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
- Animate your stable diffusion portraits
  Use https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
  Hugging Face demo: https://huggingface.co/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model
  Google Colab: https://colab.research.google.com/drive/1DREfdpnaBhqISg0fuQlAAIwyGVn1loH_?usp=sharing
- SD + thin-plate-spline-motion-model
DFL-Colab
- [Deepfakessfw] Best way to create a deepfake for videos, with free software to make it
- What's the best way to get started in deepfake video as a complete noob with a Mac and plenty of programming experience (but nothing media-related or numerical)?
  If you mean face swaps, then DeepFaceLab is probably still the best: https://github.com/iperov/DeepFaceLab My rig is a potato, so I use the Google Colab: https://github.com/chervonij/DFL-Colab
- Best way to create a deepfake for videos with free software to create it
  I think the quickest way to start is using the DeepFaceLab Google Colab; you can find various guides for it. AFAIK it can't get as precise as a local DeepFaceLab install, where people can adjust the masks etc., but I think it gives a good idea of the workflow and how it works.
- Questions about DeepFaceLab
  I like the Google Colab: https://github.com/chervonij/DFL-Colab
What are some alternatives?
first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
DeepFaceLab - DeepFaceLab is the leading software for creating deepfakes.
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
deepfake-detection - DeepFake Detection: Detect the video is fake or not using InceptionResNetV2.
SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
articulated-animation - Code for Motion Representations for Articulated Animation paper
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
CVPR2022-DaGAN - Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
dressing-in-order - (ICCV'21) Official code of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik
vid2vid - Pytorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.
EasyMocap - Make human motion capture easier.