Thin-Plate-Spline-Motion-Model vs ebsynth

| | Thin-Plate-Spline-Motion-Model | ebsynth |
|---|---|---|
| Mentions | 28 | 77 |
| Stars | 3,297 | 1,447 |
| Growth | - | - |
| Activity | 1.9 | 0.0 |
| Latest commit | 3 months ago | 11 months ago |
| Language | Jupyter Notebook | C |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Thin-Plate-Spline-Motion-Model
-
Okay, that's AI, but how?
Exactly what I was thinking. It looks a lot like thin plate spline motion, maybe with some layers/composition for the wings and hair.
- Is it possible to sync a lip and facial expression animation with audio in real time?
- GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation. (Question: How do I increase the resolution of the output?)
-
Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
First Order Motion Model/Thin Plate Spline (animate single images realistically using a driving video):
- https://github.com/AliaksandrSiarohin/first-order-model (FOMM - animate still images using driving videos)
- https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model (Thin Plate Spline - likely just a repost of FOMM but with better documentation and tutorials on YouTube)
- https://drive.google.com/drive/folders/1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH (FOMM/Thin Plate checkpoints)
- https://disk.yandex.com/d/lEw8uRm140L_eQ (FOMM/Thin Plate checkpoints mirror)
-
Help from Community [Development]
GitHub - yoyo-nb/Thin-Plate-Spline-Motion-Model: [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
- Elvis & James Blunt singing together - doing Elvis voice synthesis & using Thin-Plate-Spline model for a cheap fast deepfake video to sync
- Does Anyone Know What Tool is used to Make These TikTok Videos?
-
How did he do this?
Probably by using this: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
-
Animate your stable diffusion portraits
Use https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
Hugging Face demo: https://huggingface.co/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model
Google Colab: https://colab.research.google.com/drive/1DREfdpnaBhqISg0fuQlAAIwyGVn1loH_?usp=sharing
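The Colab and Hugging Face demos above wrap the repository's own demo script. A rough sketch of the equivalent local workflow is below; the checkpoint file name and CLI flags are assumptions based on the repo's README (it follows the FOMM-style `demo.py` interface), so verify them against the current code before relying on this:

```shell
# Sketch: animate a still portrait with the motion of a driving video
# using Thin-Plate-Spline-Motion-Model locally. Requires a CUDA GPU and
# a pretrained checkpoint (e.g. vox.pth.tar) downloaded into checkpoints/
# from the links shared in the FAQ above -- names may differ.
git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
cd Thin-Plate-Spline-Motion-Model
pip install -r requirements.txt

python demo.py \
  --config config/vox-256.yaml \
  --checkpoint checkpoints/vox.pth.tar \
  --source_image ./portrait.png \
  --driving_video ./driving.mp4 \
  --result_video ./result.mp4
```

The `vox-256` config targets 256x256 face crops, which is also why questions about increasing the output resolution come up: the model itself works at that fixed size, so higher-resolution results generally require a separate upscaling pass.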
- SD + thin-plate-spline-motion-model
ebsynth
- EbSynth – Transform Video by Painting over a Single Frame
- Tips to hide waistband?
- Please react to this!
-
Removing text from video
The only AI I can think of would be EbSynth, which would still require some frames to be fixed in Photoshop, but might work.
- SO MUCH FAKERY! All keyframes created in Stable Diffusion. This is only FOUR keyframes and my temporal consistency method. The voice is an A.I. model I trained to override my own boring voice and make it a bit more like John Hurt. Reality will pop in at the end.
-
Blender + SD + EBSynth
EbSynth - Transform Video by Painting Over a Single Frame
-
Arima Kana OshinoKo Dance Video
Download EbSynth from https://ebsynth.com/ and install it. (Make sure your anti-virus does not block it, or it won't work.)
-
Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
EbSynth (Used to interpolate/animate using painted-over or stylized keyframes from a driving video, à la Joel Haver) https://ebsynth.com/
-
Walking through worlds
There is something called ebsynth on GitHub: https://github.com/jamriska/ebsynth
-
Opinion on AI for animation (strictly creating inbetweens)?
Anything AI is going to be divisive, but I think you already have a pretty good grasp of the issue. Adjacent to this there is stuff like EBSynth https://ebsynth.com/ which rotoscopes video footage. Joel Haver uses it to make all his animations solo and has a video going over the process - https://www.youtube.com/watch?v=tq_KOmXyVDo
What are some alternatives?
first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
animegan2-pytorch - PyTorch implementation of AnimeGANv2
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
neural-style-tf - TensorFlow (Python API) implementation of Neural Style
DFL-Colab - DeepFaceLab fork which provides IPython Notebook to use DFL with Google Colab
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
material-maker - A procedural textures authoring and 3D model painting tool based on the Godot game engine
articulated-animation - Code for Motion Representations for Articulated Animation paper
ArtGAN - ArtGAN + WikiArt: This work presents a series of new approaches to improve GAN for conditional image synthesis and we name the proposed model as “ArtGAN”.
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
texture-synthesis - 🎨 Example-based texture synthesis written in Rust 🦀