- first-order-model: This repository contains the source code for the paper "First Order Motion Model for Image Animation".
- Wav2Lip: This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation in the Wild", published at ACM Multimedia 2020. For an HD commercial model, try Sync Labs.
First Order Motion Model (easy): You give it a source image and a driving video, and the movements from the video are transferred to the image. The output has some jankiness and the resolution isn't that high, so I'd recommend running an AI upscaler on the result afterwards. https://github.com/AliaksandrSiarohin/first-order-model
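A rough sketch of how running it looks, based on my reading of the repo's README (the config, checkpoint, and input file names below are placeholders you'd substitute with your own downloads):

```shell
# Clone the repo and install its requirements first:
#   git clone https://github.com/AliaksandrSiarohin/first-order-model
#   pip install -r requirements.txt
# You also need a pretrained checkpoint (e.g. the vox model) from the
# links in the README.

python demo.py \
  --config config/vox-256.yaml \          # model config shipped with the repo
  --checkpoint path/to/vox-cpk.pth.tar \  # pretrained weights (placeholder path)
  --source_image path/to/source.png \     # the still image to animate
  --driving_video path/to/driving.mp4 \   # the video whose motion is copied
  --relative --adapt_scale                # options the README recommends for better results
```

The result is written out as a video (`result.mp4` by default, if I remember the README correctly); that's the file you'd feed to an upscaler.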
Wav2Lip (semi-easy): You give it an input video and pre-recorded audio, and it will lip-sync the video to the audio. Not sure if you can do it with just an image. https://github.com/Rudrabha/Wav2Lip
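Again as a sketch, the invocation from the Wav2Lip README looks roughly like this (checkpoint and input paths are placeholders; you download the weights from the links in the repo):

```shell
# Clone https://github.com/Rudrabha/Wav2Lip and install its requirements,
# then grab a pretrained checkpoint (wav2lip.pth or wav2lip_gan.pth).

python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \  # pretrained weights (placeholder path)
  --face input_video.mp4 \                         # video of the face to lip-sync
  --audio input_audio.wav                          # the speech to sync to
```

For what it's worth, the README suggests `--face` can also be a still image rather than a video, but I haven't verified that myself.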