LIA
VToonify
| | LIA | VToonify |
|---|---|---|
| Mentions | 7 | 16 |
| Stars | 561 | 3,461 |
| Growth | - | - |
| Activity | 4.8 | 1.0 |
| Last commit | 6 months ago | 6 months ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LIA
- Deepfakes in High-Resolution Created From a Single Photo
- d-id.com is an awesome AI tool to animate any character into a video and add human-like, yet artificial, voice-overs!
- Soup from a stone. Creating a Dreambooth model with just 1 image.
Try running a character through LIA (it's not as wobbly as Thin-Plate Spline), extract the frames, and batch-process them in CodeFormer (-w 0.9-1.1 so you don't lose identity). Pick the best of those.
- MegaPortraits: High-Res Deepfakes Created From a Single Photo
Older Latent Image version: https://github.com/wyhsirius/LIA
- Animating generated faces
I used this repo https://github.com/wyhsirius/LIA for the animation and RealESRGAN for upscaling the video.
- Stacking AI on top of AI until I get a feature film
- Latent Image Animator
VToonify
- FLiP Stack Weekly for 21 Jan 2023
- AI generated video: Best framework by Theo
This is VToonify; you can check it here: https://github.com/williamyang1991/VToonify
- FLiP Stack Weekly for 15-Jan-2023
- VToonify: Controllable high-resolution portrait video style transfer
- VToonify: Controllable High-Resolution Portrait Video Style Transfer
- VToonify - Controllable High-Resolution Portrait Video Style Transfer
- Hold on to your papers!
Here is the software repository.
- Soup from a stone. Creating a Dreambooth model with just 1 image.
VToonify is a machine learning model (a GAN) that, unlike Stable Diffusion, always reproduces the same output given the same input. For example, my character Midas went in looking like the right and came out looking like the left.
What are some alternatives?
Thin-Plate-Spline-Motion-Model-Windows - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
nicegui - Create web-based user interfaces with Python. The nice way.
storydalle
awk-raycaster - Pseudo-3D shooter written completely in gawk using raycasting technique
FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...
setfit - Efficient few-shot learning with Sentence Transformers
motion-diffusion-model - The official PyTorch implementation of the paper "Human Motion Diffusion Model"
git-re-basin - Code release for "Git Re-Basin: Merging Models modulo Permutation Symmetries"
Text2Light - [SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
whisper - Robust Speech Recognition via Large-Scale Weak Supervision