stylegan2 vs Wav2Lip
| | stylegan2 | Wav2Lip |
|---|---|---|
| Mentions | 40 | 34 |
| Stars | 10,753 | 9,208 |
| Growth | 0.2% | - |
| Activity | 0.0 | 5.0 |
| Latest commit | about 1 year ago | 6 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan2
-
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
I don't know. If you're really curious, you can just try it: https://github.com/NVlabs/stylegan2
-
Used thispersondoesnotexist.com, then expanded it with DALL-E
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
Show HN: Food Does Not Exist
> The denoising part of a denoising autoencoder refers to the noise applied to its input
Agree, it converts a noisy image to a denoised image. But the odd thing is, when you put a noisy image into a StyleGAN2 encoder, you get latents which the decoder will turn into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser.
> These differences lead to learned distributions in the latent space that are entirely different
I also agree there. The training for a denoising auto-encoder and for a GAN network is different, leading to different distributions which are sampled for generating the images. But the architecture is still very similar, meaning the limits of what can be learned should be the same.
> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all
Yes, the discriminator in a GAN won't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine it with a so-called "projection", which is effectively an encoder to convert images to latents. So people use a pipeline of "image to latent encoder" + "latent to image decoder".
That whole pipeline is very similar to an auto-encoder. For example, here's an NVIDIA paper about how they round-trip from image to latent to image with StyleGAN: https://arxiv.org/abs/1912.04958 My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
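To make the analogy concrete, here is a minimal PyTorch sketch of that round-trip used as a denoiser. `G` stands in for a pretrained StyleGAN2 generator (latent to image) and `w_avg` for its average latent; both are placeholders, and the loss and step count are illustrative rather than NVIDIA's actual projector settings.

```python
# Minimal sketch of StyleGAN2 "projection" used as a denoiser (PyTorch).
# `G` (pretrained generator, latent -> image) and `w_avg` (average latent)
# are assumed placeholders; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def project(G, target, w_avg, steps=500, lr=0.01):
    """Optimize a latent w so that G(w) reconstructs the target image."""
    w = w_avg.clone().requires_grad_(True)   # start from the mean latent
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G(w), target)      # image L2 loss, as in the paper
        loss.backward()
        opt.step()
    return w.detach()

# G can only produce images on its learned manifold of clean faces, so
# decoding the projected latent of a *noisy* input yields a denoised image:
# denoised = G(project(G, noisy_image, w_avg))
```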
- "Why yes I totally believe the 'Xinjiang Police Files', they got photos of REAL (100% not AI generated) detainees!"
-
How did they code Viola AI (face to cartoon)
These problems are usually solved with CNN encoder-decoder frameworks, typically GANs (generative adversarial networks; see StyleGAN2).
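To illustrate the general shape of such a framework, here is a toy convolutional encoder-decoder in PyTorch; the layer widths are arbitrary, and real face-to-cartoon models are far deeper and trained adversarially against a discriminator.

```python
# Toy convolutional encoder-decoder (PyTorch); layer widths are arbitrary.
# Real face-to-cartoon models are much deeper and GAN-trained.
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(             # image -> compact features
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(             # features -> image
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```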
-
AI morphs many faces together to all sing Scatman
This is the result of two different models. The first looks like a latent space interpolation of StyleGAN2, and the mouth movements are without a doubt from Wav2Lip.
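For reference, latent-space interpolation amounts to decoding points along the line between two latent vectors. A rough PyTorch sketch, where `G` is a placeholder for a pretrained StyleGAN2 generator:

```python
# Sketch of latent-space interpolation; `G` is a placeholder for a
# pretrained StyleGAN2 generator mapping a (1, 512) latent to an image.
import torch

z_a = torch.randn(1, 512)                 # latent behind face A
z_b = torch.randn(1, 512)                 # latent behind face B

frames = []
for t in torch.linspace(0.0, 1.0, steps=60):
    z = (1 - t) * z_a + t * z_b           # linear blend of the two latents
    frames.append(G(z))                   # decoded frames morph A into B
```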
-
What A.I. tool is this?
OP: if you want to run this at higher resolution, you should probably look at running it yourself, using something like this: https://github.com/NVlabs/stylegan2
-
Imagined ML model deployment on normal machine, is it possible?
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
I'm implementing StyleGAN2 with Keras. I was worried it wasn't working, but after some 300K training steps it's finally starting to converge. (+ plot of what the first (4x4) part looks like)
A few of you might've seen an earlier post of mine about this project (or the repost that got more upvotes 🙃), and I've since improved the code and network after more thoroughly reading and understanding the official StyleGAN2 implementation.
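For anyone wondering what the "first (4x4) part" refers to: StyleGAN2 synthesis starts from a single learned 4x4 constant tensor rather than from the latent directly. A rough Keras sketch of that input layer (names and shapes are illustrative, not the poster's code or the official layer):

```python
# Rough sketch of StyleGAN2's learned 4x4 constant input in Keras.
# Shapes/names are illustrative, not the poster's or NVIDIA's code.
import tensorflow as tf

class ConstantInput(tf.keras.layers.Layer):
    def __init__(self, channels=512, size=4, **kwargs):
        super().__init__(**kwargs)
        # One learned 4x4 feature map shared by every generated image;
        # variation comes from the style modulation applied in later blocks.
        self.const = self.add_weight(
            name="const",
            shape=(1, size, size, channels),
            initializer="random_normal",
            trainable=True,
        )

    def call(self, batch_size):
        # Repeat the learned constant across the batch dimension.
        return tf.tile(self.const, [batch_size, 1, 1, 1])
```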
-
Is it just me or has Google Colab Pro become a lot more restrictive lately?
So I've been a Pro+ subscriber since around November which I mainly use to train GANs. I have multiple Google accounts, let's call them Account 1, 2, and 3. Accounts 1 and 2 are normal Google accounts and Account 3 is an account I got from my university after I graduated which has unlimited storage.
Wav2Lip
-
Show HN: Sync (YC W22) – an API for fast and affordable lip-sync at scale
Hey HN, we’re sync. (https://synclabs.so/). We’re building fast + lightweight audio-visual models to create, modify, and understand humans in video.
You can check out more about us and our company in this video here: https://bit.ly/3TV27rd
Our first API lets you lip-sync a person in a video to audio in any language, zero-shot. You can check out some examples here (https://bit.ly/3IT3UXk)
Here’s a demo showing how it works and how to sync your first video / audio: https://bit.ly/4ablRwo
Our playground + api is live, you can play with our models here: https://app.synclabs.so/
Four years ago we open-sourced Wav2Lip (https://github.com/Rudrabha/Wav2Lip), the first model to lip-sync anyone to any audio w/o having to train for each speaker. Even now, it's the most widely used lip-syncing model (almost 9k GitHub stars).
Human lip-sync enables interesting features for many products – you can use it to seamlessly translate videos from one language to another, create personalized ads / video messages to send to your customers, or clone yourself so you never have to record a piece of content again.
We’re excited about this area of research / the models we’re building because they can be impactful in many ways:
[1] we can dissolve language as a barrier
check out how we used it to dub the entire 2-hour Tucker Carlson interview with Putin speaking fluent English: https://vimeo.com/914605299
imagine millions gaining access to knowledge, entertainment, and connection — regardless of their native tongue.
realtime at the edge takes us further — live multilingual broadcasts + video calls, even walking around Tokyo w/ a Vision Pro 2 speaking English while everyone else speaks Japanese.
[2] we can move the human-computer interface beyond text-based-chat
keyboards / mice are lossy + low bandwidth. human communication is rich and goes beyond just the words we say. what if we could compute w/ a face-to-face interaction?
Many people get carried away w/ the fact LLMs can generate, but forget they can also read. The same is true for these audio/visual models — generation unlocks a portion of the use-cases, but understanding humans from video unlocks huge potential.
Embedding context around expressions + body language in inputs / outputs would help us interact w/ computers in a more human way.
[3] and more
powerful models small enough to run at the edge could unlock a lot:
eg.
-
Ideas to recreate audio
If you're technically inclined, you can use https://github.com/Rudrabha/Wav2Lip to sync the lip movements to the new audio.
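For the curious, the repo is driven through its command-line inference script; a minimal Python wrapper around the README's invocation might look like this (the checkpoint and file paths are placeholders you'd supply yourself):

```python
# Run Wav2Lip's inference script on a video + replacement audio (placeholder
# paths; assumes the repo is cloned and a pretrained checkpoint downloaded).
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained weights
        "--face", "input_video.mp4",   # video whose lips will be re-synced
        "--audio", "new_audio.wav",    # the new audio to sync to
    ],
    cwd="Wav2Lip",   # directory of the cloned repo
    check=True,
)
```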
-
How to make deep fake lip sync using Wav2Lip
This is the GitHub link: https://github.com/Rudrabha/Wav2Lip
-
Dark Brandon going hard
Video mapping onto audio: now you have audio with coherent back-and-forth dialogue. To get the looped video puppets, you find a relatively stable interview clip (in this channel, and in many of Athene's other ones, the clips of the people just stay in one place). Then feed the audio + video clip into a lip-sync algorithm like this: https://bhaasha.iiit.ac.in/lipsync/
- Is it possible to sync a lip and facial expression animation with audio in real time?
-
A little bedtime story by the AI nanny | Stable Diffusion + GPT = a match made in latent space
It's not really animating, just lip sync and face restoration. Here I used https://github.com/Rudrabha/Wav2Lip and https://github.com/TencentARC/GFPGAN respectively.
-
Elevenlabs voice clone and janky avatarify with wav2lip added.
I just used the web-based Wav2Lip demo: https://bhaasha.iiit.ac.in/lipsync/ Haven't used the plan in a while, but the Colab gives much better results. This was just a quick and dirty example done entirely on the phone.
- retromash - The Tide is High / Thinking Out Loud (Blondie, Ed Sheeran)
-
Who knows how to create long-form & cheap AI avatar content? The three main platforms (Synthesia, Movio, & D-ID) all charge over $20 a month for ~ 15 minutes of content, but this TikTok user streamed for 90 hours… how did he pull that off?
https://github.com/Rudrabha/Wav2Lip Demo: https://youtu.be/0fXaDCZNOJc
- Video editing with AI
What are some alternatives?
stylegan - StyleGAN - Official TensorFlow Implementation
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
pix2pix - Image-to-image translation with conditional adversarial nets
first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
chatgpt-raycast - ChatGPT raycast extension
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
DeepFaceLive - Real-time face swap for PC streaming or video calls
lightweight-gan - Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
lucid-sonic-dreams
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time