stylegan2
LiminalGan
| | stylegan2 | LiminalGan |
|---|---|---|
| Mentions | 40 | 3 |
| Stars | 10,753 | 8 |
| Growth | 0.5% | - |
| Activity | 0.0 | 4.2 |
| Last commit | 12 months ago | almost 3 years ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan2
-
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
I don't know. If you're really curious, you can just try it: https://github.com/NVlabs/stylegan2
-
Show HN: Food Does Not Exist
> The denoising part of a denoising autoencoder refers to the noise applied to its input
Agree, it converts a noisy image to a denoised image. But the odd thing is, when you put a noisy image into a StyleGAN2 encoder, you get latents which the decoder will turn into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser.
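A loose illustration of why that round trip denoises: if "clean" data lies on a low-dimensional manifold and the encoder/decoder can only represent points on that manifold, projecting a noisy input through them strips most of the noise. This toy sketch uses PCA as a stand-in for the encoder/decoder pair; nothing here is StyleGAN2 code, it just shows the principle under that assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Clean" data lives on a 5-dim subspace of a 100-dim space,
# a crude stand-in for images lying on a low-dim manifold.
basis = rng.standard_normal((5, 100))
clean = rng.standard_normal((200, 5)) @ basis

# Learn the subspace from clean samples (the "training" step).
_, _, vt = np.linalg.svd(clean, full_matrices=False)
components = vt[:5]                    # orthonormal rows spanning the manifold
encode = lambda x: x @ components.T    # "image -> latent"
decode = lambda z: z @ components      # "latent -> image"

# Add noise, then round-trip through encode/decode.
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
denoised = decode(encode(noisy))

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

Only the component of the noise that happens to lie inside the 5-dim subspace survives the round trip, so the reconstruction error drops by roughly the ratio of the dimensionalities.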
> These differences lead to learned distributions in the latent space that are entirely different
I also agree there. The training for a denoising auto-encoder and for a GAN is different, leading to different distributions which are sampled for generating the images. But the architecture is still very similar, meaning the limits of what can be learned should be the same.
> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all
Yes, the discriminator in a GAN won't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine it with a so-called "projection", which is effectively an encoder to convert images to latents. So people use a pipeline of "image to latent encoder" + "latent to image decoder".
That whole pipeline is very similar to an auto-encoder. For example, here's an NVIDIA paper about how they round-trip from image to latent to image with StyleGAN: https://arxiv.org/abs/1912.04958 My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
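The "projection" step in that round trip is typically posed as an optimization: find the latent whose decoded image is closest to the target under an L2 loss. A minimal sketch of that pattern, using a toy linear "generator" and plain gradient descent (this is not the actual StyleGAN2 projector; it only shows the optimize-the-latent idea):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "generator": maps a 4-dim latent to a 64-dim "image".
G = rng.standard_normal((4, 64))
generate = lambda w: w @ G

# An image known to be in the generator's range, so loss 0 is reachable.
target = generate(rng.standard_normal(4))

# Project: gradient descent on w minimizing ||generate(w) - target||^2.
w = np.zeros(4)
lr = 0.005
for _ in range(1000):
    residual = generate(w) - target
    grad = 2.0 * residual @ G.T   # gradient of the L2 loss w.r.t. w
    w -= lr * grad

loss = np.sum((generate(w) - target) ** 2)
```

With a real StyleGAN2 generator the loss surface is non-convex and the official projector adds tricks (perceptual loss, noise regularization), but the loop shape is the same.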
-
AI morphs many faces together to all sing Scatman
This is the result of two different models. The first looks like a latent space interpolation of StyleGAN2, and the mouth movements are without a doubt from Wav2Lip.
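A latent space interpolation of the sort described is just a path between two latent codes, decoded frame by frame. A hedged numpy sketch (the generator itself is omitted; 512 is StyleGAN2's latent dimensionality, and the `lerp` helper here is illustrative, not library code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two latent codes drawn from the Gaussian prior.
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)

def lerp(a, b, t):
    """Linear interpolation between latents; demos often use slerp
    instead, which respects the Gaussian prior's geometry better."""
    return (1.0 - t) * a + t * b

# 60 intermediate latents; fed through a generator they become 60 frames.
frames = [lerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 60)]
```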
-
Imagined ML model deployment on normal machine, is it possible?
Code for training your own [original] [simple] [light]
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
- [D] Lazy regularization for WGAN-GP training.
-
Open source/commercially available face generator?
Check the license here, but it looks like anything based on stylegan2 can't be used commercially. Google brought me to this page, which seems to also be based on StyleGAN, but I guess they are paying for a commercial license for it? So you could use it, or maybe even reach out to Nvidia directly.
-
Ask HN: What useful unknown website do you wish more people knew about?
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia. More: https://github.com/NVlabs/stylegan2
2. https://www.deepl.com/translator (translate text in 20 languages including idioms and phrases)
3. https://remove.bg/ (remove any background)
4. regex101.com (self explanatory)
5. Photopea.com (a free web-based Photoshop alternative)
6. https://tineye.com/ , http://fotoforensics.com/ Do you want to know if an image is shopped, cropped or otherwise altered? Using these two tools you've got a good chance of finding out. Tineye is reverse image search on steroids, and FotoForensics provides free image analysis tools.
7. https://thenounproject.com/ (Icons and Photos for everything)
-
AI generated image for "Kerbal Space Program".
If you’d like to know more, here’s a video explaining the method and the paper: https://arxiv.org/abs/1912.04958
-
Pre-trained StyleGAN2 model
The implementation and trained models are available on the StyleGAN2 GitHub repo.
LiminalGan
-
An interpolation from an AI trained on liminal images
Center-crop the images so they are square and filter them to a consistent resolution. To do this I used the code here: https://github.com/limgan/LiminalGan/blob/main/center_crop_images.py. The usage is `make_dataset(in_dir, out_dir, resolution)`.
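The linked script isn't reproduced here, but the per-image step behind a `make_dataset(in_dir, out_dir, resolution)` helper might look roughly like this. This is a sketch under the assumption that the repo does a center crop followed by a resize; the actual script likely uses PIL with proper resampling rather than this crude nearest-neighbour sampling:

```python
import numpy as np

def center_crop(img, resolution):
    """Center-crop an HxWxC array to a square of side min(H, W),
    then resize to resolution x resolution by index sampling."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = img[top:top + side, left:left + side]
    # crude nearest-neighbour resize: pick evenly spaced source indices
    idx = np.arange(resolution) * side // resolution
    return square[idx][:, idx]
```

A `make_dataset` wrapper would then just loop over the files in `in_dir`, apply this, and write the results to `out_dir`.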
https://github.com/limgan/LiminalGan is where I'm trying to make it workable; it's sort of difficult with the models being over 2 GB.
What are some alternatives?
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
stylegan - StyleGAN - Official TensorFlow Implementation
pix2pix - Image-to-image translation with conditional adversarial nets
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
lightweight-gan - Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two
lucid-sonic-dreams
ffhq-dataset - Flickr-Faces-HQ Dataset (FFHQ)
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download
stylegan2-generated-image - High-resolution image generation results from a generative adversarial network using StyleGAN2
waifu2x - Image Super-Resolution for Anime-Style Art