stylegan2
awesome-pretrained-stylegan2
| | stylegan2 | awesome-pretrained-stylegan2 |
|---|---|---|
| Mentions | 40 | 7 |
| Stars | 10,753 | 1,247 |
| Growth | 0.2% | - |
| Activity | 0.0 | 1.8 |
| Latest commit | about 1 year ago | almost 2 years ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan2
-
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
I don't know. If you're really curious, you can just try it: https://github.com/NVlabs/stylegan2
-
Used thispersondoesnotexist.com, then expanded it with DALL-E
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
Show HN: Food Does Not Exist
> The denoising part of a denoising autoencoder refers to the noise applied to its input
Agreed, it converts a noisy image into a denoised image. But the odd thing is, when you put a noisy image into a StyleGAN2 encoder, you get latents which the decoder will turn into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser.
> These differences lead to learned distributions in the latent space that are entirely different
I also agree there. The training for a denoising auto-encoder and for a GAN network is different, leading to different distributions which are sampled for generating the images. But the architecture is still very similar, meaning the limits of what can be learned should be the same.
> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all
Yes, the discriminator in a GAN won't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine it with a so-called "projection", which is effectively an encoder to convert images to latents. So people use a pipeline of "image to latent encoder" + "latent to image decoder".
That whole pipeline is very similar to an auto-encoder. For example, this NVIDIA paper (https://arxiv.org/abs/1912.04958) shows how they round-trip from image to latent to image with StyleGAN. My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
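To make the "projection as encoder, synthesis as decoder" idea concrete, here is a rough sketch of that round trip using the official NVlabs/stylegan2 code base (TensorFlow 1.x). The FFHQ checkpoint name comes from that repo's README; the `noisy_face.npy` input is a placeholder, and the real projector also optimizes per-layer noise inputs, which is skipped here.

```python
import numpy as np
import pretrained_networks   # from the NVlabs/stylegan2 repo
import projector             # from the NVlabs/stylegan2 repo

# Load a pretrained generator; Gs is the snapshot used for inference.
_G, _D, Gs = pretrained_networks.load_networks('gdrive:networks/stylegan2-ffhq-config-f.pkl')

proj = projector.Projector()
proj.set_network(Gs)

# Placeholder: a noisy image batch of shape [1, 3, H, W] at the generator's
# resolution, scaled to roughly [-1, 1] like run_projector.py does.
noisy = np.load('noisy_face.npy')

# "Encoder": optimize latents so the generator reproduces the target image.
proj.start(noisy)
while proj.get_cur_step() < proj.num_steps:
    proj.step()
latents = proj.get_dlatents()          # W+ latents, shape [1, num_layers, 512]

# "Decoder": re-synthesize from the projected latents. The generator can only
# emit images on its learned manifold, so the result tends to look denoised.
denoised = Gs.components.synthesis.run(latents, randomize_noise=False)
```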
- "Why yes I totally believe the 'Xinjiang Police Files', they got photos of REAL (100% not AI generated) detainees!"
-
How did they code Viola AI (face to cartoon)
These problems are usually solved with CNN encoder-decoder frameworks, typically GANs (Generative Adversarial Networks; see StyleGAN2).
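For readers unfamiliar with the term, below is a minimal toy illustration of the encoder-decoder idea in plain PyTorch (my own sketch, not the app's actual model): compress the photo into a small feature map, then decode it back into the target style's pixel space. Real face-to-cartoon systems add skip connections, adversarial losses, and far larger networks.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Toy image-to-image network: downsample, then upsample."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

photo = torch.randn(1, 3, 256, 256)   # stand-in for a real face photo
cartoon = EncoderDecoder()(photo)     # same-size output in the "cartoon" domain
```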
-
AI morphs many faces together to all sing Scatman
This is the result of two different models. The first looks like a latent space interpolation of StyleGan2 and the mouth movements are without a doubt from wav2lip.
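For anyone curious how the first half of that pipeline is typically done, here is a hedged sketch of a latent-space interpolation with the official NVlabs/stylegan2 code (TensorFlow 1.x); the FFHQ checkpoint and the 60-frame count are example choices, and the wav2lip lip-sync pass would run on the rendered frames afterwards.

```python
import numpy as np
import dnnlib.tflib as tflib
import pretrained_networks   # from the NVlabs/stylegan2 repo

_G, _D, Gs = pretrained_networks.load_networks('gdrive:networks/stylegan2-ffhq-config-f.pkl')
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)

# Two endpoint faces, each defined by a random z latent.
z0 = np.random.RandomState(1).randn(1, *Gs.input_shape[1:])
z1 = np.random.RandomState(2).randn(1, *Gs.input_shape[1:])

frames = []
for t in np.linspace(0.0, 1.0, 60):          # 60 in-between frames
    z = (1.0 - t) * z0 + t * z1              # straight line in latent space
    img = Gs.run(z, None, truncation_psi=0.7, randomize_noise=False,
                 output_transform=fmt)
    frames.append(img[0])                    # uint8 HWC frame for the video
```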
-
What A.I. tool is this?
OP: if you want to run this at higher resolution, you should probably look at running it yourself, using something like this: https://github.com/NVlabs/stylegan2
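For reference, generating images with that repo is a one-liner once TensorFlow 1.14/1.15 and a CUDA GPU are set up; the command below follows the pattern in the repo's README, with the FFHQ checkpoint, seeds, and truncation value as example settings to adapt.

```
python run_generator.py generate-images \
    --network=gdrive:networks/stylegan2-ffhq-config-f.pkl \
    --seeds=6600-6625 --truncation-psi=0.5
```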
-
Imagined ML model deployment on normal machine, is it possible?
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
I'm implementing StyleGAN2 with Keras. I was worried it wasn't working, but after some 300K training steps it's finally starting to converge. (+ plot of what the first (4x4) part looks like)
A few of you might've seen an earlier post of mine about this project (or the repost that got more upvotes 🙃), and I've improved the code and network since then after more thoroughly reading and understanding the official StyleGAN2 implementation.
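To give a flavour of what such a re-implementation involves, here is a small sketch of one StyleGAN2 ingredient, the 8-layer mapping network that turns z into the style vector w, written with the Keras functional API. This is an illustrative snippet based on the published architecture, not the poster's code; a full implementation also needs modulated convolutions, the synthesis network, and the R1 and path-length regularizers.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 512

def build_mapping_network(num_layers=8):
    z = keras.Input(shape=(LATENT_DIM,))
    # StyleGAN normalizes z before the MLP (pixel-norm style normalization).
    w = layers.Lambda(lambda x: x * tf.math.rsqrt(
        tf.reduce_mean(tf.square(x), axis=1, keepdims=True) + 1e-8))(z)
    for _ in range(num_layers):
        w = layers.Dense(LATENT_DIM)(w)
        w = layers.LeakyReLU(0.2)(w)
    return keras.Model(z, w, name="mapping_network")

mapping = build_mapping_network()
w = mapping(tf.random.normal((4, LATENT_DIM)))   # a batch of 4 style vectors
```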
-
Is it just me or has Google Colab Pro become a lot more restrictive lately?
So I've been a Pro+ subscriber since around November which I mainly use to train GANs. I have multiple Google accounts, let's call them Account 1, 2, and 3. Accounts 1 and 2 are normal Google accounts and Account 3 is an account I got from my university after I graduated which has unlimited storage.
awesome-pretrained-stylegan2
-
List of sites/programs/projects that use OpenAI's CLIP neural network for steering image/video creation to match a text description
Many of the items on the first list below are Google Colaboratory ("Colab") notebooks, which run in a web browser; for more info, see the Google Colab FAQ. Some Colab notebooks create output files in the remote computer's file system; these files can be accessed by clicking the Files icon in the left part of the Colab window. For the BigGAN image generators on the first list that allow the initial class (i.e. type of object) to be specified, here is a list of the 1,000 BigGAN classes. For the StyleGAN image generators on the first list that allow the specification of the StyleGAN2 .pkl file, here is a list of them.
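Most of those notebooks share the same core loop: render an image, score it against the text prompt with CLIP, and nudge the image (or the generator's latent) to raise the score. Below is a hedged, self-contained sketch of that loop using the openai/CLIP package that optimizes raw pixels directly; the real notebooks usually swap the pixel parameterization for a BigGAN/StyleGAN2/VQGAN latent, and the prompt, step count, and learning rate here are arbitrary.

```python
import torch
import torch.nn.functional as F
import clip   # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _preprocess = clip.load("ViT-B/32", device=device, jit=False)
model = model.float()   # keep everything in fp32 for this sketch

text = clip.tokenize(["a watercolor painting of a fox"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)

# Optimize a 224x224 RGB image directly; sigmoid keeps pixel values in [0, 1].
pixels = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([pixels], lr=0.05)

for step in range(300):
    image = torch.sigmoid(pixels)
    image_features = model.encode_image(image)   # CLIP mean/std normalization omitted
    # Maximize cosine similarity between the image and the prompt.
    loss = -F.cosine_similarity(image_features, text_features).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```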
-
[TRYPOPHOBIA WARNING]: Lucid Sonic Nightmares.
There are so many ways to work with AI art. The easiest, if you don't know any code or AI libraries, is to use a Colab notebook. I used Lucid Sonic Dreams, a video generator based on StyleGAN2, with a pretrained StyleGAN model called Trypophobia and a song from some weird album I dug up. A full list of pretrained models can be found here. I set up my own from their GitHub repo, but you can use this Colab notebook to try it yourself on Google's GPUs with minimal knowledge of Python and GAN theory; it's an hour's read at most. Have fun!
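For anyone who prefers a local script over the Colab route, the lucidsonicdreams Python package wraps the same pipeline. The snippet below is only a sketch from memory of its interface, so double-check the argument names against the package's README; the song file and the "trypophobia" style name (one of the pretrained models from the list) are placeholders.

```python
from lucidsonicdreams import LucidSonicDream

# 'style' can name one of the pretrained StyleGAN2 models the package knows
# about (it pulls them from the awesome-pretrained-stylegan2 list).
dream = LucidSonicDream(song="my_track.mp3", style="trypophobia")
dream.hallucinate(file_name="trypophobia_video.mp4", resolution=360)
```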
-
Quick and Easy GAN Domain Adaptation explained: Sketch Your Own GAN by Sheng-Yu Wang et al. 5 minute summary
Hi! That's the point, you actually don't need an entire dataset. Just a pretrained generator and a few sketches of the poses that you want to generate! For example, you can take any model from the awesome-pretrained-stylegan2 list (https://github.com/justinpinkney/awesome-pretrained-stylegan2), sketch a couple of target images, and apply the "Sketch Your Own GAN" method. If you have any more questions, I'll try to answer them.
-
Pre-trained StyleGAN2 model
For some more good pretrained StyleGAN2 weights: https://github.com/justinpinkney/awesome-pretrained-stylegan2 (unfortunately some of the download links are dead though)
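If it helps anyone, loading one of those .pkl files with the official NVlabs/stylegan2 code base (TensorFlow 1.x) looks roughly like this; the URL is a placeholder for whichever working download link you pick from the list.

```python
import pickle
import dnnlib
import dnnlib.tflib as tflib   # both ship with the NVlabs/stylegan2 repo

tflib.init_tf()
with dnnlib.util.open_url('https://example.com/some-pretrained-stylegan2.pkl',
                          cache_dir='.stylegan2-cache') as f:
    _G, _D, Gs = pickle.load(f, encoding='latin1')

print(Gs.output_shape)   # e.g. [None, 3, 1024, 1024] for a 1024x1024 model
```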
-
Synthetic Pink Floyd
I suspect it was WikiArt from justinpinkney's StyleGAN2 collection.
-
[P] Stylegan on ~5k images
I found this page after a quick Google search (https://github.com/justinpinkney/awesome-pretrained-stylegan2), but if this one doesn't work there are others. You can also just use StyleGAN (v1) and get great results; I'm not sure that v2 is much better.
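If you do end up training on your own ~5k images, the rough workflow with the official NVlabs/stylegan2 repo looks like the commands below (TensorFlow 1.14/1.15, NVIDIA GPU); the paths and dataset name are placeholders, and with a dataset that small you would normally also enable mirror augmentation, as shown, or fine-tune from an existing checkpoint.

```
# 1. Pack the images (all at the same power-of-two resolution) into TFRecords:
python dataset_tool.py create_from_images ~/datasets/my-dataset ~/my-images

# 2. Launch training; config-f is the full StyleGAN2 configuration:
python run_training.py --num-gpus=1 --data-dir=~/datasets \
    --config=config-f --dataset=my-dataset --mirror-augment=true
```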
-
How to make a pretrained StyleGan model?
like these models: https://github.com/justinpinkney/awesome-pretrained-stylegan2
What are some alternatives?
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
stylegan - StyleGAN - Official TensorFlow Implementation
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
pix2pix - Image-to-image translation with conditional adversarial nets
dl-colab-notebooks - Try out deep learning models online on Google Colab
ml-art-colabs - A list of Machine Learning Art Colabs
lightweight-gan - Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two
Awesome-Text-to-Image - (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis.