stylegan2
ffhq-dataset
| | stylegan2 | ffhq-dataset |
|---|---|---|
| Mentions | 40 | 13 |
| Stars | 10,753 | 3,447 |
| Growth | 0.2% | 0.0% |
| Activity | 0.0 | 0.0 |
| Last commit | about 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan2
-
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
I don't know. If you're really curious, you can just try it: https://github.com/NVlabs/stylegan2
-
Used thispersondoesnotexist.com, then expanded it with DALL-E
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
Show HN: Food Does Not Exist
> The denoising part of a denoising autoencoder refers to the noise applied to its input
Agree, it converts a noisy image to a denoised image. But the odd thing is, when you put a noisy image into a StyleGAN2 encoder, you get latents which the decoder will turn into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser.
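A toy linear sketch of why the encode/decode round trip denoises: if the decoder can only produce images on a low-dimensional manifold (here a made-up 4-dimensional subspace standing in for StyleGAN2's latent manifold), any noise component off that manifold is discarded by the round trip. None of this is StyleGAN2 code; the linear "encoder"/"decoder" are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Clean images" live on a 4-dim subspace of a 64-dim pixel space.
basis = np.linalg.qr(rng.normal(size=(64, 4)))[0]    # decoder: latent -> image
clean = rng.normal(size=(100, 4)) @ basis.T          # 100 clean "images"
noisy = clean + 0.3 * rng.normal(size=clean.shape)   # add pixel noise

latents = noisy @ basis        # "encoder": project image to latent
denoised = latents @ basis.T   # "decoder": noise outside the subspace is gone

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)   # the round trip reduced the noise
```

The bottleneck does the work: anything the decoder cannot represent never survives encoding, which is exactly the behavior described above.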
> These differences lead to learned distributions in the latent space that are entirely different
I also agree there. The training for a denoising autoencoder and for a GAN is different, leading to different distributions which are sampled for generating the images. But the architecture is still very similar, meaning the limits of what can be learned should be the same.
> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all
Yes, the discriminator in a GAN won't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine it with a so-called "projection", which is effectively an encoder to convert images to latents. So people use a pipeline of "image to latent encoder" + "latent to image decoder".
That whole pipeline is very similar to an auto-encoder. For example, here's an NVIDIA paper about how they round-trip from image to latent to image with StyleGAN (https://arxiv.org/abs/1912.04958). My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
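The "projection" step in that pipeline can be sketched as gradient descent on a latent code so that the generator's output matches the target image under an L2 loss. The random linear map `G` below is a hypothetical stand-in for the real StyleGAN2 generator, and the step size and iteration count are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(64, 8))        # "generator": latent (8) -> image (64)
target = G @ rng.normal(size=8)     # an image lying in the generator's range

w = np.zeros(8)                     # latent code to optimize
for _ in range(500):
    residual = G @ w - target
    w -= 0.1 * (2.0 / 64) * G.T @ residual   # gradient step on mean squared error

loss = np.mean((G @ w - target) ** 2)
print(loss < 1e-6)   # round trip image -> latent -> image is nearly exact
```

The real projection works the same way in spirit, just with a deep nonlinear generator and perceptual losses on top of the plain L2 term.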
- "Why yes I totally believe the 'Xinjiang Police Files', they got photos of REAL (100% not AI generated) detainees!"
-
How did they code Viola AI (face to cartoon)
These problems are usually solved with CNN encoder-decoder frameworks, typically GANs (Generative Adversarial Networks; see StyleGAN2).
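A shape-level sketch of the encoder-decoder idea: compress the photo to a compact code, then expand it back to full resolution, with the stylization learned in between. The pooling/repeat ops below are illustrative stand-ins, not code from any real face-to-cartoon system.

```python
import numpy as np

def encode(img):
    # 2x2 average pooling twice halves each spatial dimension twice: 64 -> 16.
    for _ in range(2):
        img = (img[0::2, 0::2] + img[0::2, 1::2] +
               img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return img

def decode(code):
    # Nearest-neighbour upsampling restores the original resolution.
    return code.repeat(4, axis=0).repeat(4, axis=1)

photo = np.random.default_rng(0).random((64, 64))
code = encode(photo)      # compact representation, shape (16, 16)
cartoon = decode(code)    # back to (64, 64), stylised/smoothed
print(code.shape, cartoon.shape)
```

In a trained system the fixed pooling and upsampling are replaced by learned convolutions, and an adversarial discriminator pushes the decoded output toward the cartoon style.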
-
AI morphs many faces together to all sing Scatman
This is the result of two different models. The first looks like a latent-space interpolation from StyleGAN2, and the mouth movements are without a doubt from Wav2Lip.
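Latent-space interpolation like this is just a linear walk between two latent codes, generating one frame per intermediate latent. The `generate` function below is a hypothetical stand-in for the StyleGAN2 generator, used only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(z):
    # Stand-in for G(z); the real generator maps a 512-dim latent to an image.
    return np.tanh(z)

z0, z1 = rng.normal(size=512), rng.normal(size=512)   # two "faces"
frames = [generate((1 - t) * z0 + t * z1)             # one frame per step
          for t in np.linspace(0, 1, 30)]
# Endpoints reproduce the two source faces; intermediate frames morph smoothly.
```

Stringing many such segments together, one per face, gives the continuous morphing seen in the video.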
-
What A.I. tool is this?
OP: if you want to run this at higher resolution, you should probably look at running it yourself, using something like this: https://github.com/NVlabs/stylegan2
-
Imagined ML model deployment on normal machine, is it possible?
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
I'm implementing StyleGAN2 with Keras. I was worried it wasn't working, but after some 300K training steps it's finally starting to converge. (+ plot of what the first (4x4) part looks like)
A few of you might've seen an earlier post of mine about this project (Or the repost that got more upvotes 🙃), and I've improved the code and network since then after more thoroughly reading and understanding the official StyleGAN2 implementation.
-
Is it just me or has Google Colab Pro become a lot more restrictive lately?
So I've been a Pro+ subscriber since around November which I mainly use to train GANs. I have multiple Google accounts, let's call them Account 1, 2, and 3. Accounts 1 and 2 are normal Google accounts and Account 3 is an account I got from my university after I graduated which has unlimited storage.
ffhq-dataset
- [SD 1.5] Swizz8-REAL is now available.
-
[R] How do paper authors deal with takedown requests?
Datasets like FFHQ consist of face images crawled from the Internet. While those images are published under CC licenses, the authors usually have not obtained consent from each person depicted in them. I guess that's why they are accepting takedown requests: people can ask to have their faces removed from the dataset.
- Collecting dataset
-
Artificial faces are more likely to be perceived as real faces than real faces
The real ones were taken from this dataset.
-
This sub is misrepresenting “Anti-AI” artists
NVIDIA's FFHQ says "Only images under permissive licenses were collected." https://github.com/NVlabs/ffhq-dataset
- Open image set of a non-celebrity that can be used for demoing Stable Diffusion tuning?
-
[D] Does anyone have a copy of the FFHQ 1024 scale images (90GB) ? and or a copy of the FFHQ Wild images (900GB) ?
The FFHQ dataset https://github.com/NVlabs/ffhq-dataset is a high quality, high resolution, and extremely well curated dataset that is used in many recent SOTA GAN papers and also has applications in many other areas.
- [N] [P] Access 100+ image, video & audio datasets in seconds with one line of code & stream them while training ML models with Activeloop Hub (more at docs.activeloop.ai, description & links in the comments below)
-
[P] Training StyleGAN2 in Jax (FFHQ and Anime Faces)
I trained on FFHQ and Danbooru2019 Portraits with resolution 512x512.
-
Facebook apology as AI labels black men 'primates'
> Which makes it an inexcusable mistake to make in 2021 - how are you not testing for this?
They probably are, but not well enough. These things can be surprisingly hard to detect. Post hoc it is easy to see the bias, but it isn't so easy before you deploy the models.
If we take racial connotations out of it then we could say that the algorithm is doing quite well because it got the larger hierarchical class correct, primate. The algorithm doesn't know the racial connotations, it just knows the data and what metric you were seeking. BUT considering the racial and historical context this is NOT an acceptable answer (not even close).
I've made a few comments in the past about bias and how many machine learning people are deploying models without understanding them. This is what happens when you don't try to understand statistics and particularly long tail distributions. gumboshoes mentioned that Google just removed the primate type labels. That's a solution, but honestly not a great one (technically speaking). But this solution is far easier than technically fixing the problem (I'd wager that putting a strong loss penalty for misclassifying a black person as an ape is not enough). If you follow the links from jcims then you might notice that a lot of those faces are white. Would it be all that surprising if Google trained from the FFHQ (Flickr) Dataset?[0] A dataset known to have a strong bias towards white faces. We actually saw that when Pulse[1] turned Obama white (do note that if you didn't know the left picture was a black person and who they were, this is a decent (key word) representation). So it is pretty likely that _some_ problems could simply be fixed by better datasets (this was part of the LeCun controversy last year).
Though datasets aren't the only problems here. ML can algorithmically highlight bias in datasets. Often research papers are metric hacking, or going for the highest accuracy that they can get[2]. This leaderboardism undermines some of the usage, and often there's a disconnect between researchers and those in production. With large and complex datasets we might be targeting leaderboard scores until we have a sufficient accuracy on that dataset before we start focusing on bias on that dataset (or more often we, sadly, just move to a more complex dataset and start the whole process over again). There aren't many people working on the bias aspects of ML systems (both data bias and algorithmic bias), but as more people are putting these tools into production we're running into walls. Many of these people are not thinking about how these models are trained or the bias that they contain. They go to the leaderboard and pick the best pre-trained model and hit go, maybe tuning on their dataset. Tuning doesn't eliminate the bias in the pre-training (it can actually amplify it!). ~~Money~~Scale is NOT all you need, as GAMF often tries to sell. (Or some try to sell augmentation as all you need.)
These problems won't be solved without significant research into both data and algorithmic bias. They won't be solved until those in production also understand these principles and robust testing methods are created to find these biases. Until people understand that a good ImageNet (or even JFT-300M) score doesn't mean your model will generalize well to real world data (though there is a correlation).
So with that in mind, I'll make a prediction: rather than seeing fewer of these mistakes, we're going to see more (I'd actually argue that there's a lot of this currently happening that you just don't see). The AI hype isn't dying down, and more people are entering the field who don't want to learn the math. "Throw a neural net at it" is not and never will be the answer. Anyone saying that is selling snake oil.
I don't want people to think I'm anti-ML. In fact I'm a ML researcher. But there's a hard reality we need to face in our field. We've made a lot of progress in the last decade that is very exciting, but we've got a long way to go as well. We can't just have everyone focusing on leaderboard scores and expect to solve our problems.
[0] https://github.com/NVlabs/ffhq-dataset
[1] https://twitter.com/Chicken3gg/status/1274314622447820801
[2] https://twitter.com/emilymbender/status/1434874728682901507
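For what it's worth, the "strong loss penalty" idea mentioned above can be sketched as class-weighted cross-entropy, where a mistake on a designated sensitive class costs more than the same mistake elsewhere. The weights, class indices, and function name here are made up for illustration; they are not from any real deployed system.

```python
import numpy as np

def weighted_cross_entropy(logits, label, class_weights):
    # Softmax cross-entropy scaled by a per-class weight on the true label.
    logits = logits - logits.max()                      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())   # log-softmax
    return -class_weights[label] * log_probs[label]

logits = np.array([2.0, 0.5, 0.1])      # model output for one example
weights = np.array([1.0, 10.0, 1.0])    # 10x penalty on mistakes for class 1

# Getting an example of the penalized class wrong costs far more:
print(weighted_cross_entropy(logits, 1, weights) >
      weighted_cross_entropy(logits, 0, weights))
```

As the comment argues, though, reweighting the loss only shifts the decision boundary; it doesn't fix a training set that barely covers the penalized class in the first place.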
What are some alternatives?
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
stylegan - StyleGAN - Official TensorFlow Implementation
pix2pix - Image-to-image translation with conditional adversarial nets
flaxmodels - Pretrained deep learning models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet, etc.
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
lightweight-gan - Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two
lucid-sonic-dreams
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download
LiminalGan - A stylegan2 model trained on liminal space images