stylegan2 VS ffhq-dataset

Compare stylegan2 vs ffhq-dataset and see what are their differences.


StyleGAN2 - Official TensorFlow Implementation (by NVlabs)


Flickr-Faces-HQ Dataset (FFHQ) (by NVlabs)
                 stylegan2                                  ffhq-dataset
Mentions         25                                         4
Stars            8,738                                      2,282
Growth           1.9%                                       2.8%
Activity         2.9                                        2.4
Latest commit    about 1 month ago                          24 days ago
Language         Python                                     Python
License          GNU General Public License v3.0 or later   GNU General Public License v3.0 or later
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.
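
The site doesn't publish its exact activity formula, so as a rough illustration only, a recency-weighted commit score could look like the following Python sketch (the half-life and the function name are assumptions, not the real metric):

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Hypothetical recency-weighted commit count: each commit's
    contribution halves every half_life_days."""
    now = time.time()
    half_life = half_life_days * 86400.0  # half-life in seconds
    return sum(0.5 ** (max(0.0, now - ts) / half_life)
               for ts in commit_timestamps)

# A project with many recent commits scores higher than one whose
# commits are all a year old, matching the description above.
```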


Posts with mentions or reviews of stylegan2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-29.


Posts with mentions or reviews of ffhq-dataset. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-09-12.
  • [P] Training StyleGAN2 in Jax (FFHQ and Anime Faces)
    2 projects | 12 Sep 2021
    I trained on FFHQ and Danbooru2019 Portraits with resolution 512x512.
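
    As context for the 512x512 figure: preparing a face dataset at a fixed resolution usually comes down to a square center-crop plus a resize. A minimal Pillow sketch (the directory layout and *.png glob are assumptions):

```python
from pathlib import Path
from PIL import Image

def prepare_images(src_dir, dst_dir, size=512):
    """Center-crop each image to a square, then resize to size x size."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        w, h = img.size
        s = min(w, h)
        left, top = (w - s) // 2, (h - s) // 2
        img = img.crop((left, top, left + s, top + s))
        img = img.resize((size, size), Image.LANCZOS)
        img.save(Path(dst_dir) / path.name)
```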
  • Facebook apology as AI labels black men 'primates'
    1 project | 6 Sep 2021
    > Which makes it an inexcusable mistake to make in 2021 - how are you not testing for this?

    They probably are, but not well enough. These things can be surprisingly hard to detect. Post hoc it is easy to see the bias, but it isn't so easy before you deploy the models.

    If we take the racial connotations out of it, we could say that the algorithm is doing quite well, because it got the larger hierarchical class, primate, correct. The algorithm doesn't know the racial connotations; it just knows the data and whatever metric it was optimizing. BUT considering the racial and historical context, this is NOT an acceptable answer (not even close).

    I've made a few comments in the past about bias and how many machine learning people are deploying models without understanding them. This is what happens when you don't try to understand statistics, and particularly long-tail distributions. gumboshoes mentioned that Google just removed the primate labels. That's a solution, but honestly not a great one (technically speaking) - though it is far easier than technically fixing the problem (I'd wager that putting a strong loss penalty on misclassifying a black person as an ape is not enough). If you follow the links from jcims you might notice that a lot of those faces are white. Would it be all that surprising if Google trained on the FFHQ (Flickr) dataset?[0] A dataset known to have a strong bias towards white faces. We actually saw this when PULSE[1] turned Obama white (note that if you didn't know the person in the left picture was black, and who they were, the output is a decent (key word) representation). So it is pretty likely that _some_ problems could simply be fixed by better datasets (this was part of the LeCun controversy last year).
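
    For concreteness, the "strong loss penalty" idea above is cost-sensitive training: you weight specific misclassifications via a cost matrix rather than treating all errors equally. A minimal PyTorch sketch of one such loss (the class count and cost values are made up for illustration):

```python
import torch
import torch.nn.functional as F

def cost_sensitive_loss(logits, targets, cost_matrix):
    """Expected-cost loss: cost_matrix[i, j] is the penalty for
    predicting class j when the true class is i."""
    probs = F.softmax(logits, dim=1)
    costs = cost_matrix[targets]              # (batch, num_classes)
    return (probs * costs).sum(dim=1).mean()

# Made-up example: 3 classes, with one misclassification penalized 50x.
cost = torch.ones(3, 3) - torch.eye(3)       # cost 1 off-diagonal, 0 on it
cost[0, 2] = 50.0                            # the error we care most about
loss = cost_sensitive_loss(torch.randn(8, 3),
                           torch.randint(0, 3, (8,)), cost)
```

    Even with a huge penalty on one cell, the model can still fail on rare long-tail inputs it never saw, which is the commenter's point.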

    Though datasets aren't the only problem here. ML can algorithmically highlight bias in datasets. Research papers are often metric hacking, going for the highest accuracy they can get[2]. This leaderboardism undermines some of the usage, and there's often a disconnect between researchers and those in production. With large and complex datasets we tend to chase leaderboard scores until we reach sufficient accuracy, and only then start focusing on bias (or, more often and sadly, we just move to a more complex dataset and start the whole process over again). There aren't many people working on the bias aspects of ML systems (both data bias and algorithmic bias), but as more people put these tools into production we're running into walls. Many of them are not thinking about how these models were trained or the bias they contain. They go to the leaderboard, pick the best pre-trained model, and hit go, maybe tuning on their own dataset. Tuning doesn't eliminate the bias from pre-training (it can actually amplify it!). ~~Money~~Scale is NOT all you need, as GAMF often tries to sell (nor is augmentation, as some try to sell).

    These problems won't be solved without significant research into both data and algorithmic bias. They won't be solved until those in production understand these principles and robust testing methods are created to find these biases, and until people understand that a good ImageNet (or even JFT-300M) score doesn't mean your model will generalize well to real-world data (though there is a correlation).

    So with that in mind, I'll make a prediction: rather than seeing fewer of these mistakes, we're going to see more (I'd actually argue that a lot of this is already happening that you just don't see). The AI hype isn't dying down, and more people are entering the field who don't want to learn the math. "Throw a neural net at it" is not, and never will be, the answer. Anyone saying that is selling snake oil.

    I don't want people to think I'm anti-ML; in fact, I'm an ML researcher. But there's a hard reality we need to face in our field. We've made a lot of exciting progress in the last decade, but we've got a long way to go as well. We can't just have everyone focusing on leaderboard scores and expect to solve our problems.

  • Innovative Technology NVIDIA StyleGAN2
    5 projects | 15 Jun 2021
    To obtain the FFHQ dataset (datasets/ffhq), please refer to the Flickr-Faces-HQ repository.
  • [R] FFHQ Alignment Bug, Crooked Dataset
    1 project | 11 Mar 2021
    In the landmark parsing section, on line 268, you can see where groups of landmarks overlap by one point: each group is set to end on the first landmark of the next group, rather than on the landmark immediately before it. Here's a map of the points for reference.
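
    The off-by-one being described would look roughly like this in Python (the landmark indices are hypothetical; half-open slices that end immediately before the next group are the usual fix):

```python
import numpy as np

lm = np.zeros((68, 2))  # e.g. 68 (x, y) facial landmarks

# Buggy grouping as described: each group ends ON the first landmark
# of the next group, so adjacent groups overlap by one point.
lm_eye_left_buggy  = lm[36:43]   # index 42 belongs to the next group
lm_eye_right_buggy = lm[42:49]

# Fixed grouping: half-open ranges, no overlap.
lm_eye_left  = lm[36:42]
lm_eye_right = lm[42:48]
```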

What are some alternatives?

When comparing stylegan2 and ffhq-dataset you can also consider the following projects:

stylegan - StyleGAN - Official TensorFlow Implementation

pix2pix - Image-to-image translation with conditional adversarial nets


stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement

awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download

stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation

waifu2x - Image Super-Resolution for Anime-Style Art

LiminalGan - A stylegan2 model trained on liminal space images

repology-rules - Package normalization ruleset for Repology

progressive_growing_of_gans - Progressive Growing of GANs for Improved Quality, Stability, and Variation

stylegan2-generated-image - High-resolution image generation results from a generative adversarial network using StyleGAN2