stylegan2ada
StyleGAN2-ada for practice (by eps696)
maua-stylegan2
This is the repo for my experiments with StyleGAN2. There are many like it, but this one is mine. Contains code for the paper Audio-reactive Latent Interpolations with StyleGAN. (by JCBrouwer)
| | stylegan2ada | maua-stylegan2 |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 174 | 179 |
| Growth | - | - |
| Activity | 5.3 | 0.0 |
| Last commit | 29 days ago | almost 3 years ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
stylegan2ada
Posts with mentions or reviews of stylegan2ada.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-02.
-
GANs Specialization review please
This is the easiest-to-train StyleGAN that I have found. StyleGAN3 and the official PyTorch StyleGAN variants from Nvidia are horribly difficult to train. Training your own GAN model is a pretty good way to learn about them, and this is an easy starting point if you are already a developer and know your way around a command line. You can generate a dataset of about 5000 images with Stable Diffusion and train a GAN model from scratch on a single RTX 3090 in about 16 hours: https://github.com/eps696/stylegan2ada
-
[R] StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN
Vadim Epstein had multi-latent blending working at least in December last year (although his repo was published a little later).
-
Pretrained 1792x1024 StyleGAN2 model
You don't even really need to do model surgery: all the convolutions accept arbitrary spatial dimensions, so you can use network-bending padding operations to get any output size you like. Vadim Epstein's repo does something slightly different, which lets you use different latents per section: https://github.com/eps696/stylegan2ada. Mine has the simpler, single-latent version: https://github.com/JCBrouwer/maua-stylegan2. For training, all you have to do is change the size of your constant layer, or just graft on some more upsamples. Either way, there's not much point in training at odd rectangular resolutions: you'll get pretty much identical results by forcefully resizing your dataset to a square and then stretching the generated images back out to the rectangle. Unless you have a ridiculous amount of VRAM, larger models don't make much sense either, especially because it's hard to find 10k images at such a large resolution.
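The resolution arithmetic behind this can be sketched: each StyleGAN2 synthesis stage doubles the spatial size of the learned constant, so a rectangular constant yields a rectangular output without touching any convolution weights. (A minimal sketch; the 4×4 constant and the number of doublings are the standard StyleGAN2 configuration, not values taken from either repo's code.)

```python
def output_resolution(const_hw, n_doublings):
    """Spatial size of the generated image, given the learned
    constant's (height, width) and the number of doubling stages."""
    h, w = const_hw
    return h * 2 ** n_doublings, w * 2 ** n_doublings

# Standard square model: 4x4 constant, 8 doublings -> 1024x1024.
print(output_resolution((4, 4), 8))  # (1024, 1024)

# Stretch the constant to 4x7 and the same weights emit 1024x1792,
# the resolution from the post title.
print(output_resolution((4, 7), 8))  # (1024, 1792)
```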
maua-stylegan2
Posts with mentions or reviews of maua-stylegan2.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-10-04.
-
I'm stumped with installing PyTorch.
Originally I wanted to run https://github.com/JCBrouwer/maua-stylegan2. I was trying to run convert_weight.py, but it resulted in a shape mismatch error in torch (torch.Size([1, 512, 4, 4]) vs torch.Size([1])), so I tried the version here https://github.com/rosinality/stylegan2-pytorch/blob/master/convert_weight.py and the result was the same.
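A mismatch like that usually means a whole parameter failed to line up by name during conversion, not that the values are subtly off. A quick diff of the two checkpoints' parameter shapes pinpoints which layer the script trips on. (An illustrative sketch; the parameter name and the dicts below are hypothetical, not taken from either repo's code.)

```python
def report_mismatches(src_shapes, dst_shapes):
    """Compare parameter shapes by name between two checkpoints and
    collect every mismatch, returning (name, src_shape, dst_shape)."""
    mismatches = []
    for name, src in src_shapes.items():
        dst = dst_shapes.get(name)
        if dst is not None and src != dst:
            mismatches.append((name, src, dst))
    return mismatches

# Hypothetical entries reproducing the shapes from the error above:
src = {"input.constant": (1, 512, 4, 4)}
dst = {"input.constant": (1,)}
print(report_mismatches(src, dst))
# [('input.constant', (1, 512, 4, 4), (1,))]
```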
-
Pretrained 1792x1024 StyleGAN2 model
You don't even really need to do model surgery: all the convolutions accept arbitrary spatial dimensions, so you can use network-bending padding operations to get any output size you like. Vadim Epstein's repo does something slightly different, which lets you use different latents per section: https://github.com/eps696/stylegan2ada. Mine has the simpler, single-latent version: https://github.com/JCBrouwer/maua-stylegan2. For training, all you have to do is change the size of your constant layer, or just graft on some more upsamples. Either way, there's not much point in training at odd rectangular resolutions: you'll get pretty much identical results by forcefully resizing your dataset to a square and then stretching the generated images back out to the rectangle. Unless you have a ridiculous amount of VRAM, larger models don't make much sense either, especially because it's hard to find 10k images at such a large resolution.
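The claim that the convolutions accept arbitrary dimensions is easy to verify directly: a fixed kernel imposes no resolution, so the same weights slide over a square or a rectangular feature map alike. (A minimal numpy sketch of a generic "same" convolution, not code from either repo.)

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 'same'-padded convolution. It works on any
    spatial size, which is why a trained kernel fixes no resolution."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

k = np.random.randn(3, 3)
# The same kernel applies to square and rectangular inputs alike:
print(conv2d_same(np.random.randn(4, 4), k).shape)  # (4, 4)
print(conv2d_same(np.random.randn(4, 7), k).shape)  # (4, 7)
```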
What are some alternatives?
When comparing stylegan2ada and maua-stylegan2 you can also consider the following projects:
stylegan-matlab-playground - A MATLAB implementation of the StyleGAN generator
stylegan2-pytorch - Implementation of Analyzing and Improving the Image Quality of StyleGAN (StyleGAN 2) in PyTorch
stylegan2-surgery - StyleGAN2 fork with scripts and convenience modifications for creative media synthesis
SOAT - Official PyTorch repo for StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN.
sd-webui-lobe-theme - 🅰️ Lobe theme - The modern theme for stable diffusion webui, exquisite interface design, highly customizable UI, and efficiency boosting features.