GAN_stability vs stylegan2-ada

| | GAN_stability | stylegan2-ada |
|---|---|---|
| Mentions | 1 | 21 |
| Stars | 909 | 1,784 |
| Growth | - | 0.2% |
| Activity | 0.0 | 0.0 |
| Latest commit | over 4 years ago | 6 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GAN_stability
- [R] A survey on generative adversarial networks: fundamentals and recent advances - Link to free zoom lecture by the researcher in comments
Which Training Methods for GANs Do Actually Converge? [ICML 2018] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. arXiv: https://arxiv.org/abs/1801.04406, code: https://github.com/LMescheder/GAN_stability
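The paper behind GAN_stability analyzes GAN convergence and motivates the R1 regularizer, a penalty on the discriminator's gradient at real data points. A minimal numerical sketch of the R1 formula using a toy linear discriminator, where the input gradient is analytic (all names here are illustrative, not taken from the repo):

```python
import numpy as np

# Toy linear discriminator D(x) = w @ x + b; its gradient w.r.t. the
# input x is simply w, which lets us compute R1 without autodiff.
rng = np.random.RandomState(0)
w = rng.randn(4)
b = 0.1

def r1_penalty(real_batch, gamma=10.0):
    """R1 regularizer from Mescheder et al. 2018:
    (gamma / 2) * E_{x ~ p_data} [ ||grad_x D(x)||^2 ].
    For a linear D the input gradient is the constant vector w."""
    grads = np.tile(w, (real_batch.shape[0], 1))  # analytic gradient per sample
    return 0.5 * gamma * np.mean(np.sum(grads**2, axis=1))

real = rng.randn(8, 4)     # a batch of "real" samples
penalty = r1_penalty(real) # added to the discriminator loss during training
```

With a real network the gradient would come from autodiff, but the penalty term itself is exactly this expression.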
stylegan2-ada
- Getty Images will cease to accept all submissions created using AI generative models
If you smudge just a few locations, I doubt it would fool a simple discriminator. You could also train a discriminator that is robust to post-processing by using augmentations. This was popular with StyleGAN models: https://github.com/NVlabs/stylegan2-ada
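The idea referenced in this comment is that the same augmentation pipeline is applied to every image the discriminator sees, real or generated, so it cannot key on fragile pixel statistics that post-processing would destroy. A toy numpy sketch of that pattern, assuming simple flip-and-noise augmentations (not the actual ADA pipeline):

```python
import numpy as np

rng = np.random.RandomState(42)

def augment(images, flip_prob=0.5, noise_std=0.05):
    """Apply simple augmentations per image: a random horizontal flip
    and additive Gaussian noise. Shapes are (batch, height, width)."""
    out = images.copy()
    for i in range(out.shape[0]):
        if rng.rand() < flip_prob:
            out[i] = out[i][:, ::-1]                       # horizontal flip
        out[i] += rng.normal(0.0, noise_std, out[i].shape)  # pixel noise
    return out

# Both real and generated batches pass through the same augmentation
# pipeline before being scored by the discriminator.
real = rng.rand(4, 8, 8)
fake = rng.rand(4, 8, 8)
real_aug, fake_aug = augment(real), augment(fake)
```

StyleGAN2-ADA additionally adapts the augmentation probability during training; that adaptive control loop is omitted here.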
- Someone posted my art on this subreddit and it reached the front page without credit, so I thought I'd post something myself
https://github.com/NVlabs/stylegan2-ada + clip guided diffusion
- [P] Play around with StyleGAN2 in your browser
- AI will shape up the workflow of the future. Here's a simple implementation of NVidia's StyleGAN inside Blender!
StyleGAN2-ADA is a neural network that is good at learning styles from images: you give it a dataset and it 'learns' that dataset's style into a file (a trained model). In this example, I load a model and, given a random seed, generate a random texture which is applied to the object's material.
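The seed-to-texture path described above can be sketched as follows. Only the seed-to-latent step mirrors the pattern used in stylegan2-ada's generation script; the "generator" here is a stand-in random projection, not the real network:

```python
import numpy as np

Z_DIM = 512  # stylegan2-ada's default latent dimensionality

def latent_from_seed(seed, z_dim=Z_DIM):
    """Deterministically map an integer seed to a latent vector z,
    mirroring the seed -> RandomState -> randn pattern in stylegan2-ada."""
    rnd = np.random.RandomState(seed)
    return rnd.randn(1, z_dim)

# Stand-in "generator": a fixed random projection down to an 8x8
# grayscale texture, just to show the deterministic seed -> image path.
proj = np.random.RandomState(0).randn(Z_DIM, 64)

def generate_texture(seed):
    z = latent_from_seed(seed)
    return np.tanh(z @ proj).reshape(8, 8)  # squash values into [-1, 1]

tex_a = generate_texture(7)
tex_b = generate_texture(7)  # same seed -> identical texture
```

Because the latent is derived purely from the seed, the same seed always reproduces the same texture, which is what makes seed-based material generation workable inside Blender.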
- How do you generate those latent walk animations?
You have to modify the code; it's line 60 in https://github.com/NVlabs/stylegan2-ada/blob/main/generate.py
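A latent walk is just a sequence of interpolated latent vectors, each rendered to one animation frame by the generator. A standalone sketch of the interpolation step (spherical interpolation, common for roughly Gaussian StyleGAN latents; feeding the frames to an actual generator is omitted):

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1  # vectors are parallel: fall back to lerp
    s = np.sin(omega)
    return np.sin((1 - t) * omega) / s * z0 + np.sin(t * omega) / s * z1

rng = np.random.RandomState(1)
z_start, z_end = rng.randn(512), rng.randn(512)

# One latent per animation frame; each would be passed through the
# generator to render that frame of the walk.
frames = [slerp(z_start, z_end, t) for t in np.linspace(0, 1, 30)]
```

Stringing several such segments between random seeds and rendering every frame produces the looping walk animations seen in StyleGAN demos.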
- [D] Do I need to apply spectral norm to my embedding matrix when training a conditional W-GAN?
- Can I train a model on 100 images of homes and have it draw a couple "average" homes?
- New 'The Sculpture 3'. 3D sculpting + neural network
No, I don't, but as for training I just use the default TF stylegan2-ada repo (https://github.com/NVlabs/stylegan2-ada).
- [R] EigenGAN: Layer-Wise Eigen-Learning for GANs
You should check StyleGAN2-ADA; it works on Colab, and the TensorFlow implementation can be trained in less than 12 hours.
- gamma
What are some alternatives?
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
stylegan2_pytorch - A Pytorch implementation of StyleGAN2
clip-guided-diffusion - A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI.
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
LiminalGan - A stylegan2 model trained on liminal space images
EigenGAN-Tensorflow - EigenGAN: Layer-Wise Eigen-Learning for GANs (ICCV 2021)
ziyadedher - 🔥🧠Exclusive behind-the-scenes for ziyadedher.com!
stable-diffusion-webui - Stable Diffusion web UI