stylegan2-ada
ziyadedher
| | stylegan2-ada | ziyadedher |
|---|---|---|
| Mentions | 21 | 2 |
| Stars | 1,781 | 7 |
| Growth | 0.4% | - |
| Activity | 0.0 | 9.5 |
| Latest commit | 6 months ago | 8 days ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
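The exact formula behind the activity number is not published; as an illustration only, a recency-weighted score of this kind can be sketched with an assumed exponential decay (the half-life below is a made-up parameter, not the index's real one):

```python
# Hypothetical activity score: each commit contributes a weight that decays
# with age, so recent commits count more than old ones. The half-life is an
# assumption for illustration, not the tracker's actual formula.
def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum of per-commit weights; a commit `half_life_days` old
    counts half as much as one made today."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Five recent commits outscore five old ones, matching the description above.
recent = activity_score([1, 2, 3, 5, 8])
stale = activity_score([200, 220, 240, 260, 280])
```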
stylegan2-ada
- Getty Images will cease to accept all submissions created using AI generative models
If you smudge just a few locations, I doubt it would fool even a simple discriminator. You could also train a discriminator that is robust to post-processing by using augmentations; this was popular with StyleGAN models: https://github.com/NVlabs/stylegan2-ada
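The augmentation idea mentioned above (which StyleGAN2-ADA applies adaptively) can be sketched roughly: corrupt the images the discriminator sees so it cannot rely on fragile pixel statistics that post-processing would destroy. This is a minimal NumPy illustration, not the NVLabs implementation:

```python
# Sketch of discriminator-input augmentation: both real and fake batches are
# randomly flipped and brightness-jittered before scoring, so the
# discriminator must learn content, not easily-smudged pixel statistics.
import numpy as np

def augment(batch, rng):
    """Random horizontal flip + brightness jitter for an NHWC float batch."""
    if rng.random() < 0.5:
        batch = batch[:, :, ::-1, :]        # horizontal flip
    batch = batch * rng.uniform(0.8, 1.2)   # brightness jitter
    return np.clip(batch, 0.0, 1.0)

rng = np.random.default_rng(0)
reals = rng.random((4, 8, 8, 3))
fakes = rng.random((4, 8, 8, 3))
# In a training loop, D would score augment(reals) and augment(fakes)
# instead of the raw images.
aug_reals = augment(reals, rng)
aug_fakes = augment(fakes, rng)
```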
- Someone posted my art on this subreddit and it reached the front page without credit, so I thought I'd post something myself
https://github.com/NVlabs/stylegan2-ada + clip guided diffusion
- [P] Play around with StyleGAN2 in your browser
- AI will shape the workflow of the future. Here's a simple implementation of NVIDIA's StyleGAN inside Blender!
StyleGAN2-ADA is a neural network that is good at learning styles from images: you give it a dataset, and it "learns" that dataset's style into a trained model file. In this example, I load a model and, given a random seed, generate a random texture that is applied to the object's material.
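The seed-to-texture step relies on the pattern used in the stylegan2-ada repo's generate.py: an integer seed deterministically produces a latent vector `z`, which the generator network (not loaded in this sketch) turns into an image. Only the seed-to-latent part is shown here, since the rest needs the trained model:

```python
# Seed-to-latent mapping as used by stylegan2-ada's generate.py: each seed
# deterministically yields a 512-dim latent z. The generator call itself is
# omitted; it would map z to the output image/texture.
import numpy as np

def latent_from_seed(seed, dim=512):
    return np.random.RandomState(seed).randn(1, dim)

z = latent_from_seed(42)
# Because the mapping is deterministic, a texture can be reproduced later
# from its seed alone.
```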
- How do you generate those latent walk animations?
You have to modify the code, it's line 60 in https://github.com/NVlabs/stylegan2-ada/blob/main/generate.py
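The idea behind a latent walk is to interpolate between the latents of two seeds and render each intermediate latent as one animation frame. A minimal sketch of the interpolation (the generator call, e.g. the repo's `Gs.run(...)`, is left out):

```python
# Latent walk sketch: linearly interpolate between the latents of two seeds;
# each intermediate z would be fed to the generator to produce one frame.
import numpy as np

def latent_walk(seed_a, seed_b, n_frames, dim=512):
    za = np.random.RandomState(seed_a).randn(dim)
    zb = np.random.RandomState(seed_b).randn(dim)
    for t in np.linspace(0.0, 1.0, n_frames):
        yield (1.0 - t) * za + t * zb   # frame latent; pass to the generator

frames = list(latent_walk(0, 1, 5))
```

Stringing the rendered frames together (e.g. with ffmpeg) gives the smooth morphing animation; some implementations use spherical interpolation instead of the linear blend shown here.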
- [D] Do I need to apply spectral norm to my embedding matrix when training a conditional W-GAN?
- Can I train a model on 100 images of homes and have it draw a couple "average" homes?
- New 'The Sculpture 3'. 3D sculpting + neural network
No, I don't, but as for training I just use the default TensorFlow stylegan2-ada repo (https://github.com/NVlabs/stylegan2-ada).
- [R] EigenGAN: Layer-Wise Eigen-Learning for GANs
You should check out StyleGAN2-ADA; the TensorFlow implementation works on Colab and can be trained in less than 12 hours.
- gamma
ziyadedher
- Play Around with StyleGAN2 in the Browser
I built a little page to run and manipulate StyleGAN2 in the browser.
https://ziyadedher.com/faces
It was pretty fun learning about ONNX and how to port GANs to the web. You can play around with the random seeds and also distort the intermediate latents to produce some really wacky results. You can check out a GIF on Twitter (https://twitter.com/ziyadedher/status/1477161436635795463?s=...).
Let me know if you come up with anything cool!
The source is kinda trash, but you can check it out at https://github.com/ziyadedher/ziyadedher.
- [P] Play around with StyleGAN2 in your browser
Yep! But the code for this page in particular is really trash since I kinda just threw it together: https://github.com/ziyadedher/ziyadedher; you'll find the page in src/pages/hacks/darkarts.tsx (sorry for no direct links, I'm on my phone).
What are some alternatives?
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
stylegan2_pytorch - A Pytorch implementation of StyleGAN2
clip-guided-diffusion - A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI.
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
LiminalGan - A stylegan2 model trained on liminal space images
GAN_stability - Code for paper "Which Training Methods for GANs do actually Converge? (ICML 2018)"
EigenGAN-Tensorflow - EigenGAN: Layer-Wise Eigen-Learning for GANs (ICCV 2021)
stable-diffusion-webui - Stable Diffusion web UI