stylegan2-ada
clip-guided-diffusion
| | stylegan2-ada | clip-guided-diffusion |
|---|---|---|
| Mentions | 21 | 5 |
| Stars | 1,781 | 440 |
| Growth | 0.4% | - |
| Activity | 0.0 | 1.8 |
| Latest commit | 6 months ago | about 2 years ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan2-ada
- Getty Images will cease to accept all submissions created using AI generative models
If you smudge just a few locations I doubt it would fool a simple discriminator. You could also train a discriminator that is robust to post-processing by using augmentations. This was popular with StyleGAN models: https://github.com/NVlabs/stylegan2-ada
- Someone posted my art on this subreddit and it reached the front page without credit, so I thought I'd post something myself
https://github.com/NVlabs/stylegan2-ada + clip guided diffusion
- [P] Play around with StyleGAN2 in your browser
- AI will shape up the workflow of the future. Here's a simple implementation of NVidia's StyleGAN inside Blender!
StyleGAN2-ADA is a neural network that is good at learning styles from images: you give it a dataset, and it 'learns' the dataset's style into a file (a trained model). In this example, I load a model, generate a random texture from a random seed, and apply that texture to the object's material.
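The load-model-then-generate-from-seed flow described above can be sketched roughly as follows. This is a hedged sketch, assuming the NVlabs stylegan2-ada (TensorFlow) repo is importable; the pickle path, seed, and `generate_texture` name are placeholders, not part of the repo.

```python
# Sketch: generate one image from a trained StyleGAN2-ADA model,
# following the pattern used by the repo's generate.py.
# Assumes the NVlabs stylegan2-ada (TF) repo is on the Python path;
# pkl_path, seed, and this helper's name are hypothetical.

def generate_texture(pkl_path, seed, truncation_psi=0.7):
    """Load a trained model pickle and generate one image from a seed."""
    import pickle
    import numpy as np
    import dnnlib
    import dnnlib.tflib as tflib

    tflib.init_tf()
    with dnnlib.util.open_url(pkl_path) as f:
        _G, _D, Gs = pickle.load(f)  # Gs = long-term average of the generator

    # Map the integer seed to a latent vector, as generate.py does.
    z = np.random.RandomState(seed).randn(1, *Gs.input_shape[1:])
    fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
    images = Gs.run(z, None, truncation_psi=truncation_psi, output_transform=fmt)
    return images[0]  # H x W x 3 uint8 array, usable as a texture
```

The returned array can then be pushed into a Blender image datablock and assigned to the object's material.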
- How do you generate those latent walk animations?
You have to modify the code; it's line 60 in https://github.com/NVlabs/stylegan2-ada/blob/main/generate.py
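For context, a latent walk is just interpolation between the latent vectors of successive seeds. Here is a minimal pure-Python sketch of the idea; the Gaussian seed-to-latent mapping stands in for the repo's `np.random.RandomState(seed).randn(...)` call, and the function names are illustrative, not from the repo.

```python
# Sketch of a latent walk: linearly interpolate between the latent
# vectors of consecutive seeds, yielding one latent per animation frame.
import random

def latent(seed, dim=512):
    """Deterministic Gaussian latent vector for a seed (stand-in for randn)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def lerp(a, b, t):
    """Linear interpolation between two latent vectors at fraction t."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def latent_walk(seeds, steps=10):
    """All in-between latents for a walk through the given seeds.

    Each consecutive seed pair contributes `steps` frames; the final
    seed's own latent is reached as frame 0 of the next pair (or can be
    appended by the caller for the last segment).
    """
    frames = []
    for s0, s1 in zip(seeds, seeds[1:]):
        z0, z1 = latent(s0), latent(s1)
        for i in range(steps):
            frames.append(lerp(z0, z1, i / steps))
    return frames
```

In the real pipeline, each interpolated latent is fed to the generator to render one frame, and the frames are stitched into a video.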
- [D] Do I need to apply spectral norm to my embedding matrix when training a conditional W-GAN?
- Can I train a model on 100 images of homes and have it draw a couple "average" homes?
- New 'The Sculpture 3'. 3d sculpting + neural network
No, I don't. But as for training, I just use the default TF stylegan2-ada repo (https://github.com/NVlabs/stylegan2-ada).
- [R] EigenGAN: Layer-Wise Eigen-Learning for GANs
You should check out StyleGAN2-ADA; the TensorFlow implementation works on Colab and can be trained in less than 12 hours.
- gamma
clip-guided-diffusion
- [D] Which GAN is Jon Rafman using?
According to his bio he uses "clip-guided diffusion". Never heard of it before, but it appears not to use GANs; it combines a text model with an image classifier.
- Someone posted my art on this subreddit and it reached the front page without credit, so I thought I'd post something myself
But yeah, this software generates similar (though, to be fair, not nearly as "aesthetic") GIFs with a single terminal command and literally zero Photoshop.
- AI-generated image for "ghost town at night"
I used CLIP-guided diffusion to generate the image (see OpenAI's CLIP).
- Smoggy place. By AI
I used this: https://github.com/afiaka87/clip-guided-diffusion. No reference image at all, only the prompt "Steampunk town".
- Trying out new method of generating pixels from text
I used this method. It consumes about 8 GB of VRAM and takes about 20 minutes to generate one image. You can also run it in Colab. And if you get an unlucky seed, you have to reset the timer and start crafting your items again.
What are some alternatives?
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download
discoart - 🪩 Create Disco Diffusion artworks in one line
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
big-sleep - A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
stylegan2_pytorch - A Pytorch implementation of StyleGAN2
blended-diffusion - Official implementation for "Blended Diffusion for Text-driven Editing of Natural Images" [CVPR 2022]
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
LiminalGan - A stylegan2 model trained on liminal space images
GAN_stability - Code for paper "Which Training Methods for GANs do actually Converge? (ICML 2018)"
EigenGAN-Tensorflow - EigenGAN: Layer-Wise Eigen-Learning for GANs (ICCV 2021)
ziyadedher - 🔥🧠 Exclusive behind-the-scenes for ziyadedher.com!
stable-diffusion-webui - Stable Diffusion web UI