awesome-pretrained-stylegan2
stylegan2-ada
| | awesome-pretrained-stylegan2 | stylegan2-ada |
|---|---|---|
| Mentions | 7 | 21 |
| Stars | 1,247 | 1,781 |
| Growth | - | 0.4% |
| Activity | 1.8 | 0.0 |
| Latest commit | almost 2 years ago | 6 months ago |
| Language | Python | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-pretrained-stylegan2
-
List of sites/programs/projects that use OpenAI's CLIP neural network for steering image/video creation to match a text description
Many of the items on the first list below are Google Colaboratory ("Colab") notebooks, which run in a web browser; for more info, see the Google Colab FAQ. Some Colab notebooks create output files in the remote computer's file system; these files can be accessed by clicking the Files icon in the left part of the Colab window. For the BigGAN image generators on the first list that allow the initial class (i.e. type of object) to be specified, here is a list of the 1,000 BigGAN classes. For the StyleGAN image generators on the first list that allow the specification of the StyleGAN2 .pkl file, here is a list of them.
-
[TRYPOPHOBIA WARNING]: Lucid Sonic Nightmares.
There are so many ways to work with AI art. If you don't know any code or AI libraries, the easiest way is to use a Colab notebook. I used Lucid Sonic Dreams, a video generator built on StyleGAN2, with a pretrained StyleGAN model called Trypophobia and a song from some weird album I dug up. A full list of pretrained models can be found here. I set mine up from their GitHub repo, but you can try it yourself with this Colab notebook using Google's GPUs; it takes minimal knowledge of Python and GAN theory, an hour's read at most. Have fun!
-
Quick and Easy GAN Domain Adaptation explained: Sketch Your Own GAN by Sheng-Yu Wang et al. 5 minute summary
Hi! That's the point: you actually don't need an entire dataset, just a pretrained generator and a few sketches of the poses that you want to generate! For example, you can take any model from https://github.com/justinpinkney/awesome-pretrained-stylegan2, sketch a couple of target images, and apply the "Sketch Your Own GAN" method. If you have any more questions, I'll try to answer them.
-
Pre-trained StyleGAN2 model
For some more good pretrained StyleGAN2 weights: https://github.com/justinpinkney/awesome-pretrained-stylegan2 (unfortunately some of the download links are dead though)
-
Synthetic Pink Floyd
I suspect it was WikiArt from justinpinkney's StyleGAN2 collection.
-
[P] Stylegan on ~5k images
I found this page after a quick Google search: https://github.com/justinpinkney/awesome-pretrained-stylegan2. If this one doesn't work, there are others. You can also just use StyleGAN (v1) and get great results; I'm not sure v2 is much better.
-
How to make a pretrained StyleGan model?
like these models: https://github.com/justinpinkney/awesome-pretrained-stylegan2
stylegan2-ada
-
Getty Images will cease to accept all submissions created using AI generative models
If you smudge just a few locations I doubt it would fool a simple discriminator. You could also train a discriminator that is robust to post-processing by using augmentations. This was popular with StyleGAN models: https://github.com/NVlabs/stylegan2-ada
-
Someone posted my art on this subreddit and it reached the front page without credit, so I thought I'd post something myself
https://github.com/NVlabs/stylegan2-ada + clip guided diffusion
- [P] Play around with StyleGAN2 in your browser
-
AI will shape up the workflow of the future. Here's a simple implementation of NVidia's StyleGAN inside Blender!
StyleGAN2-ADA is a neural network that is good at learning styles from images: you give it a dataset and it "learns" the style into a file (a trained model). In this example, I load a model and, given a random seed, generate a random texture that is applied to the object's material.
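The seed-to-texture step described above can be sketched roughly as follows. This is a minimal, hypothetical sketch: `Z_DIM = 512` is the default StyleGAN2-ADA latent size, and the actual image synthesis (which needs the repo and a trained `.pkl` file) is only indicated in a comment; the seed-to-latent derivation, however, mirrors what the repo's `generate.py` does with `np.random.RandomState`.

```python
import numpy as np

Z_DIM = 512  # default latent dimensionality assumed for StyleGAN2-ADA

def seed_to_latent(seed: int, z_dim: int = Z_DIM) -> np.ndarray:
    """Derive a deterministic latent vector from an integer seed,
    the same way stylegan2-ada's generate.py seeds its RandomState."""
    return np.random.RandomState(seed).randn(1, z_dim)

# With the real repo loaded you would now run the generator, e.g. (TF version):
#   images = Gs.run(z, None, truncation_psi=0.7, randomize_noise=False)
# Here we just show that the latent is reproducible per seed:
z_a = seed_to_latent(42)
z_b = seed_to_latent(42)
print(z_a.shape)              # (1, 512)
print(bool(np.allclose(z_a, z_b)))  # True: same seed -> same texture
```

Because the latent is a pure function of the seed, re-running the node with the same seed always reproduces the same texture.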
-
How do you generate those latent walk animations?
You have to modify the code, it's line 60 in https://github.com/NVlabs/stylegan2-ada/blob/main/generate.py
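For context, a latent walk is just interpolation between per-seed latent vectors before each generator call, one interpolated `z` per animation frame. A minimal sketch with NumPy only (the `z_dim=512` default and the generator call are assumptions, not code from the repo):

```python
import numpy as np

def latent_walk(seed_a: int, seed_b: int, steps: int = 60, z_dim: int = 512):
    """Linearly interpolate between the latents of two seeds,
    yielding one z vector per animation frame."""
    z0 = np.random.RandomState(seed_a).randn(z_dim)
    z1 = np.random.RandomState(seed_b).randn(z_dim)
    for t in np.linspace(0.0, 1.0, steps):
        # Each yielded z would be fed to the generator to render one frame.
        yield (1.0 - t) * z0 + t * z1

frames = list(latent_walk(0, 1, steps=10))
print(len(frames))      # 10
print(frames[0].shape)  # (512,)
```

Rendering each interpolated latent and concatenating the images gives the smooth morphing animation; spherical interpolation (slerp) is often preferred over this linear version because it better preserves the norm of the Gaussian latents.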
- [D] Do I need to apply spectral norm to my embedding matrix when training a conditional W-GAN?
- Can I train a model on 100 images of homes and have it draw a couple "average" homes?
-
New 'The Sculpture 3'. 3d sculpting + neural network
No, I don't, but as for training, I just use the default TF stylegan2-ada repo (https://github.com/NVlabs/stylegan2-ada).
-
[R] EigenGAN: Layer-Wise Eigen-Learning for GANs
You should check out StyleGAN2-ADA; the TensorFlow implementation works on Colab and can be trained in less than 12 hours.
- gamma
What are some alternatives?
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
stylegan2_pytorch - A Pytorch implementation of StyleGAN2
dl-colab-notebooks - Try out deep learning models online on Google Colab
clip-guided-diffusion - A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI.
stylegan - StyleGAN - Official TensorFlow Implementation
ml-art-colabs - A list of Machine Learning Art Colabs
LiminalGan - A stylegan2 model trained on liminal space images
Awesome-Text-to-Image - (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis.
GAN_stability - Code for paper "Which Training Methods for GANs do actually Converge? (ICML 2018)"