stylegan
stylegan2
| | stylegan | stylegan2 |
|---|---|---|
| Mentions | 31 | 40 |
| Stars | 13,933 | 10,753 |
| Growth | 0.5% | 0.2% |
| Activity | 0.0 | 0.0 |
| Last commit | 16 days ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan
-
An AI artist isn't an artist
I've been following generative AI since 2017, when Nvidia released their first GAN paper, and the results have always fascinated me. I trained my own models with their repo and then experimented with other open-source projects. I went through the pain of assembling my own dataset, tweaking code parameters to achieve what I was looking for, and dealing with all kinds of hardware/software issues. I know it's not easy. (Here's a screenshot of a motorbike GAN model I was training in 2018, taken after 5 hours of training on a GTX 1080: https://imgur.com/a/SIULFhR. Or this, cinema camera output from another locally trained model.) So yeah, I have a couple of ideas about how generative AI works. Yes, things were that bad a few years ago; the technology has come a long way. Setting up and using something like Stable Diffusion with the AUTOMATIC1111 web UI isn't really a complex process, though generating AI art locally is always going to feel more rewarding than using a cloud-based service.
-
Clearview AI scraped 30 billion images from Facebook and gave them to cops: it puts everyone into a 'perpetual police line-up'
Their algorithm is public, you could do it yourself if you have the proper hardware: https://github.com/NVlabs/stylegan
-
StyleGAN-T Nvidia, 30x Faster than SD?
Umm, StyleGAN was the first decent image generation model, and it was producing great images from random seeds 5 years ago. Now, that's with the obvious caveat that each model was trained to produce one specific type of image, and it helped immensely if the training images were all aligned the same way. Diffusion models are certainly the trendy current architecture for image generation, but AFAIK there's no fundamental theoretical limitation on the output quality of any architecture, other than the general rule that more parameters is better.
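A note on "images from random seeds": in the official StyleGAN sample scripts, an integer seed deterministically produces a latent vector z, which the generator then decodes into an image, so the same seed always yields the same picture. A minimal sketch of that seed-to-latent step (the 512-dimensional latent size matches StyleGAN's default; the function name is my own, not from the repo):

```python
import numpy as np

def latent_from_seed(seed: int, dim: int = 512) -> np.ndarray:
    """Map an integer seed to a latent vector z ~ N(0, I).

    A seeded RNG makes the latent (and hence the generated
    image) fully reproducible: same seed -> same z -> same image.
    """
    rng = np.random.RandomState(seed)
    return rng.randn(dim)

z_a = latent_from_seed(42)
z_b = latent_from_seed(42)
z_c = latent_from_seed(43)

assert np.array_equal(z_a, z_b)      # same seed, identical latent
assert not np.array_equal(z_a, z_c)  # different seed, different latent
```

In the real pipeline, z would then be fed through the mapping network and synthesis network to produce the image; the sketch above only covers the seeding step.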
- The Concept Art Association updates their AI-restricting gofundme campaign, revealing their lack of AI understanding & nefarious plans! [detailed breakdown]
- This was taken outdoors with no special lighting
-
What the F**k
Jokes aside, ML moves extremely fast and our field is quickly advancing. The honest truth is that no researcher can keep up with anything beyond their own extremely niche corner. I'll show you an example. Here's what state-of-the-art image generation looked like in 2014 and in 2018, and here's what it looks like today (now highly controllable using text prompts instead of data prompts).
- Garfield
-
Teaching AI to Generate New Pokemon
The fundamental technology we will use in this work is a generative adversarial network. Specifically, the Style GAN variant.
-
A100 vs A6000 vs 3090 for computer vision and FP32/FP64
Based on my findings, we don't really need FP64 unless it's for certain medical applications. But "The Best GPUs for Deep Learning in 2020 — An In-depth Analysis" suggests the A100 outperforms the A6000 by ~50% in DL. Also, the StyleGAN project (GitHub - NVlabs/stylegan: StyleGAN - Official TensorFlow Implementation) uses an NVIDIA DGX-1 with 8 Tesla V100 16 GB GPUs (FP32 = 15 TFLOPS) to train on a dataset of high-res 1024x1024 images. I'm getting a bit uncertain whether my specific tasks would require FP64, since my dataset is also high-res images. If not, can I assume that 5x A6000 (120 GB total) could provide similar results for StyleGAN?
-
[D] Which gpu should I choose?
Yes, that's what I thought. But StyleGAN (https://github.com/NVlabs/stylegan) uses an NVIDIA DGX-1 with 8 Tesla V100 16 GB GPUs (FP32 = 15 TFLOPS) for training; I'm not sure whether that's related to its high-res training images or something else.
stylegan2
-
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
I don't know. If you're really curious, you can just try it: https://github.com/NVlabs/stylegan2
-
Used thispersondoesnotexist.com, then expanded it with DALL-E
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
Show HN: Food Does Not Exist
> The denoising part of a denoising autoencoder refers to the noise applied to its input
Agreed, it converts a noisy image to a denoised image. But the odd thing is that when you put a noisy image into a StyleGAN2 encoder, you get latents which the decoder then turns into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser.
> These differences lead to learned distributions in the latent space that are entirely different
I also agree there. The training for a denoising auto-encoder and for a GAN network is different, leading to different distributions which are sampled for generating the images. But the architecture is still very similar, meaning the limits of what can be learned should be the same.
> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all
Yes, the discriminator in a GAN won't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine it with a so-called "projection", which is effectively an encoder to convert images to latents. So people use a pipeline of "image to latent encoder" + "latent to image decoder".
That whole pipeline is very similar to an auto-encoder. For example, here's an NVIDIA paper about how they round-trip from image to latent to image with StyleGAN: https://arxiv.org/abs/1912.04958 My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
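The denoising-by-round-trip effect described above can be illustrated without any GAN at all: if the "decoder" can only produce outputs on a low-dimensional manifold, then encoding a noisy input and decoding it back necessarily discards the off-manifold part of the noise. A toy numpy analogue (using a linear PCA subspace as a stand-in for the image manifold a StyleGAN-like model learns; this is an illustration of the principle, not StyleGAN's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 64-dim vectors lying on an 8-dim subspace,
# standing in for natural images on a low-dim manifold.
basis = rng.standard_normal((64, 8))
latents = rng.standard_normal((500, 8))
clean = latents @ basis.T

# Add isotropic noise in the full 64-dim ambient space.
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

# "Encoder": project onto the top principal directions of the clean data;
# "decoder": map the latent code back to the ambient space.
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
components = vt[:8]

def encode(x):
    return (x - mean) @ components.T

def decode(z):
    return z @ components + mean

denoised = decode(encode(noisy))

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
assert err_denoised < err_noisy  # the round trip strips off-manifold noise
```

Only the noise component that happens to lie inside the 8-dim subspace survives the round trip, so the reconstruction error drops sharply. The same intuition applies when people project a noisy photo into StyleGAN2's latent space and re-synthesize it.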
- "Why yes I totally believe the 'Xinjiang Police Files', they got photos of REAL (100% not AI generated) detainees!"
-
How did they code Viola AI (face to cartoon)
These problems are usually solved with CNN encoder-decoder frameworks, typically a GAN (generative adversarial network; see StyleGAN2).
-
AI morphs many faces together to all sing Scatman
This is the result of two different models. The first looks like a latent-space interpolation with StyleGAN2, and the mouth movements are without a doubt from Wav2Lip.
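Latent-space interpolation, as mentioned above, works by walking between two latent vectors and decoding each intermediate point with the generator, which produces the smooth morphing effect. A minimal sketch of the interpolation step (the generator call itself is omitted; 512 matches StyleGAN2's default latent size):

```python
import numpy as np

def lerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Linear interpolation between two latent vectors."""
    return (1.0 - t) * z0 + t * z1

rng = np.random.default_rng(1)
z_start = rng.standard_normal(512)
z_end = rng.standard_normal(512)

# Decoding each frame's latent with a generator G would give a
# smooth morph from G(z_start) to G(z_end) -- the effect in the video.
frames = [lerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 30)]

assert np.allclose(frames[0], z_start)
assert np.allclose(frames[-1], z_end)
```

In practice people often interpolate in StyleGAN2's intermediate W space rather than in Z, since paths there tend to produce smoother, more semantically even transitions.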
-
What A.I. tool is this?
OP: if you want to run this at higher resolution, you should probably look at running it yourself, using something like this: https://github.com/NVlabs/stylegan2
-
Imagined ML model deployment on normal machine, is it possible?
StyleGAN2 (Dec 2019) - Karras et al. and Nvidia
-
I'm implementing StyleGAN2 with Keras. I was worried it wasn't working, but after some 300K training steps it's finally starting to converge. (+ plot of what the first (4x4) part looks like)
A few of you might've seen an earlier post of mine about this project (or the repost that got more upvotes 🙃). I've improved the code and network since then, after more thoroughly reading and understanding the official StyleGAN2 implementation.
-
Is it just me or has Google Colab Pro become a lot more restrictive lately?
So I've been a Pro+ subscriber since around November which I mainly use to train GANs. I have multiple Google accounts, let's call them Account 1, 2, and 3. Accounts 1 and 2 are normal Google accounts and Account 3 is an account I got from my university after I graduated which has unlimited storage.
What are some alternatives?
pix2pix - Image-to-image translation with conditional adversarial nets
Wav2Lip - This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For HD commercial model, please try out Sync Labs
lucid-sonic-dreams
DeOldify - A Deep Learning based project for colorizing and restoring old images (and video!)
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
ffhq-dataset - Flickr-Faces-HQ Dataset (FFHQ)
lightweight-gan - Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download