data-efficient-gans vs stylegan2-ada-pytorch

| | data-efficient-gans | stylegan2-ada-pytorch |
|---|---|---|
| Mentions | 9 | 30 |
| Stars | 1,258 | 3,917 |
| Growth | 0.2% | 0.9% |
| Activity | 0.0 | 2.3 |
| Latest Commit | 6 months ago | 4 months ago |
| Language | Python | Python |
| License | BSD 2-clause "Simplified" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data-efficient-gans
-
[D] Has anyone tried GAN "tricks" on VAEs?
Code for https://arxiv.org/abs/2006.10738 found: https://github.com/mit-han-lab/data-efficient-gans
-
What StyleGan model to use for a custom dataset of small size?
I would like to make a tiny project with GANs using some high-quality pictures of a single individual. I am planning to collect around 500 of these and then x-flip them, but I am not sure which model I should consider for the training. I have used StyleGAN2-ADA for another project, which ended quite well, but there I had around 14k pictures; here the training set is much smaller, so I was thinking about using DiffAugment, which has seemingly promising results with just 100 images.
-
This Bot Crime Did Not Occur
I used a modified version of this repo, and there's also the official NVIDIA implementation, though neither has official notebooks. You can Google 'StyleGAN2 ADA Colab' and find a few starting points that way, but give me a few hours and I can clean up my notebook and post it here!
-
[P] Differentiable augmentation for GANs - Implementation and explanation
Paper: https://arxiv.org/abs/2006.10738
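A minimal sketch of the paper's core idea, assuming the `DiffAugment` helper from the repo's `DiffAugment_pytorch.py` is importable; the toy generator/discriminator and loss form below are placeholders, not the repo's training code. The key point is that the same differentiable augmentations are applied to both real and generated images inside the losses, so gradients flow through them.

```python
import torch
import torch.nn.functional as F
from DiffAugment_pytorch import DiffAugment  # assumes the file from mit-han-lab/data-efficient-gans is on the path

policy = 'color,translation,cutout'  # augmentation policy from the paper

# Toy stand-ins for a real generator/discriminator.
G = torch.nn.Sequential(torch.nn.Linear(64, 3 * 32 * 32), torch.nn.Tanh())
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))

def d_loss(real, z):
    # The SAME differentiable augmentation is applied to both real and fake
    # images inside the discriminator loss.
    fake = G(z).view(-1, 3, 32, 32)
    real_logits = D(DiffAugment(real, policy=policy))
    fake_logits = D(DiffAugment(fake.detach(), policy=policy))
    return F.softplus(fake_logits).mean() + F.softplus(-real_logits).mean()

def g_loss(z):
    # The generator is also trained through the augmented discriminator input,
    # which only works because the augmentations are differentiable.
    fake = G(z).view(-1, 3, 32, 32)
    return F.softplus(-D(DiffAugment(fake, policy=policy))).mean()

real = torch.rand(8, 3, 32, 32) * 2 - 1  # placeholder batch of "real" images
z = torch.randn(8, 64)
print(d_loss(real, z).item(), g_loss(z).item())
```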
-
Deepspeed x Stylegan?
There are some repos I've looked at adding DeepSpeed to, such as DiffAugment-stylegan2-pytorch, lucidrains/stylegan2-pytorch, and eps696/stylegan2 (which is in TensorFlow, so it would need to be ported to PyTorch, as DeepSpeed only works with PyTorch right now).
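For what it's worth, here is a minimal sketch of what wiring DeepSpeed into a GAN training loop could look like (launched via the `deepspeed` launcher); the toy modules, config values, and loss form are placeholders and are not taken from any of the repos above.

```python
import torch
import torch.nn.functional as F
import deepspeed  # run with: deepspeed this_script.py

# Toy stand-ins; a real setup would use the StyleGAN2 generator/discriminator modules.
G = torch.nn.Sequential(torch.nn.Linear(128, 1024), torch.nn.Tanh())
D = torch.nn.Sequential(torch.nn.Linear(1024, 1))

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 2e-4, "betas": [0.0, 0.99]}},
}

# GANs need two engines because G and D are optimized separately.
g_engine, _, _, _ = deepspeed.initialize(model=G, model_parameters=G.parameters(), config=ds_config)
d_engine, _, _, _ = deepspeed.initialize(model=D, model_parameters=D.parameters(), config=ds_config)

for step in range(10):
    real = torch.rand(8, 1024, device=d_engine.device) * 2 - 1  # placeholder "real" batch
    z = torch.randn(8, 128, device=g_engine.device)

    # Discriminator update (non-saturating GAN loss).
    d_loss = F.softplus(d_engine(g_engine(z).detach())).mean() + F.softplus(-d_engine(real)).mean()
    d_engine.backward(d_loss)
    d_engine.step()

    # Generator update; freeze D so its gradients aren't polluted by this backward pass.
    for p in D.parameters():
        p.requires_grad_(False)
    g_loss = F.softplus(-d_engine(g_engine(z))).mean()
    g_engine.backward(g_loss)
    g_engine.step()
    for p in D.parameters():
        p.requires_grad_(True)
```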
-
Model takes seconds to train per epoch with 1 accuracy
Here is the paper using GANs with few data points https://arxiv.org/abs/2006.10738
-
Looking for resources regarding GANs trained on my own stuff.
Hey, for image GANs, you can use differentiable data augmentation (https://github.com/mit-han-lab/data-efficient-gans) in case you have a reasonably sized dataset.
stylegan2-ada-pytorch
-
Samsung expected to report 80% profit plunge as losses mount at chip business
> there is really nothing that "normal" AI requires that is bound to CUDA. pyTorch and Tensorflow are backend agnostic (ideally...).
There are a lot of optimizations in CUDA that are nowhere near supported in other software or even hardware. Custom CUDA kernels also aren't as rare as one might think; they're often just hidden unless you're looking inside the libraries. The best-known example is probably StyleGAN[0], but it isn't uncommon elsewhere, even in research code. Swin has a CUDA kernel[1], and you can find it in torch itself[2] (GitHub reports that 4% of that code is CUDA, alongside 42% C++ and 2% C). These things are everywhere. I don't think PyTorch and TensorFlow could ever be truly backend agnostic; there will always be a difference simply because vendors spend resources differently (developing kernels takes engineering time). Intel MKL is evidence of this: it is still better than the open-source alternatives and has been for a long time. (A minimal sketch of what such an inline custom kernel looks like follows the links below.)
I really do want AMD to compete in this space. I'd even love a third player like Intel. We really do need competition here, but it would be naive to expect a quick catch-up: AMD has a lot of work to do, and posting a few bounties and starting a company (idk, called "micro grad"?) isn't going to solve the problem anytime soon.
And fwiw, I'm willing to bet that most AI companies would rather run in-house servers than rent from cloud service providers. The truth is that right now, just publishing is extremely correlated with compute infrastructure (it doesn't need to be, but with all the noise we've effectively said "fuck the poor" because rejecting is easy), and anyone building products has costly infrastructure.
[0] https://github.com/NVlabs/stylegan2-ada-pytorch/blob/d72cc7d...
[1] https://github.com/microsoft/Swin-Transformer/blob/2cb103f2d...
[2] https://github.com/pytorch/pytorch/tree/main/aten/src
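As an illustration of the kind of "hidden" custom kernel the comment above refers to, here is a minimal sketch (not taken from any of the linked repos; the kernel and names are invented for the example) of embedding a hand-written CUDA kernel in a PyTorch project with `torch.utils.cpp_extension.load_inline`. It needs a CUDA toolchain and GPU to compile and run.

```python
import torch
from torch.utils.cpp_extension import load_inline

# Hand-written CUDA kernel plus a C++ wrapper that launches it.
cuda_src = r"""
__global__ void scale_kernel(const float* x, float* y, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = x[i] * s;
}

torch::Tensor scale(torch::Tensor x, float s) {
    auto y = torch::empty_like(x);
    int n = x.numel();
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(x.data_ptr<float>(), y.data_ptr<float>(), s, n);
    return y;
}
"""

# Declaration only; load_inline generates the Python binding from it.
cpp_src = "torch::Tensor scale(torch::Tensor x, float s);"

ext = load_inline(name="toy_scale", cpp_sources=cpp_src, cuda_sources=cuda_src,
                  functions=["scale"], with_cuda=True, verbose=False)

x = torch.randn(1024, device="cuda")
print(torch.allclose(ext.scale(x, 2.0), 2.0 * x))  # True
```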
-
[R] StyleGAN2-ADA on Power 9?!
I am talking about the original Nvidia implementation here: https://github.com/NVlabs/stylegan2-ada-pytorch
-
This X Does Not Exist
I think you should be able to find a latent vector that returns a cat that is part of the original training data (or at least very close to it). Most of the outputs will not be real cats at all though. However, it's pretty simple to try and find the latent vector that reproduces a given image, e.g. https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/pr...
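A minimal sketch of that idea: freeze the generator and optimize a latent code by gradient descent until the generated output matches the target image. This is not the repo's projector, just an illustration with a toy generator and plain MSE instead of a perceptual loss.

```python
import torch

# Toy stand-in for a pretrained generator mapping a latent z to a flattened "image".
G = torch.nn.Sequential(torch.nn.Linear(512, 3 * 64 * 64), torch.nn.Tanh())
for p in G.parameters():
    p.requires_grad_(False)  # the generator stays frozen; only the latent is optimized

target = torch.rand(1, 3 * 64 * 64) * 2 - 1  # placeholder target image in [-1, 1]

z = torch.randn(1, 512, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    opt.zero_grad()
    recon = G(z)
    # Real projectors (e.g. the one linked above) typically use a perceptual loss
    # such as LPIPS / VGG features rather than raw pixel MSE.
    loss = torch.nn.functional.mse_loss(recon, target)
    loss.backward()
    opt.step()

print("final reconstruction error:", loss.item())
```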
-
[P] Frechet Inception Distance
One irritating flaw with FID is that scores are massively biased by the number of samples: the fewer samples you use, the larger the score. So to make comparisons fair, it's absolutely crucial to use the same number of samples. From what I've seen on standard benchmarks, it's pretty common now to compute Inception features for every single data point, but only for 50k samples from the generative models (for reference, off the top of my head, StyleGAN2-ADA does this; see Appendix A).
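For reference, FID is the Fréchet distance between Gaussians fitted to Inception features of the real and generated sets. Below is a minimal sketch of the final computation given two feature matrices (not any particular repo's implementation), together with a toy demonstration of the sample-size bias described above.

```python
import numpy as np
from scipy import linalg

def fid_from_features(feats_real, feats_fake):
    """Fréchet distance between Gaussians fitted to two (N x D) feature arrays."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean))

# Identical distributions, different sample counts: the smaller set scores worse.
rng = np.random.default_rng(0)
real = rng.normal(size=(10000, 64))
print(fid_from_features(real, rng.normal(size=(10000, 64))))  # close to 0
print(fid_from_features(real, rng.normal(size=(500, 64))))    # noticeably larger
```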
-
generating images
You can follow the development of StyleGAN from NVIDIA: https://github.com/NVlabs/stylegan2-ada-pytorch They have built datasets of human faces; maybe you can use faces with different expressions as classes and train a conditional GAN on your own classes.
-
What is the best GAN architecture for image data augmentation?
Given the lack of data, StyleGAN2-ADA by NVIDIA, which was specifically designed to handle small datasets, could be an option - https://github.com/NVlabs/stylegan2-ada-pytorch
-
City Does Not Exist
First, you have to collect a few thousand images of the same thing (maybe more or fewer, depending on how complex your subject is and how good the results should be). Then you train a generative adversarial network on those images to generate new ones. https://github.com/NVlabs/stylegan2-ada-pytorch works quite well. https://github.com/NVlabs/stylegan3 is supposedly even better, but I have not tried it yet.
- Modern Propaganda (this person does not exist)
-
From 53% to 95% acc - Real vs Fake Faces Classification | Fine-tuning EfficientNet (Github in comment)
What NVIDIA does when computing Perceptual Path Length is to center crop the faces before computing the metric. Here you can find the code to get an idea https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/metrics/perceptual_path_length.py
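A minimal sketch of that preprocessing step: crop the face region from both images of a pair before feeding them to a perceptual distance (here LPIPS from the `lpips` package). The crop fractions and image size are illustrative placeholders, not NVIDIA's exact values, so check the linked file for the real ones.

```python
import torch
import lpips  # pip install lpips

def crop_face_region(img, top=0.375, bottom=0.875, left=0.25, right=0.75):
    """Crop a fixed fractional window (illustrative values) from an NCHW image batch."""
    _, _, h, w = img.shape
    return img[:, :, int(top * h):int(bottom * h), int(left * w):int(right * w)]

dist_fn = lpips.LPIPS(net='vgg')  # expects images in [-1, 1]

# Placeholder pair of generated images (e.g. from two nearby latent codes).
img_a = torch.rand(1, 3, 256, 256) * 2 - 1
img_b = torch.rand(1, 3, 256, 256) * 2 - 1

d = dist_fn(crop_face_region(img_a), crop_face_region(img_b))
print(d.item())
```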
-
StyleGAN2 ADA Pytorch ends after tick 0 with no errors.
I'm trying to train StyleGAN2-ADA PyTorch (https://github.com/NVlabs/stylegan2-ada-pytorch) on my own dataset.
What are some alternatives?
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
stylegan3 - Official PyTorch implementation of StyleGAN3
Fast-SRGAN - A Fast Deep Learning Model to Upsample Low Resolution Videos to High Resolution at 30fps
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
SDEdit - PyTorch implementation for SDEdit: Image Synthesis and Editing with Stochastic Differential Equations
BigGAN-PyTorch - The author's officially unofficial PyTorch BigGAN implementation.
gansformer - Generative Adversarial Transformers
StyleFlow - StyleFlow: Attribute-conditioned Exploration of StyleGAN-generated Images using Conditional Continuous Normalizing Flows (ACM TOG 2021)
generative_inpainting - DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
lucid-sonic-dreams
cartoonize - A demo webapp to convert images and videos into cartoon!
spleeter - Deezer source separation library including pretrained models.