YOLO_Object_Detection vs data-efficient-gans

| | YOLO_Object_Detection | data-efficient-gans |
|---|---|---|
| Mentions | 2 | 9 |
| Stars | 1,709 | 1,261 |
| Stars growth (monthly) | - | 0.4% |
| Activity | 0.0 | 0.0 |
| Last commit | over 3 years ago | 6 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
YOLO_Object_Detection
-
Model takes seconds to train per epoch with 1 accuracy
But using GANs as a newcomer is maybe asking too much. See if you can find pretrained weights for YOLO and just fine-tune it: https://github.com/llSourcell/YOLO_Object_Detection
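Fine-tuning a pretrained detector like YOLO means keeping the pretrained backbone weights fixed and training only a small task-specific head on your data. A toy plain-Python sketch of that idea (all names are hypothetical stand-ins, not the actual YOLO or darkflow API):

```python
# Toy illustration of fine-tuning: "freeze" pretrained backbone weights
# and run gradient descent on the small task head only.
# Everything here is a hypothetical stand-in for a real network.

# Pretend these came from a pretrained checkpoint.
backbone_w = [0.5, -0.3, 0.8]   # frozen during fine-tuning
head_w = [0.0]                  # trained from scratch on the new task

def forward(x):
    # Backbone acts as a fixed feature extractor; head is one trainable weight.
    feat = sum(w * xi for w, xi in zip(backbone_w, x))
    return head_w[0] * feat

def finetune_step(x, y, lr=0.01):
    # Squared-error gradient step on the head only; backbone_w is never touched.
    feat = sum(w * xi for w, xi in zip(backbone_w, x))
    pred = head_w[0] * feat
    grad = 2 * (pred - y) * feat
    head_w[0] -= lr * grad

# Tiny "dataset": fit the head so forward(x) matches 2 * feat.
x = [1.0, 2.0, 3.0]
target = 2 * (0.5 * 1.0 - 0.3 * 2.0 + 0.8 * 3.0)
for _ in range(200):
    finetune_step(x, target)

print(round(head_w[0], 2))  # converges toward 2.0
```

The same pattern scales up: with far less data than training from scratch, only the head's parameters need to be learned.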
-
Plagiarism is just bad
is a blatant lie. He didn't modify anything but the README, as far as I am aware. He uses the exact same sentence in many of his cloned repos, see also this and this. But that's beside the point, because the right thing to do would be to fork a repository if you just want to make minor changes, or star it and share the original repo link on his YouTube channel if all he does is quick walkthroughs of the source code with arguably bad explanations.
data-efficient-gans
-
[D] Has anyone tried GAN "tricks" on VAEs?
Code for https://arxiv.org/abs/2006.10738 found: https://github.com/mit-han-lab/data-efficient-gans
-
What StyleGan model to use for a custom dataset of small size?
I would like to make a tiny project with GANs using some high-quality pictures of a single individual. I am planning to get around 500 of these and then x-flip them, but I am not sure which model to use for training. I used StyleGAN2-ADA for another project, which ended quite well, but there I had around 14k pictures; here the training set is much smaller, so I was thinking about using DiffAugment, which has seemingly promising results with just 100 images.
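The x-flip mentioned above simply mirrors each image horizontally, doubling the effective training set (500 photos become 1000 samples). A minimal stdlib-only sketch, representing an image as a list of pixel rows (real pipelines would do this on tensors):

```python
def x_flip(image):
    # Mirror an image (list of pixel rows) left-to-right.
    return [list(reversed(row)) for row in image]

def augment_with_flips(dataset):
    # Keep each original image and add its horizontal mirror,
    # doubling the effective training-set size.
    return dataset + [x_flip(img) for img in dataset]

# A tiny 2x3 "image": each entry is one pixel value.
img = [[1, 2, 3],
       [4, 5, 6]]

dataset = [img] * 500          # stand-in for ~500 photos
augmented = augment_with_flips(dataset)

print(len(augmented))          # 1000
print(x_flip(img))             # [[3, 2, 1], [6, 5, 4]]
```

Note that x-flips only make sense when the subject has no left/right asymmetry that matters (e.g. text or logos in the photos would be mirrored too).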
-
This Bot Crime Did Not Occur
I used a modified version of this repo, and there's also the official NVIDIA implementation, though neither has an official notebook. You can Google 'StyleGAN2 ADA Colab' and find a few starting points that way, but wait a few hours and I can clean up my notebook and post it here!
-
[P] Differentiable augmentation for GANs - Implementation and explanation
Paper: https://arxiv.org/abs/2006.10738
-
Deepspeed x Stylegan?
There are some repos I've looked at for adding DeepSpeed, such as DiffAugment-stylegan2-pytorch, lucidrains/stylegan2-pytorch, and eps696/stylegan2 (which is in TensorFlow, so it would need to be translated to PyTorch, as DeepSpeed only works with PyTorch right now).
-
Model takes seconds to train per epoch with 1 accuracy
Here is the paper using GANs with few data points https://arxiv.org/abs/2006.10738
-
Looking for resources regarding GANs trained on my own stuff.
Hey, for image GANs you can use differentiable data augmentation (https://github.com/mit-han-lab/data-efficient-gans) in case you have a reasonably sized dataset.
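The core trick behind DiffAugment is that the same random, differentiable augmentation is applied to both the real and the generated batch before the discriminator sees them, in both the D and G updates, so the discriminator can never memorize the small set of raw images while gradients still flow back to the generator. A conceptual stdlib-only sketch (not the mit-han-lab API; names and the toy "augmentation" are illustrative only):

```python
import random

def diff_augment(batch, rng):
    # Toy "augmentation": random horizontal flip of each image
    # (an image is a list of pixel rows). The real DiffAugment uses
    # differentiable color/translation/cutout transforms on tensors.
    out = []
    for image in batch:
        if rng.random() < 0.5:
            image = [list(reversed(row)) for row in image]
        out.append(image)
    return out

def d_loss(discriminator, reals, fakes, rng):
    # The discriminator is trained only on augmented views of BOTH
    # batches, so it cannot simply memorize the few raw training images.
    reals_aug = diff_augment(reals, rng)
    fakes_aug = diff_augment(fakes, rng)
    real_scores = [discriminator(x) for x in reals_aug]
    fake_scores = [discriminator(x) for x in fakes_aug]
    return sum(fake_scores) / len(fakes) - sum(real_scores) / len(reals)

# Dummy discriminator: mean pixel value (stands in for a real network).
disc = lambda img: sum(sum(row) for row in img) / sum(len(row) for row in img)

rng = random.Random(0)
reals = [[[1, 0], [0, 1]]] * 4
fakes = [[[0, 0], [0, 0]]] * 4
print(d_loss(disc, reals, fakes, rng))  # flips preserve the mean pixel: -0.5
```

In the real implementation the transforms are differentiable tensor ops, so the same `diff_augment` call sits inside the generator's loss as well and gradients pass straight through it.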
What are some alternatives?
TensorFlow-Tutorials - TensorFlow Tutorials with YouTube Videos
stylegan2-ada-pytorch - StyleGAN2-ADA - Official PyTorch implementation
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
Fast-SRGAN - A Fast Deep Learning Model to Upsample Low Resolution Videos to High Resolution at 30fps
SDEdit - PyTorch implementation for SDEdit: Image Synthesis and Editing with Stochastic Differential Equations
gansformer - Generative Adversarial Transformers
generative_inpainting - DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
cartoonize - A demo webapp to convert images and videos into cartoons!
DCGAN-LSGAN-WGAN-GP-DRAGAN-Tensorflow-2
ESRGAN - ECCV18 Workshops - Enhanced SRGAN. Champion PIRM Challenge on Perceptual Super-Resolution. The training codes are in BasicSR.