big-sleep-examples
Example code and images for programmatically generating images using Big Sleep (CLIP + BigGAN). (by thehappydinoa)
PyTorch-StudioGAN
StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. (by POSTECH-CVLab)
| | big-sleep-examples | PyTorch-StudioGAN |
|---|---|---|
| Mentions | 1 | 9 |
| Stars | 1 | 3,369 |
| Growth | - | 0.1% |
| Activity | 1.8 | 6.1 |
| Latest commit | over 2 years ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
big-sleep-examples
Posts with mentions or reviews of big-sleep-examples. We have used some of these posts to build our list of alternatives and similar projects.
PyTorch-StudioGAN
Posts with mentions or reviews of PyTorch-StudioGAN. We have used some of these posts to build our list of alternatives and similar projects.
- [R] GigaGAN: A Large-scale Modified GAN Architecture for Text-to-Image Synthesis. Better FID score than Stable Diffusion v1.5, DALL·E 2, and Parti-750M. Generates 512px outputs in 0.13s. Native prompt mixing, prompt interpolation, and style mixing. A GigaGAN upscaler is also introduced (up to 4K).
Given the first author I'd expect it to land in StudioGAN sometime in the future. Training it from scratch will definitely be costly though.
- [P] Implementations of 30 representative GANs and Comprehensive Benchmark for GAN, AR, and Diffusion Models (link in comments).
Github Link: https://github.com/POSTECH-CVLab/PyTorch-StudioGAN
Paper Link: https://arxiv.org/abs/2206.09479

I would like to introduce the PyTorch-StudioGAN library, which I have been maintaining for the past two years. StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze new ideas. Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and diffusion models (LSGM++, CLD-SGM, ADM-G-U).

Features:
* Coverage: StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives.
* Flexibility: Each modularized option is managed through a configuration system driven by a YAML file, so users can train a large combination of GANs by mixing and matching distinct options.
* Reproducibility: With StudioGAN, users can compare and debug various GANs in a unified computing environment without worrying about hidden details and tricks.
* Plentifulness: StudioGAN provides a large collection of pre-trained GAN models, training logs, and evaluation results.
* Versatility: StudioGAN supports 5 types of acceleration methods with synchronized batch normalization for training: single-GPU training, data-parallel training (DP), distributed data-parallel training (DDP), multi-node distributed data-parallel training (MDDP), and mixed-precision training.
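The mix-and-match configuration idea described in the post can be sketched in plain Python. The option names below are hypothetical illustrations, not StudioGAN's actual config keys; they only show how independently chosen modules multiply into many trainable GAN variants:

```python
# Hypothetical sketch of a StudioGAN-style mix-and-match config system.
# Option names are illustrative, not the library's real YAML keys.
from itertools import product

ARCHITECTURES = ["BigGAN", "StyleGAN2"]       # backbone choices
LOSSES = ["hinge", "wgan-gp"]                 # adversarial loss choices
CONDITIONING = ["cBN", "projection"]          # conditioning-method choices

def make_config(arch, loss, cond):
    """Combine independently chosen modules into one training config."""
    return {"backbone": arch, "adv_loss": loss, "conditioning": cond}

# Enumerating every combination shows how a modular config system
# yields many distinct GAN variants from a few orthogonal options.
configs = [make_config(a, l, c)
           for a, l, c in product(ARCHITECTURES, LOSSES, CONDITIONING)]
print(len(configs))  # 2 * 2 * 2 = 8 distinct variants
```

In the real library each such combination would live in its own YAML file; the point here is only that options are orthogonal, so the number of trainable setups grows multiplicatively.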
- [P], [R] Implementations of 30 Representative GANs and Comprehensive Benchmark for GAN, AR, and Diffusion Models (link in comments).
- [P] Implementations of 37 GAN-related papers using PyTorch including BigGAN and StyleGAN2-ADA (link in comment)
- [P] 40 Implementations of GAN-related papers including BigGAN and StyleGAN2 in a unified training pipeline
- [R] Rebooting ACGAN: A new GAN that achieves SOTA results and harmonizes with various architectures, adversarial losses, and even differentiable augmentations (NeurIPS 2021).
Code for https://arxiv.org/abs/2111.01118 found at: https://github.com/POSTECH-CVLab/PyTorch-StudioGAN
- [N] LAMA AI's weekly news, updates, and events.
StudioGAN is introduced: A PyTorch library for SoTA GAN models
- PyTorch GAN library that provides implementations of 18+ SOTA GANs with pre-trained models, configs, logs, and checkpoints (link in comments)
Github: https://github.com/POSTECH-CVLab/PyTorch-StudioGAN
What are some alternatives?
When comparing big-sleep-examples and PyTorch-StudioGAN you can also consider the following projects:
MaixPy-v1_scripts - micropython scripts for MaixPy
awesome-colab-notebooks - Collection of Google Colaboratory notebooks for fast and easy experiments
BigGAN-PyTorch - The author's officially unofficial PyTorch BigGAN implementation.
stylegan2-pytorch - Implementation of Analyzing and Improving the Image Quality of StyleGAN (StyleGAN 2) in PyTorch
stylegan3-editing - Official Implementation of "Third Time's the Charm? Image and Video Editing with StyleGAN3" (AIM ECCVW 2022) https://arxiv.org/abs/2201.13433
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
anycost-gan - [CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing