gansformer
pytorch-generative
| | gansformer | pytorch-generative |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 1,302 | 400 |
| Growth | - | - |
| Activity | 1.8 | 3.4 |
| Latest commit | almost 2 years ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
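The exact formula behind the activity score is not published; the snippet below is only a hypothetical sketch of such a recency-weighted metric, with the 90-day half-life and exponential decay chosen arbitrarily for illustration:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=90.0):
    """Hypothetical recency-weighted commit score: each commit counts
    for less the older it is, halving every `half_life_days` days."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# A few recent commits outweigh many old ones under this weighting.
recent = [datetime.now(timezone.utc) - timedelta(days=n) for n in (1, 5, 10)]
stale = [datetime.now(timezone.utc) - timedelta(days=n) for n in (400, 500, 600, 700)]
print(activity_score(recent) > activity_score(stale))  # True
```

Scores like this are then ranked across all tracked projects to produce the percentile-style number shown in the table above.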
gansformer
-
[D] GANs + Transformer = SOTA compositional generator? Compositional Transformers for Scene Generation explained (5-minute summary by Casual GAN Papers)
Code for https://arxiv.org/abs/2111.08960 found: https://github.com/dorarad/gansformer
-
Generative Adversarial Transformers [R]
As for whether the Ys are shared across layers, check the code.
-
[Project] These players do not exist
I tested gansformer (https://github.com/dorarad/gansformer) to generate football player faces. Here are some selected results (actually, some of the images are real players):
-
GANsformers: Scene Generation with Generative Adversarial Transformers
References: Paper: https://arxiv.org/pdf/2103.01209.pdf | Code: https://github.com/dorarad/gansformer | Complete reference: Drew A. Hudson and C. Lawrence Zitnick, Generative Adversarial Transformers (2021), published on arXiv.
-
[R] Generative Adversarial Transformers (2103.01209)
https://github.com/dorarad/gansformer/blob/148f72964219f8ead2621204bc5cfa89200b6879/training/network.py#L461
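For context, the detail these threads are debating concerns the paper's bipartite attention, in which a small set of latent components Y modulates the image feature grid X at each synthesis layer. A minimal PyTorch sketch of that cross-attention step is below; the linked repository's own implementation differs (it adds gating, normalization, and duplex updates), and all names and shapes here are illustrative only:

```python
import torch
import torch.nn.functional as F

def simplex_attention(x, y, wq, wk, wv):
    """Latents y update image features x via cross-attention.
    x: (N, HW, d) grid features; y: (N, k, d) latent components;
    wq/wk/wv: (d, d) projections. A sketch of the idea only."""
    q = x @ wq                                  # queries from the image grid
    k, v = y @ wk, y @ wv                       # keys/values from the latents
    att = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return x + att @ v                          # residual update of the grid

# Toy shapes: 16 latents attending over an 8x8 feature grid.
d = 32
x = torch.randn(2, 64, d)
y = torch.randn(2, 16, d)
wq, wk, wv = (torch.randn(d, d) for _ in range(3))
out = simplex_attention(x, y, wq, wk, wv)       # (2, 64, d)
```

Whether a single Y tensor is reused across layers or each layer gets its own is exactly the detail the thread defers to the linked network.py for.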
pytorch-generative
What are some alternatives?
SteganoGAN - SteganoGAN is a tool for creating steganographic images using adversarial training.
animegan2-pytorch - PyTorch implementation of AnimeGANv2
Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - [ECCV 2022] Compositional Generation using Diffusion Models
Basic-UI-for-GPT-J-6B-with-low-vram - A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Loading the model requires 12 GB of free RAM.
data-efficient-gans - [NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
vq-vae-2-pytorch - Implementation of Generating Diverse High-Fidelity Images with VQ-VAE-2 in PyTorch
long-range-arena - Long Range Arena for Benchmarking Efficient Transformers
score_sde - Official code for Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2021, Oral)
gnn-lspe - Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
naver-webtoon-faces - Generative models on NAVER Webtoon faces
icl-ceil - [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning".
smaller-transformers - Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0.