Gansformer Alternatives
Similar projects and alternatives to gansformer
-
SteganoGAN
SteganoGAN is a tool for creating steganographic images using adversarial training.
-
Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
[ECCV 2022] Compositional Generation using Diffusion Models
-
data-efficient-gans
[NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
-
icl-ceil
[ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”.
-
long-range-arena
Long Range Arena for Benchmarking Efficient Transformers
-
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
-
pytorch-CycleGAN-and-pix2pix
Image-to-Image Translation in PyTorch
gansformer reviews and mentions
-
[D] GANs + Transformer = SOTA compositional generator? Compositional Transformers for Scene Generation explained (5-minute summary by Casual GAN Papers)
arxiv / code
Code for https://arxiv.org/abs/2111.08960 found: https://github.com/dorarad/gansformer
-
[R] Generative Adversarial Transformers (2103.01209)
Abstract: We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image, while maintaining computation of linear efficiency, that can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. We demonstrate the model's strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing it achieves state-of-the-art results in terms of image quality and diversity, while enjoying fast learning and better data-efficiency. Further qualitative and quantitative experiments offer us an insight into the model's inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach. An implementation of the model is available at https://github.com/dorarad/gansformer.
https://github.com/dorarad/gansformer/blob/148f72964219f8ead2621204bc5cfa89200b6879/training/network.py#L461
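The core idea in the abstract, a small set of latents attending over the image grid and modulating it multiplicatively, can be sketched in a few lines of PyTorch. This is a simplified illustration under assumed shapes and names (`BipartiteAttention`, a single attention head, no layer norm), not the repository's actual implementation:

```python
import torch
import torch.nn as nn

class BipartiteAttention(nn.Module):
    """Toy sketch of GANsformer-style bipartite attention: K latent
    variables act as keys/values for every image-feature position,
    and the resulting update modulates the features multiplicatively
    (illustrative only; see the linked repo for the real network)."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)  # queries from image features
        self.to_k = nn.Linear(dim, dim)  # keys from latents
        self.to_v = nn.Linear(dim, dim)  # values from latents
        self.scale = dim ** -0.5

    def forward(self, features, latents):
        # features: (B, N, dim) flattened image grid; latents: (B, K, dim)
        q = self.to_q(features)
        k = self.to_k(latents)
        v = self.to_v(latents)
        # Each of the N positions attends only to the K latents, so the
        # cost is O(N * K): linear in image size, since K stays small.
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        update = attn @ v  # (B, N, dim)
        # Multiplicative integration: region-wise gain on the features
        # rather than an additive residual (StyleGAN-like modulation).
        return features * (1 + update)

B, N, K, dim = 2, 64, 16, 32
layer = BipartiteAttention(dim)
out = layer(torch.randn(B, N, dim), torch.randn(B, K, dim))
print(out.shape)  # torch.Size([2, 64, 32])
```

Because the latents are few and shared across the grid, this bipartite pattern avoids the quadratic cost of full self-attention while still giving every pixel access to global, object-like structure.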
-
Stats
dorarad/gansformer is an open-source project licensed under the MIT License, an OSI-approved license. Its primary programming language is Python.