gansformer
long-range-arena
| | gansformer | long-range-arena |
|---|---|---|
| Mentions | 7 | 6 |
| Stars | 1,302 | 682 |
| Growth | - | 2.9% |
| Activity | 1.8 | 0.0 |
| Last commit | almost 2 years ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gansformer
-
[D] GANs + Transformer = SOTA compositional generator? Compositional Transformers for Scene Generation explained (5-minute summary by Casual GAN Papers)
Code for https://arxiv.org/abs/2111.08960 found: https://github.com/dorarad/gansformer
-
Generative Adversarial Transformers [R]
As for whether the Ys are shared across layers, check the code.
-
[Project] These players does not exist
I tested the gansformer (https://github.com/dorarad/gansformer) to generate football player faces. Here are some selected results (actually some of the images are real players):
-
GANsformers: Scene Generation with Generative Adversarial Transformers
References: Paper: https://arxiv.org/pdf/2103.01209.pdf Code: https://github.com/dorarad/gansformer Complete reference: Drew A. Hudson and C. Lawrence Zitnick, Generative Adversarial Transformers (2021), published on arXiv.
-
[R] Generative Adversarial Transformers (2103.01209)
https://github.com/dorarad/gansformer/blob/148f72964219f8ead2621204bc5cfa89200b6879/training/network.py#L461
long-range-arena
-
The Secret Sauce behind 100K context window in LLMs: all tricks in one place
https://github.com/google-research/long-range-arena
-
[R] The Annotated S4: Efficiently Modeling Long Sequences with Structured State Spaces
The Structured State Space for Sequence Modeling (S4) architecture is a new approach to very long-range sequence modeling tasks for vision, language, and audio, showing a capacity to capture dependencies over tens of thousands of steps. Especially impressive are the model's results on the challenging Long Range Arena benchmark, showing an ability to reason over sequences of up to 16,000+ elements with high accuracy.
-
[D] Is there a repo on which many light-weight self-attention mechanism are introduced?
1.1 Long Range Arena: A Benchmark for Efficient Transformers. From the authors of the above; they propose a benchmark for modeling long-range interactions, and it also includes a repository.
- [R] Google's H-Transformer-1D: Fast One-Dimensional Hierarchical Attention With Linear Complexity for Long Sequence Processing
- [2107.11906] H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
-
[R][D] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Zhou et al., AAAI'21 Best Paper. ProbSparse self-attention reduces complexity to O(n log n), a generative-style decoder obtains the sequence output in one step, and self-attention distilling further reduces memory.
I think the paper is written in a clear style and I like that the authors included many experiments, including hyperparameter effects, ablations and extensive baseline comparisons. One thing I would have liked is them comparing their Informer to more efficient transformers (they compared only against logtrans and reformer) using the LRA (https://github.com/google-research/long-range-arena) benchmark.
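The ProbSparse idea mentioned above can be sketched in a few lines: score each query by a max-minus-mean "sparsity" measurement, let only the top-u queries attend over the keys, and give the remaining queries the mean of the values. This is a minimal illustrative sketch, not the Informer authors' implementation; for clarity it computes the score matrix densely, whereas the paper samples O(log n) keys per query to reach the O(n log n) bound, and all names here are invented for the example.

```python
import numpy as np

def probsparse_attention(Q, K, V, u=None):
    """Illustrative sketch of ProbSparse-style self-attention.

    Only the top-u "active" queries (highest max-minus-mean score)
    attend over all keys; the remaining "lazy" queries fall back
    to the mean of V, as in the Informer paper's description.
    """
    n, d = Q.shape
    if u is None:
        u = max(1, int(np.ceil(np.log(n))))      # O(log n) active queries

    scores = Q @ K.T / np.sqrt(d)                # dense here for clarity only
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(-sparsity)[:u]              # indices of active queries

    out = np.tile(V.mean(axis=0), (n, 1))        # lazy queries: mean of values
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    out[top] = (w / w.sum(axis=1, keepdims=True)) @ V  # softmax attention
    return out
```

With n queries and u = O(log n) of them attending (over sampled keys, in the real method), the dominant cost drops from O(n^2) to O(n log n), which is the complexity claim in the post title above.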
What are some alternatives?
pytorch-generative - Easy generative modeling in PyTorch.
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in PyTorch
SteganoGAN - SteganoGAN is a tool for creating steganographic images using adversarial training.
attention-is-all-you-need-pytorch - A PyTorch implementation of the Transformer model in "Attention is All You Need".
Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - [ECCV 2022] Compositional Generation using Diffusion Models
HJxB - Continuous-Time/State/Action Fitted Value Iteration via Hamilton-Jacobi-Bellman (HJB)
data-efficient-gans - [NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
gnn-lspe - Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
tldr-transformers - The "tl;dr" on a few notable transformer papers (pre-2022).
icl-ceil - [ICML 2023] Code for our paper āCompositional Exemplars for In-context Learningā.
elegy - A High Level API for Deep Learning in JAX