| | mixture-of-experts | routing-transformer |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 525 | 263 |
| Growth | - | - |
| Activity | 4.1 | 0.0 |
| Last commit | 8 months ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mixture-of-experts
How to Go beyond Data Parallelism and Model Parallelism: Talking from GShard [R]
Code for https://arxiv.org/abs/2006.16668 found: https://github.com/lucidrains/mixture-of-experts
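The GShard paper linked above scales Transformers by swapping feed-forward blocks for sparsely gated mixtures of experts: a learned router sends each token to its top-2 experts, so parameter count grows without a matching growth in per-token compute. Below is a minimal PyTorch sketch of that top-2 routing idea; `SimpleMoE` and its arguments are invented for illustration and are not the API of lucidrains/mixture-of-experts (which also adds expert-capacity limits and an auxiliary load-balancing loss).

```python
import torch
from torch import nn

class SimpleMoE(nn.Module):
    """Top-2 gated mixture of experts (illustrative, hypothetical API)."""
    def __init__(self, dim, num_experts=4, hidden_dim=None):
        super().__init__()
        hidden_dim = hidden_dim or dim * 4
        self.gate = nn.Linear(dim, num_experts, bias=False)  # the router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        # x: (batch, seq, dim) -> route each token independently
        tokens = x.reshape(-1, x.shape[-1])
        probs = self.gate(tokens).softmax(dim=-1)      # (tokens, num_experts)
        top2_p, top2_i = probs.topk(2, dim=-1)         # best two experts per token
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            for slot in range(2):
                mask = top2_i[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += top2_p[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(x.shape)

x = torch.randn(2, 16, 64)
moe = SimpleMoE(dim=64, num_experts=4)
print(moe(x).shape)  # torch.Size([2, 16, 64])
```

Because each token only activates two experts, adding more experts increases capacity while leaving the per-token FLOPs roughly constant.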
routing-transformer
Will Transformers Take over Artificial Intelligence?
I would recommend Routing Transformer https://github.com/lucidrains/routing-transformer but the real truth is nothing beats full attention. Luckily, someone recently figured out how to get past the memory bottleneck. https://github.com/lucidrains/memory-efficient-attention-pyt...
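The memory-bottleneck fix mentioned in that comment, "Self-attention Does Not Need O(n²) Memory", computes exact softmax attention by streaming over key/value chunks with a running log-sum-exp, so the full n×n score matrix is never materialized. A minimal sketch of that chunking trick follows; `chunked_attention` is a hypothetical name for illustration, not the API of lucidrains/memory-efficient-attention-pytorch.

```python
import torch

def chunked_attention(q, k, v, chunk=64):
    """Exact softmax attention without the (seq x seq) score matrix.

    q, k, v: (batch, seq, dim). Keys/values are consumed in chunks while a
    running log-sum-exp keeps the partial outputs correctly normalized.
    """
    q = q * q.shape[-1] ** -0.5
    acc = torch.zeros_like(q)                        # running weighted sum of values
    lse = torch.full(q.shape[:-1], float('-inf'), device=q.device)  # running log-sum-exp
    for i in range(0, k.shape[1], chunk):
        kc, vc = k[:, i:i + chunk], v[:, i:i + chunk]
        scores = q @ kc.transpose(-2, -1)            # (batch, seq, chunk) only
        new_lse = torch.logaddexp(lse, scores.logsumexp(dim=-1))
        # rescale what we have so far, then add this chunk's contribution
        acc = acc * (lse - new_lse).exp().unsqueeze(-1) \
            + (scores - new_lse.unsqueeze(-1)).exp() @ vc
        lse = new_lse
    return acc

q, k, v = (torch.randn(1, 256, 64) for _ in range(3))
out = chunked_attention(q, k, v)
full = torch.softmax((q @ k.transpose(-2, -1)) * 64 ** -0.5, dim=-1) @ v
print(torch.allclose(out, full, atol=1e-4))  # True: same result, less memory
```

The result matches full attention to numerical precision; only the peak memory changes, from O(n²) for the score matrix to O(n·chunk).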
What are some alternatives?
uformer-pytorch - Implementation of Uformer, Attention-based Unet, in Pytorch
tab-transformer-pytorch - Implementation of TabTransformer, attention network for tabular data, in Pytorch
conformer - Implementation of the convolutional module from the Conformer paper, for use in Transformers
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
enformer-pytorch - Implementation of Enformer, Deepmind's attention network for predicting gene expression, in Pytorch
vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
LOGICGUIDE - Plug-and-play implementation of "Certified Reasoning with Language Models", which claims to improve model reasoning by 40%