routing-transformer vs tab-transformer-pytorch

| | routing-transformer | tab-transformer-pytorch |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 288 | 818 |
| Growth (stars, month over month) | 1.4% | - |
| Activity | 0.0 | 4.5 |
| Latest commit | over 3 years ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
routing-transformer
Will Transformers Take over Artificial Intelligence?
I would recommend Routing Transformer https://github.com/lucidrains/routing-transformer but the real truth is nothing beats full attention. Luckily, someone recently figured out how to get past the memory bottleneck. https://github.com/lucidrains/memory-efficient-attention-pyt...
tab-transformer-pytorch
[P] pytorch-widedeep v1.0.9: the Perceiver and the FastFormer for tabular data are now available in the library
Code for https://arxiv.org/abs/2012.06678 found: https://github.com/lucidrains/tab-transformer-pytorch
What are some alternatives?
tabnet - PyTorch implementation of the TabNet paper: https://arxiv.org/pdf/1908.07442.pdf
conformer - Implementation of the convolutional module from the Conformer paper, for use in Transformers
rtdl - Research on Tabular Deep Learning (Python package & papers) [Moved to: https://github.com/Yura52/rtdl]
vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
HTM-pytorch - Implementation of Hierarchical Transformer Memory (HTM) for Pytorch
Compact-Transformers - Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" (see the sketch after this list)
Multimodal-Toolkit - Multimodal model for text and tabular data with HuggingFace transformers as building block for text data
enformer-pytorch - Implementation of Enformer, Deepmind's attention network for predicting gene expression, in Pytorch
perceiver-pytorch - Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch
