| | graphtransformer | performer-pytorch |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 804 | 1,055 |
| Growth | 3.5% | - |
| Activity | 0.0 | 1.8 |
| Last commit | almost 3 years ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
graphtransformer
[D] Laplacian positional encodings
Code for https://arxiv.org/abs/2012.09699 found: https://github.com/graphdeeplearning/graphtransformer
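The Laplacian positional encodings referenced in that thread come from the paper's use of graph Laplacian eigenvectors as node position signals: take the k smallest non-trivial eigenvectors of the normalized Laplacian and randomly flip their signs during training, since eigenvector signs are arbitrary. Below is a minimal NumPy sketch of the idea, not the repo's own code; the function name `laplacian_pe` is my own choice.

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Positional encodings from the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}. `adj` is a dense (n, n) adjacency
    matrix; returns an (n, k) array of node encodings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order; skip the trivial
    # constant eigenvector (eigenvalue 0) and keep the next k.
    eigvals, eigvecs = np.linalg.eigh(lap)
    pe = eigvecs[:, 1:k + 1]
    # Eigenvector signs are arbitrary, so the paper flips them randomly
    # during training; at inference you would fix a sign convention.
    pe = pe * np.random.choice([-1.0, 1.0], size=(1, k))
    return pe

# Toy usage: a 4-cycle graph with 2-dimensional positional encodings.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(laplacian_pe(adj, k=2).shape)  # (4, 2)
```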
Resources for graphs and graph based neural networks.
Hi everyone. I am Harshit, and I am looking for resources on GNNs. I am a beginner, so could you please help me find some? I am working on Transformer-based graph networks. [Here](https://github.com/graphdeeplearning/graphtransformer) is the link to the implementation of the paper I am working with.
performer-pytorch
[R] Rotary Positional Embeddings - a new relative positional embedding for Transformers that significantly improves convergence (20-30%) and works for both regular and efficient attention
Performer is the best linear-attention variant, but linear attention is just one type of efficient-attention solution. I already have rotary embeddings in the repo https://github.com/lucidrains/performer-pytorch, and you can witness the effect yourself by toggling them on and off.
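To try that toggle yourself, here is a minimal sketch; `rotary_position_emb` is what I believe the relevant `PerformerLM` keyword is called, so treat the exact flag name as an assumption and check the repo's README if it errors.

```python
import torch
from performer_pytorch import PerformerLM

# `rotary_position_emb` is, as far as I can tell, the rotary-embedding
# switch in this repo; verify the name against the current README.
model = PerformerLM(
    num_tokens = 20000,
    max_seq_len = 1024,
    dim = 512,
    depth = 6,
    heads = 8,
    causal = True,
    rotary_position_emb = True,   # flip to False to compare convergence
)

x = torch.randint(0, 20000, (1, 1024))
logits = model(x)   # (1, 1024, 20000)
```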
Why has Google's Performer model not replaced traditional softmax attention?
Here's a PyTorch implementation if you want to play around with it: lucidrains/performer-pytorch: An implementation of Performer, a linear attention-based transformer, in Pytorch (github.com)
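The repo also exposes Performer's FAVOR+ attention as a standalone module, which makes experimenting easy. A minimal sketch following the `SelfAttention` interface as shown in the README (argument names are from memory, so verify them there):

```python
import torch
from performer_pytorch import SelfAttention

# Performer attention as a drop-in layer; dim, heads, and causal mirror
# the usual multi-head attention hyperparameters.
attn = SelfAttention(dim = 512, heads = 8, causal = False)

x = torch.randn(1, 1024, 512)
out = attn(x)   # (1, 1024, 512), linear in sequence length
```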
What are some alternatives?
spektral - Graph Neural Networks with Keras and Tensorflow 2.
long-range-arena - Long Range Arena for Benchmarking Efficient Transformers
gnn-lspe - Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Perceiver - Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow
GraphGPS - Recipe for a General, Powerful, Scalable Graph Transformer
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
deep-implicit-attention - Implementation of deep implicit attention in PyTorch
scenic - Scenic: A Jax Library for Computer Vision Research and Beyond
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification