LFattNet VS performer-pytorch

Compare LFattNet vs performer-pytorch and see how they differ.

LFattNet

Attention-based View Selection Networks for Light-field Disparity Estimation (by LIAGM)

performer-pytorch

An implementation of Performer, a linear attention-based transformer, in Pytorch (by lucidrains)
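As a rough illustration of how performer-pytorch is used, here is a minimal sketch based on the project's README; argument names such as dim, depth, heads, and causal are taken from that README and may differ between versions.

    import torch
    from performer_pytorch import Performer

    # Performer encoder: attention cost grows roughly linearly with
    # sequence length via the FAVOR+ softmax-kernel approximation,
    # instead of quadratically as in standard attention.
    model = Performer(
        dim = 512,      # model/embedding dimension
        depth = 1,      # number of transformer layers
        heads = 8,      # attention heads
        causal = True   # autoregressive masking
    )

    x = torch.randn(1, 2048, 512)  # (batch, sequence, dim)
    out = model(x)                 # (1, 2048, 512)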
                 LFattNet            performer-pytorch
Mentions         1                   2
Stars            53                  1,055
Growth           -                   -
Activity         0.0                 1.8
Latest commit    over 3 years ago    about 2 years ago
Language         Python              Python
License          MIT License         MIT License
Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

LFattNet

Posts with mentions or reviews of LFattNet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-05-17.

performer-pytorch

Posts with mentions or reviews of performer-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-21.

What are some alternatives?

When comparing LFattNet and performer-pytorch, you can also consider the following projects:

attention-is-all-you-need-pytorch - A PyTorch implementation of the Transformer model in "Attention is All You Need".

long-range-arena - Long Range Arena for Benchmarking Efficient Transformers

fashion-mnist - An MNIST-like fashion product database and benchmark

Perceiver - Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow

DenseDepth - High Quality Monocular Depth Estimation via Transfer Learning

memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"

Meta-SelfLearning - Meta Self-learning for Multi-Source Domain Adaptation: A Benchmark

reformer-pytorch - Reformer, the efficient Transformer, in Pytorch

vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch

scenic - Scenic: A Jax Library for Computer Vision Research and Beyond

deep-implicit-attention - Implementation of deep implicit attention in PyTorch