EfficientFormer
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] (by snap-research)
Efficient-AI-Backbones
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab. (by huawei-noah)
| | EfficientFormer | Efficient-AI-Backbones |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 944 | 3,816 |
| Growth | 0.7% | 1.5% |
| Activity | 3.3 | 5.8 |
| Last commit | 9 months ago | 9 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
EfficientFormer
Posts with mentions or reviews of EfficientFormer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-16.
- A look at Apple’s new Transformer-powered predictive text model

  I'm pretty fatigued on constantly providing references and sources in this thread, but an example of what they've made publicly available: https://github.com/snap-research/EfficientFormer
- Snap and Northeastern University Researchers Propose EfficientFormer: A Vision Transformer That Runs As Fast As MobileNet While Maintaining High Performance

  Continue reading | Check out the paper, github
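The headline claim, a Vision Transformer matching MobileNet latency, is easy to try for yourself. Below is a minimal sketch, assuming a `timm` release that registers the EfficientFormer models; the model name and availability are assumptions on my part, not details taken from the post:

```python
import torch
import timm  # assumption: a timm version that ships the EfficientFormer models

# "efficientformer_l1" is the smallest variant registered in timm;
# pass pretrained=True to fetch ImageNet-1k weights if they are hosted.
model = timm.create_model("efficientformer_l1", pretrained=False)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # standard 224x224 RGB input

print(logits.shape)  # torch.Size([1, 1000]), ImageNet-1k classes
```

For the authors' training code, exported CoreML/ONNX models, and the V2 variants, see the repository linked above.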
Efficient-AI-Backbones
Posts with mentions or reviews of Efficient-AI-Backbones. We have used some of these posts to build our list of alternatives and similar projects.
- Researchers From China Introduce Vision GNN (ViG): A Graph Neural Network For Computer Vision Systems

  Continue reading | Check out the paper, github
- GNN for computer vision, beating CNN & Transformer
- GNN can also work well on computer vision

  Vision GNN: An Image is Worth Graph of Nodes

  Network architecture plays a key role in deep-learning-based computer vision systems. The widely used convolutional neural networks and transformers treat the image as a grid or a sequence, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on the graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: a Grapher module with graph convolution for aggregating and updating graph information, and an FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built in different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code will be available at https://github.com/huawei-noah/CV-Backbones.
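The abstract is enough to sketch the two modules in code. The following is a rough, hypothetical PyTorch illustration of one ViG block, not the authors' implementation: the kNN graph construction, the relative max-aggregation inside `Grapher`, and all dimensions are simplifying assumptions of mine.

```python
import torch
import torch.nn as nn

class Grapher(nn.Module):
    """Hypothetical Grapher sketch: each patch (node) aggregates features
    from its k nearest neighbours in feature space via graph convolution."""
    def __init__(self, dim, k=9):
        super().__init__()
        self.k = k
        self.fc_in = nn.Linear(dim, dim)
        self.fc_out = nn.Linear(2 * dim, dim)

    def forward(self, x):                               # x: (B, N, dim)
        x = self.fc_in(x)
        # build a kNN graph from pairwise feature distances
        dist = torch.cdist(x, x)                        # (B, N, N)
        idx = dist.topk(self.k, largest=False).indices  # (B, N, k)
        neighbours = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1),          # (B, N, N, dim)
            2, idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))   # (B, N, k, dim)
        # aggregate: max over (neighbour - node) feature differences
        agg = (neighbours - x.unsqueeze(2)).max(dim=2).values
        return self.fc_out(torch.cat([x, agg], dim=-1))

class FFN(nn.Module):
    """Two linear layers for node feature transformation, as in the abstract."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim * expansion), nn.GELU(),
                                 nn.Linear(dim * expansion, dim))

    def forward(self, x):
        return x + self.net(x)  # residual connection around the MLP

class ViGBlock(nn.Module):
    """One block: Grapher (with residual) followed by FFN."""
    def __init__(self, dim, k=9):
        super().__init__()
        self.grapher, self.ffn = Grapher(dim, k), FFN(dim)

    def forward(self, x):
        return self.ffn(x + self.grapher(x))

block = ViGBlock(dim=64)
out = block(torch.randn(2, 196, 64))  # 196 nodes, e.g. 14x14 patches
print(out.shape)  # torch.Size([2, 196, 64])
```

A full ViG stacks such blocks, either at a single resolution (isotropic) or with downsampling stages (pyramid); see the linked repository for the real implementation.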
What are some alternatives?
When comparing EfficientFormer and Efficient-AI-Backbones you can also consider the following projects:
PyTorch-Model-Compare - Compare neural networks by their feature similarity
MPViT - [CVPR 2022] MPViT: Multi-Path Vision Transformer for Dense Prediction
dytox - Dynamic Token Expansion with Continual Transformers, accepted at CVPR 2022
FQ-ViT - [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
predictive-spy - Spying on Apple’s new predictive text model
transfiner - Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022