dytox
Dynamic Token Expansion with Continual Transformers, accepted at CVPR 2022 (by arthurdouillard)
EfficientFormer
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] (by snap-research)
| | dytox | EfficientFormer |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 132 | 947 |
| Monthly stars growth | - | 1.1% |
| Activity | 1.8 | 3.3 |
| Last commit | almost 2 years ago | 9 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dytox
Posts with mentions or reviews of dytox. We have used some of these posts to build our list of alternatives and similar projects.
- [D] Using special tokens for a domain-specific language in transformers

  Code for https://arxiv.org/abs/2111.11326 found: https://github.com/arthurdouillard/dytox
EfficientFormer
Posts with mentions or reviews of EfficientFormer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-16.
- A look at Apple’s new Transformer-powered predictive text model

  I'm pretty fatigued on constantly providing references and sources in this thread, but an example of what they've made publicly available: https://github.com/snap-research/EfficientFormer
- Snap and Northeastern University Researchers Propose EfficientFormer: A Vision Transformer That Runs As Fast As MobileNet While Maintaining High Performance
What are some alternatives?
When comparing dytox and EfficientFormer you can also consider the following projects:
CeiT - Implementation of Convolutional enhanced image Transformer
PyTorch-Model-Compare - Compare neural networks by their feature similarity
ml-cvnets - CVNets: A library for training computer vision networks
Efficient-AI-Backbones - Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
predictive-spy - Spying on Apple’s new predictive text model
llama.cpp - LLM inference in C/C++
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.