FQ-ViT
[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer (by megvii-research)
transformer-quantization
By Qualcomm-AI-research
| | FQ-ViT | transformer-quantization |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 263 | 166 |
| Growth (stars, month over month) | 0.4% | 6.6% |
| Activity | 1.1 | 0.0 |
| Last commit | about 1 year ago | over 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 3-clause Clear License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars is the number of stars a project has on GitHub; Growth is the month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
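The exact weighting behind the Activity number isn't published here; the sketch below is one plausible interpretation only, assuming an exponential half-life decay over commit age (the `activity_score` function and the 30-day half-life are hypothetical, not the site's actual method).

```python
import math
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Hypothetical recency-weighted activity score: each commit
    contributes weight 0.5 ** (age_in_days / half_life_days),
    so recent commits count more than older ones."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: older commits contribute progressively less to the score.
commits = [
    datetime(2024, 6, 1, tzinfo=timezone.utc),
    datetime(2024, 3, 1, tzinfo=timezone.utc),
    datetime(2023, 6, 1, tzinfo=timezone.utc),
]
print(round(activity_score(commits), 3))
```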
FQ-ViT
Posts with mentions or reviews of FQ-ViT. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-06-20.
- How to quantize a Swin transformer model?
  This is my implementation of the approach I shared (https://github.com/megvii-research/FQ-ViT) on a small dataset from Kaggle (https://www.kaggle.com/datasets/gauravduttakiit/ants-bees), in this notebook: https://colab.research.google.com/drive/1cqnmosPIVZu3e2SwbO_VbevANk5MppVS?usp=sharing
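For context, a Swin model can be quantized without FQ-ViT's full pipeline as a starting point. Below is a minimal sketch using timm and PyTorch's built-in dynamic quantization; this is not FQ-ViT's API, and unlike FQ-ViT (which also quantizes LayerNorm and Softmax for a fully quantized model), it only quantizes Linear layers and leaves the rest in floating point. The model name and the calibration-free dynamic approach are illustrative assumptions.

```python
# Minimal post-training quantization sketch for a Swin transformer,
# using timm + PyTorch dynamic quantization (NOT FQ-ViT's method).
import torch
import timm

# Load a pretrained Swin model; any timm Swin variant works here.
model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True)
model.eval()

# Dynamically quantize all Linear layers to int8 weights;
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Sanity check: compare fp32 and int8 outputs on a dummy image.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    fp32_out = model(x)
    int8_out = quantized(x)
print(torch.mean((fp32_out - int8_out).abs()))
```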
transformer-quantization
Posts with mentions or reviews of transformer-quantization. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-06-20.
What are some alternatives?
When comparing FQ-ViT and transformer-quantization you can also consider the following projects:
Efficient-AI-Backbones - Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
Sparsebit - A model compression and acceleration toolbox based on pytorch.