SqueezeLLM
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (by SqueezeAILab)
Qwen-7B
The official repository of Qwen (通义千问), the chat and pretrained large language model proposed by Alibaba Cloud. [Moved to: https://github.com/QwenLM/Qwen] (by QwenLM)
| | SqueezeLLM | Qwen-7B |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 573 | 5,030 |
| Stars growth (monthly) | 4.0% | - |
| Activity | 6.9 | 8.3 |
| Last commit | 16 days ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SqueezeLLM
Posts with mentions or reviews of SqueezeLLM.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-05.
- Llama33B vs Falcon40B vs MPT30B

  Using the currently popular GPTQ, 3-bit quantization hurts performance much more than 4-bit, but there are also AWQ (https://github.com/mit-han-lab/llm-awq) and SqueezeLLM (https://github.com/SqueezeAILab/SqueezeLLM), which manage 3-bit without as much of a performance drop - I hope to see them used more commonly.
- Has anyone tried out SqueezeLLM?

  [Paper][Github][Model]

  - SqueezeLLM: Dense-and-Sparse Quantization
  - The new quantization method SqueezeLLM allows lossless compression at 3-bit and outperforms GPTQ and AWQ at both 3-bit and 4-bit. Quantized Vicuna and LLaMA models have been released.
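The "dense-and-sparse" decomposition named in the paper title can be illustrated with a toy sketch: a small fraction of large-magnitude outlier weights is kept exactly in a sparse matrix, while the remaining dense weights are mapped to a small non-uniform codebook. The sketch below is a minimal illustration only, assuming NumPy and plain k-means as a stand-in for the paper's sensitivity-weighted clustering; the function name, the 0.5% outlier fraction, and the iteration count are illustrative choices, not the official SqueezeLLM implementation.

```python
# Toy sketch (NOT the official SqueezeLLM code) of dense-and-sparse quantization:
# outlier weights are pulled into a sparse full-precision matrix, and the
# remaining dense part is quantized to a 2**bits-entry codebook via plain
# k-means. Thresholds and sizes are illustrative assumptions.
import numpy as np

def dense_and_sparse_quantize(W, bits=3, outlier_pct=0.5):
    """Split W into a sparse outlier matrix plus a dense low-bit part."""
    # 1. Treat the largest-magnitude weights as outliers and keep them exact.
    threshold = np.percentile(np.abs(W), 100 - outlier_pct)
    outlier_mask = np.abs(W) >= threshold
    sparse = np.where(outlier_mask, W, 0.0)   # kept in full precision
    dense = np.where(outlier_mask, 0.0, W)    # to be quantized

    # 2. Non-uniform quantization of the dense part: fit 2**bits centroids
    #    with a few Lloyd (k-means) iterations over the remaining weights.
    values = dense[~outlier_mask].ravel()
    centroids = np.quantile(values, np.linspace(0, 1, 2 ** bits))
    for _ in range(10):
        idx = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(len(centroids)):
            if np.any(idx == k):
                centroids[k] = values[idx == k].mean()

    # 3. Replace each dense weight with its nearest centroid; outlier slots
    #    stay zero so the sparse matrix carries their exact values.
    codes = np.abs(dense[..., None] - centroids).argmin(axis=-1)
    dense_q = centroids[codes]
    dense_q[outlier_mask] = 0.0
    return dense_q, sparse, centroids

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 256)).astype(np.float32)
    W[rng.random(W.shape) < 0.001] *= 20      # inject a few outliers

    dq3, sp3, _ = dense_and_sparse_quantize(W, bits=3)
    dq4, sp4, _ = dense_and_sparse_quantize(W, bits=4)
    err3 = np.abs((dq3 + sp3) - W).mean()
    err4 = np.abs((dq4 + sp4) - W).mean()
    print(f"mean abs reconstruction error: 3-bit {err3:.4f}  4-bit {err4:.4f}")
```

On a random weight matrix with a few injected outliers, the printed reconstruction errors show the expected gap between 3-bit and 4-bit codebooks, while the sparse outlier matrix keeps the largest weights exact regardless of bit width.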
Qwen-7B
Posts with mentions or reviews of Qwen-7B.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-07.
What are some alternatives?
When comparing SqueezeLLM and Qwen-7B you can also consider the following projects:
llm-awq - [MLSys 2024] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
EverythingApacheNiFi - EverythingApacheNiFi