| | llm-awq | InternLM |
|---|---|---|
| Mentions | 7 | 10 |
| Stars | 1,902 | 5,275 |
| Growth | 10.9% | 6.5% |
| Activity | 8.0 | 9.0 |
| Last commit | 8 days ago | 29 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-awq
- TinyChat: Large Language Model on the Edge
TinyChat is an efficient, lightweight, Python-native serving framework for 4-bit LLMs quantized with AWQ. It delivers a 2.3x generation speedup on an RTX 4090.
Code: https://github.com/mit-han-lab/llm-awq/tree/main/tinychat
- FLaNK Stack Weekly 23 Oct 2023
- New base model InternLM 7B weights released, with 8k context window.
I am having trouble finding any 8-bit GPTQ models at all; there don't seem to be any on HF. It's almost all 4-bit, with the odd 3-bit version of the big ones. I suspect I will have to make my own for eval purposes, but that's lower priority on my list than finding a 4-bit that's GPU friendly but doesn't have such a performance penalty. Looking at AWQ, they have 3-bit and 4-bit versions.
- Llama33B vs Falcon40B vs MPT30B
With the currently popular GPTQ, 3-bit quantization hurts performance much more than 4-bit, but there are also AWQ (https://github.com/mit-han-lab/llm-awq) and SqueezeLLM (https://github.com/SqueezeAILab/SqueezeLLM), which manage 3-bit without as much of a performance drop. I hope to see them used more commonly.
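As a rough illustration of why dropping from 4-bit to 3-bit hurts, here is a minimal round-to-nearest group quantizer in NumPy. This is not AWQ itself (which additionally rescales salient weight channels using activation statistics before quantizing); it is a generic sketch, with the group size of 128 chosen as a typical value:

```python
import numpy as np

def quantize_dequantize(w, bits, group_size=128):
    # Round-to-nearest uniform quantization per group of weights,
    # then dequantize back to float to measure the error introduced.
    w = w.reshape(-1, group_size)
    levels = 2 ** bits - 1
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / levels       # step size per group
    q = np.round((w - w_min) / scale)      # integer codes in [0, levels]
    return (q * scale + w_min).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=4096 * 128).astype(np.float32)
for bits in (4, 3):
    err = np.mean((w - quantize_dequantize(w, bits)) ** 2)
    print(f"{bits}-bit mean squared error: {err:.2e}")
```

Removing one bit roughly doubles the quantization step, so the 3-bit reconstruction error is several times larger than the 4-bit one; methods like AWQ and SqueezeLLM spend extra effort protecting the few weights where that error matters most.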
- New hardware-friendly quantization method
- Activation-Aware Weight Quantization for LLM Compression Outperforms GPTQ
Better quantization would have a direct and meaningful impact for everyone running local LLMs. The technique has already been applied to both Vicuna and the multimodal LLaMA variant LLaVA.
https://github.com/mit-han-lab/llm-awq
- New quantization method AWQ outperforms GPTQ in 4-bit and 3-bit, with a 1.45x speedup, and works with multimodal LLMs
GitHub: https://github.com/mit-han-lab/llm-awq
InternLM
- InternLM2
- AI & Machine Learning in July 08th 2023: Recap
5- https://github.com/InternLM/InternLM
- New base model InternLM 7B weights released, with 8k context window.
- InternLM – new open source 7B LLM
Maybe the license tag on Hugging Face is wrong? On GitHub, the README says:
> The code in this repository is open-source under the Apache-2.0 license. The InternLM weights are fully open for academic research and also allow commercial use with written permission from the official team. For inquiries about commercial licenses and collaborations, please contact [email protected].
https://github.com/InternLM/InternLM#open-source-license
- InternLM new open source 7B LLM
What are some alternatives?
SqueezeLLM - [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
pyllms - Minimal Python library to connect to LLMs (OpenAI, Anthropic, AI21, Cohere, Aleph Alpha, HuggingfaceHub, Google PaLM2), with a built-in model performance benchmark.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
InternLM-techreport
Voyager - An Open-Ended Embodied Agent with Large Language Models
langchain4j-examples
CML_AMP_AI_Text_Summarization_with_Amazon_Bedrock
kafka-streams-dashboards - Showcases Grafana dashboards for Kafka Streams applications leveraging client JMX metrics.
data-in-motion - Repository for Data In Motion tutorials, starting with Data Distribution.
pejorative-compounds - Analysing patterns in English noun-noun pejorative compounds on Reddit
optiagent - Autonomous agents for competitive intelligence.
stable-audio-tools - Generative models for conditional audio generation