LLaMA-8bit-LoRA
Repository for Chat LLaMA: training a LoRA for the LLaMA (1 or 2) models on Hugging Face with 8-bit or 4-bit quantization. Research use only. (by serp-ai)
sparsegpt-for-LLaMA
Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. (by AlpinDale)
| | LLaMA-8bit-LoRA | sparsegpt-for-LLaMA |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 145 | 65 |
| Growth | 0.7% | - |
| Activity | 5.1 | 5.2 |
| Last commit | 8 months ago | about 1 year ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
The number of mentions is the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaMA-8bit-LoRA
Posts with mentions or reviews of LLaMA-8bit-LoRA.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-04-06.
- Any news on training LoRAs in 4-bit mode?
  https://github.com/serp-ai/LLaMA-8bit-LoRA/blob/main/docs/merging_the_weights.md (how to merge the weights)
- [R] 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
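The merging doc linked in the mention above folds a trained LoRA adapter back into the base weights. As a rough illustration of why that is lossless (a toy numpy sketch using the standard LoRA formulation, not the repo's actual code or shapes):

```python
import numpy as np

# Toy LoRA sketch: the base weight W is frozen, only the small matrices
# A and B are trained, and "merging" folds scale * B @ A into W.
# Sizes, init scales, and the alpha/r factor here are illustrative
# assumptions, not taken from LLaMA-8bit-LoRA.

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 6, 4, 2, 4              # r is the LoRA rank
W = rng.standard_normal((d_out, d_in))          # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01       # trainable down-projection
B = rng.standard_normal((d_out, r)) * 0.01      # trainable up-projection
scale = alpha / r

x = rng.standard_normal(d_in)

# Forward pass with the adapter kept separate (as during training)...
y_adapter = W @ x + scale * (B @ (A @ x))

# ...and after merging: a single matmul with W' = W + scale * B @ A.
W_merged = W + scale * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)         # identical outputs
```

Because the merged matrix has the same shape as `W`, the adapter adds no inference cost once folded in; the trade-off is that the merged weights can no longer be kept in 8-bit form without re-quantizing.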
sparsegpt-for-LLaMA
Posts with mentions or reviews of sparsegpt-for-LLaMA.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-03.
- SparseGPT: Language Models Can Be Accurately Pruned in One-Shot
  https://github.com/AlpinDale/sparsegpt-for-LLaMA
  > # Prune to 50% + 4-bit with SparseGPT -- Currently not working
- [R] 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
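The "prune to 50%" step quoted above zeroes half of each weight matrix in one shot. Real SparseGPT selects weights using second-order (Hessian-based) reconstruction; the sketch below substitutes plain magnitude pruning, a simpler baseline, just to show what a 50% unstructured sparsity mask looks like. This is a toy numpy illustration, not the repo's implementation:

```python
import numpy as np

# Magnitude-pruning stand-in for SparseGPT's one-shot 50% pruning:
# keep the largest-magnitude half of the weights, zero the rest.
# (SparseGPT itself picks weights via Hessian-based reconstruction.)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))                 # toy weight matrix

k = W.size // 2                                 # number of weights to keep
threshold = np.sort(np.abs(W), axis=None)[W.size - k]
mask = np.abs(W) >= threshold                   # True for the kept half
W_pruned = W * mask

sparsity = 1.0 - mask.mean()                    # fraction of zeroed weights
assert abs(sparsity - 0.5) < 1e-9
```

A 50% unstructured mask like this halves the nonzero parameter count but needs sparse kernels (or a structured pattern such as 2:4) to translate into actual speedups.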
What are some alternatives?
When comparing LLaMA-8bit-LoRA and sparsegpt-for-LLaMA you can also consider the following projects:
alpaca-lora - Instruct-tune LLaMA on consumer hardware
serge - A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
text-generation-webui-testing - A fork of textgen that still supports V1 GPTQ, 4-bit lora and other GPTQ models besides llama.
Sparsebit - A model compression and acceleration toolbox based on PyTorch.
trl - Train transformer language models with reinforcement learning.
alpaca_lora_4bit
llama.cpp - LLM inference in C/C++
sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
LLaMA-8bit-LoRA vs alpaca-lora
sparsegpt-for-LLaMA vs serge
LLaMA-8bit-LoRA vs text-generation-webui-testing
sparsegpt-for-LLaMA vs Sparsebit
LLaMA-8bit-LoRA vs trl
sparsegpt-for-LLaMA vs trl
LLaMA-8bit-LoRA vs Sparsebit
sparsegpt-for-LLaMA vs alpaca-lora
LLaMA-8bit-LoRA vs alpaca_lora_4bit
sparsegpt-for-LLaMA vs llama.cpp
sparsegpt-for-LLaMA vs sparsegpt