GPTQ-for-LLaMa
Local-LLM-Comparison-Colab-UI
| | GPTQ-for-LLaMa | Local-LLM-Comparison-Colab-UI |
|---|---|---|
| Mentions | 19 | 20 |
| Stars | 129 | 868 |
| Growth | - | - |
| Activity | 7.7 | 9.1 |
| Latest commit | 11 months ago | 3 days ago |
| Language | Python | Jupyter Notebook |
| License | - | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa
-
I have tried various different methods to install, and none work. Can you spoon-feed me how?
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
-
Query output random text
If you're using the model directly from ehartford, that one hasn't been quantized. Try using the GPTQ quantized version here, and use this fork of GPTQ-for-LLaMa. Load in 4-bit with --wbits 4
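As a sketch of what that launch could look like (only `--wbits 4` comes from the comment above; the model directory name and the `--groupsize` value are assumptions that depend on how the model was quantized):

```shell
# Launch text-generation-webui with a GPTQ-quantized model loaded in 4-bit.
# The model folder name and --groupsize 128 are hypothetical examples.
python server.py --model TheBloke_WizardLM-7B-GPTQ --wbits 4 --groupsize 128
```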
-
Help needed with installing quant_cuda for the WebUI
This worked for me on Ubuntu. If you want to use the CUDA branch instead of triton, do the same steps except clone this GPTQ-for-LLaMa fork and run python setup_cuda.py install
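Putting those steps together, a minimal install sketch for the CUDA branch (paths assume a standard text-generation-webui checkout; building the extension requires the CUDA toolkit with `nvcc` available):

```shell
# Clone the CUDA branch of the fork into the webui's repositories folder.
cd text-generation-webui
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
# Build and install the quant_cuda extension.
python setup_cuda.py install
```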
-
AutoGPTQ vs GPTQ-for-llama?
If you don't have Triton and you use AutoGPTQ, you're going to notice a huge slowdown compared to the old GPTQ-for-LLaMa CUDA branch. For me, AutoGPTQ gives a whopping 1 token per second compared to the old GPTQ's decent 9 tokens per second; both times I used the same-sized model. (I think the slowdown is due to AutoGPTQ using the newer CUDA branch, which is much slower than the old one.)
-
Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
Are you using a later version of GPTQ-for-LLaMa? If so, go to ooba's CUDA fork (https://github.com/oobabooga/GPTQ-for-LLaMa). That's what I made it in and it definitely works with that. And that's what's included in the one-click-installers.
-
Any idea why a Vicuna 13B 4-bit model outputs random content?
This usually happens when using models that conflict with your GPTQ installation. You should be using this fork: https://github.com/oobabooga/GPTQ-for-LLaMa. If you did the manual installation wrong, use the one click installer instead.
-
GPT4All: A little helper to get started
cd text-generation-webui  # wherever you have it installed
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
- wizard-vicuna-13B • Hugging Face
-
Anyone actually running 30b/65b at reasonably high speed? What's your rig?
My GPTQ-for-LLaMa folder under repositories says it's pointed at https://github.com/oobabooga/GPTQ-for-LLaMa.git. But I've run through the instructions and also applied the monkey patch to train and apply a 4-bit LoRA, which may come into play. No idea.
-
Trying to run TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g with latest GPTQ-for-LLaMa CUDA branch
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
Local-LLM-Comparison-Colab-UI
- Mistral 7B OpenOrca outclasses Llama 2 13B variants
-
GPT-4 API general availability
In terms of speed, we're talking about 140t/s for 7B models, and 40t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 word) It's quite zippy. llama.cpp performs close on Nvidia GPUs now (but they don't have a handy chart) and you can get decent performance on 13B models on M1/M2 Macs.
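The token-to-word conversion quoted there is easy to sanity-check; here is a quick sketch using the 0.75 words-per-token rule of thumb from the comment (the function name is my own):

```python
def tokens_per_sec_to_words_per_min(tps, words_per_token=0.75):
    """Convert generation speed in tokens/s to approximate words/min."""
    return tps * words_per_token * 60

# 140 t/s (7B on a 3090/4090) ~= 6300 words/min; 40 t/s (33B) ~= 1800 words/min
print(tokens_per_sec_to_words_per_min(140))  # 6300.0
print(tokens_per_sec_to_words_per_min(40))   # 1800.0
```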
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the HumanEval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]). I've only just started playing around with it since the replit model tooling is not as good as the llamas' (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
-
Best 7B model
The best 7B I tried is WizardLM. It's my go-to model.
-
UltraLM-13B reaches top of AlpacaEval leaderboard
If you want to try it out, you can use Google Colab here with Oobabooga Text Generation UI: Link (Remember to check the instruction template and generation parameters)
-
wizardLM-7B.q4_2
I'm really impressed by wizardLM-7B.q4_2 (GPT4all) running on my 8gb M2 Mac Air. Fast response, fewer hallucinations than other 7B models I've tried. GPT4All's beta document collection and query function is respectable--going to test it more tomorrow. FWIW wizardLM-7B.q4_2 was ranked very high here https://github.com/Troyanovsky/Local-LLM-comparison.
-
Help me discover new LLMs for school project
I made a series of Colab notebooks for different models: https://github.com/Troyanovsky/Local-LLM-comparison
-
Nous Hermes 13b is very good.
I found it performing very well too in my testing (Repo). It's my second favorite model after WizardLM-13B.
- How to train 7B models with small documents?
-
What are your favorite LLMs?
My entire list at: Local LLM Comparison Repo
-
Announcing Nous-Hermes-13b (info link in thread)
I just tried HyperMantis and updated the results in the repo. It performs reasonably well, but worse than Nous-Hermes-13B.
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
simple-proxy-for-tavern
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
alpaca_eval - An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
can-ai-code - Self-evaluating interview for AI coders