GPTQ-for-LLaMa

4-bit quantization of LLMs using GPTQ (by 0cc4m)

GPTQ-for-LLaMa Alternatives

Similar projects and alternatives to GPTQ-for-LLaMa

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number generally means a better GPTQ-for-LLaMa alternative or higher similarity.

GPTQ-for-LLaMa reviews and mentions

Posts with mentions or reviews of GPTQ-for-LLaMa. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-06-21.
  • How I made the pyg-charluv-13B model
    1 project | /r/Pygmalion_ai | 21 Jun 2023
    - Train on your dataset.
    - Add your created LoRA to the model on the model tab, then go back to the training tab, run perplexity, and check that the perplexity is no more than about 0.3 higher than before.
    - If it is, lower the training values (epochs etc.) and try again (first reload the model without the LoRA). If the perplexity is exactly the same, you can increase the values; it is trial and error. The lower the perplexity, the better.
    - When you have a LoRA that does not mess up the perplexity, you are done in Textgen and can upload your LoRA to Huggingface.
    - Merge using this gist https://gist.github.com/rondlite/c61a9eeb2904490abbc82ab6986cd5d0 (install the repo from the next step first and also `pip install peft`). Edit the gist so it has the right filenames for your project. (A minimal sketch of this merge step appears after this list.)
    - Quantize to 4 bits using https://github.com/0cc4m/GPTQ-for-LLaMa. You need to change the command to `CUDA_VISIBLE_DEVICES=0 python -m gptq.llama ./llama-hf/llama-7b c4 --wbits 4 --true-sequential --groupsize 128 --save llama7b-4bit.pt`, i.e. remove act-order and add groupsize (act-order and groupsize don't work together). Replace llama-hf/llama-7b with the name of your merged model. It is important to use the GPTQ-for-LLaMa from that GitHub repo, since that is v1 and is the only one that works fast with Kobold.
    - Test your end result for perplexity once more in Textgen (a rough perplexity check is sketched after this list).
    - In my case I had to do the entire process a few times over to finally get 5.109375 (which is on par with the original 13B model).
  • 👩🏻‍💻LLMs Mixes are here use Uncensored WizardLM+ MPT-7B storywriter
    1 project | /r/PygmalionAI | 13 May 2023
    Has anyone figured out how to quantize MPT models? Someone already did it for one of them: https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g. I tried using this GitHub repository but I couldn't get it to work.
  • WizardLM-7B-Uncensored
    1 project | /r/LocalLLaMA | 5 May 2023
    Any plans to upload a version that works with https://github.com/0cc4m/GPTQ-for-LLaMa ? I noticed most of the recent GPTQ ones you've uploaded don't load in that, which I believe is the only way to use quantized models with KoboldAI at this time. I suspect your GPTQ models were quantized with too new a version of GPTQ. What I've noticed is that if they're quantized with that 0cc4m GPTQ, they will work in the latest Ooba, but not vice versa.
  • Running LLaMa-7B-4bit?
    2 projects | /r/KoboldAI | 1 May 2023
    cd repos
    git clone https://github.com/0cc4m/GPTQ-for-LLaMa -b gptneox
    cd GPTQ-for-LLaMa
    python setup_cuda.py install
    cd ..
    cd ..
  • Stability AI Launches the First of its StableLM Suite of Language Models — Stability AI
    2 projects | /r/LocalLLaMA | 19 Apr 2023
    I would try with 0cc4m's fork. https://github.com/0cc4m/GPTQ-for-LLaMa
  • Alpaca 13B 4bit - load_quant() takes 3 positional arguments but 4 were given
    1 project | /r/KoboldAI | 14 Apr 2023
  • just curious, how do people on this sub run pyg?
    2 projects | /r/PygmalionAI | 31 Mar 2023
  • Any possibility to make Pygmalion 6B run in 4bit?
    4 projects | /r/PygmalionAI | 30 Mar 2023
    Now, where do I put the "GPTQ-for-LLaMa" folder?
  • Anyone already running LLaMA in KoboldAI?
    2 projects | /r/KoboldAI | 21 Mar 2023
    1) Download + unzip https://github.com/0cc4m/KoboldAI/tree/4bit
    2) Download + extract all files from this repo into the KoboldAI-4bit/repos folder: https://github.com/0cc4m/GPTQ-for-LLaMa/tree/gptneox
    3) Run install_requirements.bat as administrator
    4) When asked, type 1 and hit enter
    5) Unzip llama-7b-hf and/or llama-13b-hf into the KoboldAI-4bit/models folder
    6) Run play.sh as usual to start the Kobold interface
    7) You can now select the 4-bit models in the web UI via "AI > Load a model from its directory"
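
For the merge step in the pyg-charluv-13B walkthrough above, here is a minimal sketch of folding a trained LoRA into its base model with `peft` before quantization. The model path, adapter directory, and output name are placeholders, not the exact filenames the linked gist uses.

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Load the base model and tokenizer (path is a placeholder).
    base = AutoModelForCausalLM.from_pretrained("./llama-hf/llama-7b", torch_dtype="auto")
    tokenizer = AutoTokenizer.from_pretrained("./llama-hf/llama-7b")

    # Attach the trained LoRA adapter, then fold its weights into the base model.
    model = PeftModel.from_pretrained(base, "./my-lora")   # adapter dir is a placeholder
    model = model.merge_and_unload()

    # Save a plain HF checkpoint that the GPTQ quantization command can consume.
    model.save_pretrained("./llama-7b-merged")
    tokenizer.save_pretrained("./llama-7b-merged")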
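
For the "test perplexity again" step, a rough sanity check along these lines can be run against the merged checkpoint. The sample file, the 2048-token window, and the fp16/CUDA settings are assumptions, and this is not the exact evaluation Textgen performs.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(
        "./llama-7b-merged", torch_dtype=torch.float16
    ).cuda()
    tokenizer = AutoTokenizer.from_pretrained("./llama-7b-merged")

    text = open("sample.txt").read()                    # any representative text sample
    ids = tokenizer(text, return_tensors="pt").input_ids.cuda()

    losses = []
    with torch.no_grad():
        for i in range(0, ids.shape[1], 2048):          # non-overlapping 2048-token windows
            chunk = ids[:, i : i + 2048]
            if chunk.shape[1] < 2:                      # need at least two tokens for a loss
                break
            out = model(chunk, labels=chunk)            # HF shifts labels internally
            losses.append(out.loss)

    print("perplexity:", torch.exp(torch.stack(losses).mean()).item())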

Stats

Basic GPTQ-for-LLaMa repo stats
Mentions: 10
Stars: 44
Activity: 8.2
Last commit: 10 months ago

0cc4m/GPTQ-for-LLaMa is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.

The primary programming language of GPTQ-for-LLaMa is Python.

