GPTQ-for-LLaMa VS Open-Assistant

Compare GPTQ-for-LLaMa and Open-Assistant to see how they differ.

GPTQ-for-LLaMa

4-bit quantization of LLMs using GPTQ (by 0cc4m)

Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so. (by LAION-AI)
                GPTQ-for-LLaMa      Open-Assistant
Mentions        10                  329
Stars           45                  36,749
Growth          -                   0.3%
Activity        8.2                 7.4
Latest commit   11 months ago       about 1 month ago
Language        Python              Python
License         Apache License 2.0  Apache License 2.0
Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

GPTQ-for-LLaMa

Posts with mentions or reviews of GPTQ-for-LLaMa. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-21.
  • How I made the pyg-charluv-13B model
    1 project | /r/Pygmalion_ai | 21 Jun 2023
    - Train your dataset.
    - Add your created LoRA to the model on the model tab.
    - Go back to the training tab's perplexity evaluation and check that the perplexity is no more than about 0.3 higher than before.
    - If it is, lower the training values (epochs etc.) and try again (first reload the model without the LoRA). If the perplexity is exactly the same, you can increase the values; it is trial and error. The lower the perplexity, the better.
    - Once you have a good LoRA that does not mess up the perplexity, you are done in Textgen and can upload your LoRA to Hugging Face.
    - Merge using this gist: https://gist.github.com/rondlite/c61a9eeb2904490abbc82ab6986cd5d0 (install the repo from the next step first, and also `pip install peft`). Edit the gist so it has the right filenames for your project.
    - Quantize to 4 bits using https://github.com/0cc4m/GPTQ-for-LLaMa. You need to change the command to `CUDA_VISIBLE_DEVICES=0 python -m gptq.llama ./llama-hf/llama-7b c4 --wbits 4 --true-sequential --groupsize 128 --save llama7b-4bit.pt`, i.e. remove act-order and add a groupsize (act-order and groupsize don't work together), and replace llama-hf/llama-7b with the name of your merged model. It is important to use the GPTQ-for-LLaMa from that GitHub repo, since it is v1 and the only version that works fast with Kobold.
    - Test your end result for perplexity once more in Textgen.
    - In my case I had to do the entire process a few times over to finally get 5.109375 (which is on par with the original 13B model).
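    The merge step the post describes boils down to attaching the LoRA adapters to the base model with peft and folding them into the base weights. A minimal sketch, assuming a Transformers-format base model and a LoRA checkpoint on disk (the directory names are illustrative placeholders, not the poster's actual paths):

        # Minimal LoRA-merge sketch using peft; all paths are placeholders.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import PeftModel

        base = AutoModelForCausalLM.from_pretrained("./llama-hf/llama-7b")
        model = PeftModel.from_pretrained(base, "./my-lora")  # attach the LoRA adapters
        model = model.merge_and_unload()                      # fold adapter weights into the base model

        # The merged directory is what the GPTQ command in the next step quantizes.
        model.save_pretrained("./llama-7b-merged")
        AutoTokenizer.from_pretrained("./llama-hf/llama-7b").save_pretrained("./llama-7b-merged")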
  • 👩🏻‍💻LLMs Mixes are here use Uncensored WizardLM+ MPT-7B storywriter
    1 project | /r/PygmalionAI | 13 May 2023
    Has anyone figured out how to quantize MPT models? Someone already did it for one of them: https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g. I tried using this GitHub repository, but I couldn't get it to work.
  • WizardLM-7B-Uncensored
    1 project | /r/LocalLLaMA | 5 May 2023
    Any plans to upload a version that works with https://github.com/0cc4m/GPTQ-for-LLaMa ? I noticed most of the recent GPTQ ones you've uploaded don't load in it, which I believe is the only way to use quantized models with KoboldAI at this time. I suspect your GPTQ models were quantized with too new a version of GPTQ. What I've noticed is that models quantized with that 0cc4m GPTQ will work in the latest Ooba, but not vice versa.
  • Running LLaMa-7B-4bit?
    2 projects | /r/KoboldAI | 1 May 2023
    cd repos
    git clone https://github.com/0cc4m/GPTQ-for-LLaMa -b gptneox
    cd GPTQ-for-LLaMa
    python setup_cuda.py install
    cd ..
    cd ..
  • Stability AI Launches the First of its StableLM Suite of Language Models — Stability AI
    2 projects | /r/LocalLLaMA | 19 Apr 2023
    I would try with 0cc4m's fork. https://github.com/0cc4m/GPTQ-for-LLaMa
  • Alpaca 13B 4bit - load_quant() takes 3 positional arguments but 4 were given
    1 project | /r/KoboldAI | 14 Apr 2023
  • just curious, how do people on this sub run pyg?
    2 projects | /r/PygmalionAI | 31 Mar 2023
  • Any possibility to make Pygmalion 6B run in 4bit?
    4 projects | /r/PygmalionAI | 30 Mar 2023
    Now, where do I put the "GPTQ-for-LLaMa" folder?
  • Anyone already running LLaMA in KoboldAI?
    2 projects | /r/KoboldAI | 21 Mar 2023
    1) Download + unzip https://github.com/0cc4m/KoboldAI/tree/4bit
    2) Download + extract all files from this repo into the KoboldAI-4bit/repos folder: https://github.com/0cc4m/GPTQ-for-LLaMa/tree/gptneox
    3) Run install_requirements.bat as administrator
    4) When asked, type 1 and hit enter
    5) Unzip llama-7b-hf and/or llama-13b-hf into the KoboldAI-4bit/models folder
    6) Run play.sh as usual to start the Kobold interface
    7) You can now select the 8bit models in the web UI via "AI > Load a model from its directory"

Open-Assistant

Posts with mentions or reviews of Open-Assistant. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-08.

What are some alternatives?

When comparing GPTQ-for-LLaMa and Open-Assistant you can also consider the following projects:

KoboldAI

KoboldAI-Client

pygmalion.cpp - C/C++ implementation of PygmalionAI/pygmalion-6b

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

llama.cpp - LLM inference in C/C++

llama - Inference code for Llama models

gpt4all - gpt4all: run open-source LLMs anywhere

stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.

llama_index - LlamaIndex is a data framework for your LLM applications

Home Assistant - Open source home automation that puts local control and privacy first.

StableLM - StableLM: Stability AI Language Models

ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
