KoboldAI VS GPTQ-for-LLaMa

Compare KoboldAI vs GPTQ-for-LLaMa and see what their differences are.

                 KoboldAI                                   GPTQ-for-LLaMa
Mentions         58                                         10
Stars            150                                        44
Growth           -                                          -
Activity         8.6                                        8.2
Latest Commit    7 months ago                               10 months ago
Language         Python                                     Python
License          GNU Affero General Public License v3.0     Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
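The exact weighting behind the activity number isn't published. As a rough illustration only, here is one way such a recency-weighted score could be computed; the half-life decay and the 90-day constant are assumptions for the sketch, not the site's actual formula:

    from datetime import datetime, timezone

    def activity_score(commit_dates, half_life_days=90):
        """Toy recency-weighted commit score: each commit's weight
        halves every `half_life_days`, so recent commits count more.
        Illustrative only -- not the aggregator's real formula."""
        now = datetime.now(timezone.utc)
        return sum(
            0.5 ** ((now - d).days / half_life_days)
            for d in commit_dates
        )

    # Example: a burst of recent commits scores higher than the same
    # number of commits made long ago.
    recent = [datetime(2024, 1, d, tzinfo=timezone.utc) for d in (2, 9, 16)]
    stale = [datetime(2022, 1, d, tzinfo=timezone.utc) for d in (2, 9, 16)]
    assert activity_score(recent) > activity_score(stale)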

KoboldAI

Posts with mentions or reviews of KoboldAI. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-09.

GPTQ-for-LLaMa

Posts with mentions or reviews of GPTQ-for-LLaMa. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • How I made the pyg-charluv-13B model
    1 project | /r/Pygmalion_ai | 21 Jun 2023
    - Train your dataset.
    - Add your created LoRA to the model on the Model tab.
    - Go back to the perplexity evaluation on the Training tab and check that the perplexity is no more than about 0.3 higher than before.
    - If it is, lower the training values (epochs etc.) and try again (first reload the model without the LoRA). If the perplexity is exactly the same, you can increase the values; it is trial and error. The lower the perplexity, the better.
    - Once you have a good LoRA that does not mess up the perplexity, you are done in Textgen and can upload your LoRA to Huggingface.
    - Merge using this gist: https://gist.github.com/rondlite/c61a9eeb2904490abbc82ab6986cd5d0 (install the repo from the next step first and also `pip install peft`). Edit the gist so it has the right filenames for your project.
    - Quantize to 4 bits using https://github.com/0cc4m/GPTQ-for-LLaMa. You need to change the command to `CUDA_VISIBLE_DEVICES=0 python -m gptq.llama ./llama-hf/llama-7b c4 --wbits 4 --true-sequential --groupsize 128 --save llama7b-4bit.pt`, i.e. remove act-order and add groupsize (act-order and groupsize don't work together), and replace llama-hf/llama-7b with the name of your merged model. It is important to use the GPTQ-for-LLaMa from that GitHub repo, since that is v1 and is the only one that works fast with Kobold.
    - Test your end result for perplexity once more in Textgen.
    - In my case I had to do the entire process a few times over to finally get 5.109375 (which is on par with the original 13B model).
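    The gist linked in the merge step above isn't reproduced here, but a minimal sketch of merging a LoRA into its base model with peft looks roughly like this; the model and adapter paths are placeholders, and the actual gist may differ in the details:

        import torch
        from peft import PeftModel
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Load the fp16 base model (placeholder path).
        base = AutoModelForCausalLM.from_pretrained(
            "./llama-hf/llama-13b", torch_dtype=torch.float16
        )

        # Attach the trained LoRA adapter (placeholder path) and fold
        # its weights into the base model so the result can be quantized.
        model = PeftModel.from_pretrained(base, "./my-lora")
        merged = model.merge_and_unload()

        # The merged folder is what the GPTQ-for-LLaMa command gets
        # pointed at in the quantization step.
        merged.save_pretrained("./llama-13b-merged")
        tok = AutoTokenizer.from_pretrained("./llama-hf/llama-13b")
        tok.save_pretrained("./llama-13b-merged")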
  • 👩🏻‍💻LLMs Mixes are here use Uncensored WizardLM+ MPT-7B storywriter
    1 project | /r/PygmalionAI | 13 May 2023
    Has anyone figured out how to quantize MPT models? Someone already did it for one of them (https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g), but I tried using this GitHub repository and couldn't get it to work.
  • WizardLM-7B-Uncensored
    1 project | /r/LocalLLaMA | 5 May 2023
    Any plans to upload a version that works with https://github.com/0cc4m/GPTQ-for-LLaMa ? I noticed most of the recent GPTQ ones you've uploaded don't load in that, which I believe is the only way to use quantized models with KoboldAI at this time. I suspect your GPTQ models were quantized with too new a version of GPTQ. What I've noticed is that models quantized with that 0cc4m GPTQ will work in the latest Ooba, but not vice versa.
  • Running LLaMa-7B-4bit?
    2 projects | /r/KoboldAI | 1 May 2023
    cd repos
    git clone https://github.com/0cc4m/GPTQ-for-LLaMa -b gptneox
    cd GPTQ-for-LLaMa
    python setup_cuda.py install
    cd ..
    cd ..
  • Stability AI Launches the First of its StableLM Suite of Language Models — Stability AI
    2 projects | /r/LocalLLaMA | 19 Apr 2023
    I would try with 0cc4m's fork. https://github.com/0cc4m/GPTQ-for-LLaMa
  • Alpaca 13B 4bit - load_quant() takes 3 positional arguments but 4 were given
    1 project | /r/KoboldAI | 14 Apr 2023
  • just curious, how do people on this sub run pyg?
    2 projects | /r/PygmalionAI | 31 Mar 2023
  • Any possibility to make Pygmalion 6B run in 4bit?
    4 projects | /r/PygmalionAI | 30 Mar 2023
    Now, where do I put the "GPTQ-for-LLaMa" folder?
  • Anyone already running LLaMA in KoboldAI?
    2 projects | /r/KoboldAI | 21 Mar 2023
    1) Download + unzip https://github.com/0cc4m/KoboldAI/tree/4bit
    2) Download + extract all files from this repo into the KoboldAI-4bit/repos folder: https://github.com/0cc4m/GPTQ-for-LLaMa/tree/gptneox
    3) Run install_requirements.bat as administrator
    4) When asked, type 1 and hit enter
    5) Unzip llama-7b-hf and/or llama-13b-hf into the KoboldAI-4bit/models folder
    6) Run play.sh as usual to start the Kobold interface
    7) You can now select the 4bit models in the webui via "AI > Load a model from its directory"

What are some alternatives?

When comparing KoboldAI and GPTQ-for-LLaMa you can also consider the following projects:

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

pygmalion.cpp - C/C++ implementation of PygmalionAI/pygmalion-6b

exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.

Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)

SillyTavern - LLM Frontend for Power Users.

llama-cpp-python - Python bindings for llama.cpp

KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!

KoboldAI-Horde-Bridge - Turns KoboldAI into a crowdsourced distributed cluster

KoboldAI-Client
