KoboldAI vs GPTQ-for-LLaMa
| | KoboldAI | GPTQ-for-LLaMa |
|---|---|---|
| Mentions | 58 | 10 |
| Stars | 150 | 44 |
| Growth | - | - |
| Activity | 8.6 | 8.2 |
| Latest commit | 7 months ago | 10 months ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KoboldAI
- Any good models with 6GB VRAM?
- For some reason, I can't download AI models from Kobold; how can I download them individually?
- ChatGPT users drop for the first time as people turn to uncensored chatbots
Pygmalion for chatting, Erebus for story writing, Wizard Vicuna Uncensored for general use including chatting, story writing, or instructing, to name a few. There are lots more, but most are just the raw models that users are expected to load themselves, so there isn't anything quite like ChatGPT where you just load up a single website and have all of them there. If you have a powerful enough PC to run these models yourself, you'll have to set up a local install using KoboldAI (GPU only), KoboldCPP (CPU, with optional splitting between CPU and GPU), or Oobabooga (CPU, GPU, and splitting between the two). A PC with a 3090 can load up to a 30B model entirely in GPU, or a 65B model if you also have 64 GB of RAM and a decently powerful CPU. If you don't have a powerful enough PC, you'll have to use something like KoboldAI Lite (the website version) or SillyTavern to use the Horde, a crowdsourced AI chatbot/LLM runner that lets people provide their hardware for others to use.
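As a rough check on those size claims, here is a back-of-the-envelope estimate of the VRAM needed for 4-bit quantized weights; the overhead factor below is an assumption for illustration, not a measured figure.

```python
# Back-of-the-envelope VRAM estimate for 4-bit quantized weights.
# The 1.2 overhead factor (activations, KV cache, fragmentation) is an
# assumption for illustration, not a measured figure.
def approx_vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    weight_gb = params_billion * bits / 8  # 1e9 params * (bits/8) bytes each ≈ GB
    return weight_gb * overhead

print(approx_vram_gb(30))  # ~18 GB -> fits a 24 GB RTX 3090 entirely in VRAM
print(approx_vram_gb(65))  # ~39 GB -> needs splitting between GPU and system RAM
```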
- Alr boys, how do I sign into Kobold?
- Poe down so here a meme
- GPU running out of memory despite meeting requirements?
- Any way to get a 13b model running on a 4070?
- Remote play .bat doesn't work for me, how do I fix it?
`INFO | __main__:general_startup:1312 - Running on Repo: https://github.com/0cc4m/koboldai Branch: latestgptq`
- Help with KoboldAI API not generating responses
- Anyone tried this promising-sounding release? WizardLM-33B-V1.0-Uncensored-SUPERHOT-8K
Occam seems to be working on adding that to Kobold (https://github.com/0cc4m/KoboldAI/tree/4bit-plugin).
GPTQ-for-LLaMa
- How I made the pyg-charluv-13B model
- Train your dataset.
- Add your created LoRA to the model on the model tab.
- Go back to the training tab, check perplexity, and test that the perplexity is no more than about 0.3 higher than before.
- If it is, lower the values in the training (epochs etc.) and try again (first reload the model without the LoRA). If the perplexity is exactly the same, you can increase the values; it is trial and error. The lower the perplexity, the better.
- When you have a good LoRA that does not mess up the perplexity, you are done in Textgen and can upload your LoRA to Huggingface.
- Merge using this gist: https://gist.github.com/rondlite/c61a9eeb2904490abbc82ab6986cd5d0 (install the repo from the next step first and also `pip install peft`). Edit the gist so it has the right filenames for your project (see the sketch after this list).
- Quantize to 4 bits using https://github.com/0cc4m/GPTQ-for-LLaMa. You need to change the command to `CUDA_VISIBLE_DEVICES=0 python -m gptq.llama ./llama-hf/llama-7b c4 --wbits 4 --true-sequential --groupsize 128 --save llama7b-4bit.pt`, i.e. remove act-order and add groupsize (act-order and groupsize don't work together), and replace llama-hf/llama-7b with the name of your merged model. It is important to use the GPTQ-for-LLaMa from that GitHub repo, since that is v1 and the only one that works fast with Kobold.
- Test your end result for perplexity once more in Textgen.
- In my case I had to do the entire process a few times over to finally get 5.109375 (which is on par with the original 13B model).
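Since the gist itself isn't reproduced here, the merge step can be sketched with peft's standard adapter-merging API; the paths and model name below are placeholders, and the actual gist may differ in detail.

```python
# Minimal sketch of merging a trained LoRA into its base model with peft.
# All paths are placeholders; substitute your own base model and adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("./llama-hf/llama-13b")
model = PeftModel.from_pretrained(base, "./my-lora")  # attach the LoRA adapter
model = model.merge_and_unload()  # fold the adapter weights into the base weights
model.save_pretrained("./llama-13b-merged")  # this merged folder is what you quantize
AutoTokenizer.from_pretrained("./llama-hf/llama-13b").save_pretrained("./llama-13b-merged")
```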
- 👩🏻‍💻 LLM mixes are here: use Uncensored WizardLM + MPT-7B storywriter
Has anyone figured out how to quantize MPT models, considering that someone already did it for one of them (https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g)? I tried using this GitHub repository but I couldn't get it to work.
- WizardLM-7B-Uncensored
Any plans to upload a version that works with https://github.com/0cc4m/GPTQ-for-LLaMa? I noticed most of the recent GPTQ ones you've uploaded don't load in that, which I believe is the only way to use quantized models with KoboldAI at this time. I suspect your GPTQ models were quantized with too new a version of GPTQ. What I've noticed is that if they're quantized with that 0cc4m GPTQ they will work in the latest Ooba, but not vice versa.
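One quick way to see which quantizer wrote a checkpoint is to inspect its tensor names; a hedged sketch follows, where the filename is a placeholder and the key names reflect common GPTQ-for-LLaMa conventions rather than a guaranteed spec.

```python
# Inspect a GPTQ checkpoint to guess its format: older "v1" layers store
# plain `zeros`, while newer quantizers write `qzeros` and (with act-order)
# `g_idx` tensors that older loaders may not accept. Filename is a placeholder.
import torch

sd = torch.load("wizardlm-7b-4bit.pt", map_location="cpu")
print("new-style qzeros:", any(k.endswith(".qzeros") for k in sd))
print("act-order g_idx: ", any(k.endswith(".g_idx") for k in sd))
print("old-style zeros: ", any(k.endswith(".zeros") for k in sd))
```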
- Running LLaMa-7B-4bit?
```
cd repos
git clone https://github.com/0cc4m/GPTQ-for-LLaMa -b gptneox
cd GPTQ-for-LLaMa
python setup_cuda.py install
cd ..
cd ..
```
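Once the build finishes, a quick import confirms the kernel is on the path; this assumes the extension is named `quant_cuda`, as in that fork's setup_cuda.py.

```python
# Sanity check that the CUDA extension built by setup_cuda.py is importable.
# The module name `quant_cuda` matches that fork's setup script (assumption).
import quant_cuda
print("quant_cuda loaded from:", quant_cuda.__file__)
```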
- Stability AI Launches the First of its StableLM Suite of Language Models — Stability AI
I would try with 0cc4m's fork. https://github.com/0cc4m/GPTQ-for-LLaMa
- Alpaca 13B 4bit - load_quant() takes 3 positional arguments but 4 were given
- just curious, how do people on this sub run pyg?
- Any possibility to make Pygmalion 6B run in 4bit?
Now, where do I put the "GPTQ-for-LLaMa" folder?
- Anyone already running LLaMA in KoboldAI?
1) Download + unzip https://github.com/0cc4m/KoboldAI/tree/4bit
2) Download + extract all files from this repo into the KoboldAI-4bit/repos folder: https://github.com/0cc4m/GPTQ-for-LLaMa/tree/gptneox
3) Run install_requirements.bat as administrator
4) When asked, type 1 and hit enter
5) Unzip llama-7b-hf and/or llama-13b-hf into the KoboldAI-4bit/models folder
6) Run play.sh as usual to start the Kobold interface
7) You can now select the 4bit models in the webui via "AI > Load a model from its directory"
What are some alternatives?
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
pygmalion.cpp - C/C++ implementation of PygmalionAI/pygmalion-6b
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
llama-cpp-python - Python bindings for llama.cpp
KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!
KoboldAI-Horde-Bridge - Turns KoboldAI into a crowdsourced distributed cluster
KoboldAI-Client