GPTQ-for-LLaMa vs Open-Assistant

| | GPTQ-for-LLaMa | Open-Assistant |
|---|---|---|
| Mentions | 10 | 329 |
| Stars | 45 | 36,749 |
| Growth | - | 0.3% |
| Activity | 8.2 | 7.4 |
| Latest commit | 11 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa
-
How I made the pyg-charluv-13B model
- Train your dataset.
- Add your newly created LoRA to the model on the model tab.
- Go back to the training tab's perplexity evaluation and test whether the perplexity is no more than about 0.3 higher than before.
- If it is, lower the training values (epochs etc.) and try again (first reload the model without the LoRA). If the perplexity is exactly the same, you can increase the values; it is trial and error. The lower the perplexity, the better.
- When you have a good LoRA that does not mess up the perplexity, you are done in Textgen and can upload your LoRA to Hugging Face.
- Merge using this gist: https://gist.github.com/rondlite/c61a9eeb2904490abbc82ab6986cd5d0 (install the repo from the next step first and also `pip install peft`). Edit the gist so it has the right filenames for your project (a sketch of the merge step is shown after this list).
- Quantize to 4 bits using https://github.com/0cc4m/GPTQ-for-LLaMa. You need to change the command to `CUDA_VISIBLE_DEVICES=0 python -m gptq.llama ./llama-hf/llama-7b c4 --wbits 4 --true-sequential --groupsize 128 --save llama7b-4bit.pt`, i.e. remove act-order and add groupsize (act-order and groupsize don't work together), and replace llama-hf/llama-7b with the name of your merged model. It is important to use the GPTQ-for-LLaMa from that GitHub repo, since that is v1 and is the only one that works fast with Kobold.
- Test your end result for perplexity once more in Textgen.
- In my case I had to do the entire process a few times over to finally get 5.109375 (which is on par with the original 13B model).
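The gist linked above handles the merge step; its exact contents aren't reproduced here, but the usual peft merge flow looks roughly like the sketch below. The paths are placeholders, and `merge_and_unload()` is the standard peft call for baking a LoRA into the base weights before handing the result to GPTQ-for-LLaMa.

```python
# Minimal sketch of the LoRA merge step (not the gist itself); all paths are placeholders.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "./pygmalion-13b-hf"       # hypothetical path to the base HF model
lora_path = "./my-charluv-lora"        # hypothetical path to the trained LoRA
out_path = "./pyg-charluv-13b-merged"  # where the merged model will be written

# Load the base model, attach the LoRA, and bake the adapter weights in.
base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype="auto")
merged = PeftModel.from_pretrained(base, lora_path)
merged = merged.merge_and_unload()

# Save the merged model plus tokenizer so the quantization step can pick it up.
merged.save_pretrained(out_path)
AutoTokenizer.from_pretrained(base_path).save_pretrained(out_path)
```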
-
👩🏻‍💻 LLM mixes are here: use Uncensored WizardLM + MPT-7B StoryWriter
Has anyone figured out how to quantize MPT models? Someone already did it for one of them (https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g), but I tried using this GitHub repository and couldn't get it to work.
-
WizardLM-7B-Uncensored
Any plans to upload a version that works with https://github.com/0cc4m/GPTQ-for-LLaMa? I noticed most of the recent GPTQ models you've uploaded don't load in it, and I believe it is the only way to use quantized models with KoboldAI at this time. I suspect your GPTQ models were quantized with too new a version of GPTQ. What I've noticed is that models quantized with that 0cc4m GPTQ work in the latest Ooba, but not vice versa.
-
Running LLaMa-7B-4bit?
cd repos
git clone https://github.com/0cc4m/GPTQ-for-LLaMa -b gptneox
cd GPTQ-for-LLaMa
python setup_cuda.py install
cd ..
cd ..
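After `python setup_cuda.py install` finishes, a quick way to confirm the kernel built is to import it from Python. This sketch assumes the extension module is named `quant_cuda`, as in the upstream GPTQ-for-LLaMa setup script; adjust the import if this branch names it differently.

```python
# Sanity check that the CUDA extension built by setup_cuda.py is importable.
# Assumes the module is named quant_cuda (upstream default); this is an assumption.
import torch       # the extension links against the installed torch build
import quant_cuda

print("quant_cuda loaded from:", quant_cuda.__file__)
print("CUDA available:", torch.cuda.is_available())
```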
-
Stability AI Launches the First of its StableLM Suite of Language Models — Stability AI
I would try with 0cc4m's fork. https://github.com/0cc4m/GPTQ-for-LLaMa
- Alpaca 13B 4bit - load_quant() takes 3 positional arguments but 4 were given
- just curious, how do people on this sub run pyg?
-
Any possibility to make Pygmalion 6B run in 4bit?
Now, where do I put the "GPTQ-for-LLaMa" folder?
-
Anyone already running LLaMA in KoboldAI?
1) Download and unzip https://github.com/0cc4m/KoboldAI/tree/4bit
2) Download and extract all files from this repo into the KoboldAI-4bit/repos folder: https://github.com/0cc4m/GPTQ-for-LLaMa/tree/gptneox
3) Run install_requirements.bat as administrator.
4) When asked, type 1 and hit enter.
5) Unzip llama-7b-hf and/or llama-13b-hf into the KoboldAI-4bit/models folder.
6) Run play.sh as usual to start the Kobold interface.
7) You can now select the 8bit models in the webui via "AI > Load a model from its directory".
Open-Assistant
-
Best open source AI chatbot alternative?
For Open Assistant, the code is here: https://github.com/LAION-AI/Open-Assistant/tree/main/inference
-
GPT-4 Turbo for free with no sign up, and most importantly no Bing
Is this being used to collect chat results for synthetic data and/or training, like https://github.com/LAION-AI/Open-Assistant did? I believe they gave away GPT-4 API calls via a text interface and absorbed the cost to later build a dataset of chats.
-
OpenAI now sends email threats?!
https://open-assistant.io seems to have the same guardrails as ChatGPT. I tried it on several prompts and it wouldn't comply.
- Rating ChatGPT answers with school grades
-
Chat GPT Alternatives?
Open-Assistant [https://open-assistant.io/]
-
What are the best AI tools you've ACTUALLY used?
Open Assistant by LAION AI on GitHub
-
Keep Artificial Intelligence Free, protect it from monopolies: please sign this petition
To add to this: if you want something free, or at least close to free, contribute to open-source projects like https://open-assistant.io/
-
If I had to get someone from total zero to ChatGPT power user
Also, there are fairly useful alternatives like GPT4ALL and Open Assistant that you can run locally.
-
Compiling a Comprehensive List of Publicly Usable LLM Q&A Services - Need Your Input!
https://open-assistant.io - oasst-sft-6-llama-30b
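The oasst-sft-6-llama-30b model mentioned above can also be run locally with Transformers. A minimal sketch, assuming a locally available copy of the weights (the path below is a placeholder; the LLaMA-based Open Assistant releases were not distributed as directly loadable weights) and the <|prompter|>/<|assistant|> turn format used by the Open Assistant SFT models:

```python
# Rough sketch of prompting an Open Assistant SFT model; model_path is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./oasst-sft-6-llama-30b"  # hypothetical local path to the weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")

# One prompter turn followed by the assistant tag; the EOS token separates turns.
prompt = "<|prompter|>What is Open Assistant?" + tokenizer.eos_token + "<|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```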
- Proposal for a Crowd-Sourced AI Feedback System
What are some alternatives?
KoboldAI
KoboldAI-Client
pygmalion.cpp - C/C++ implementation of PygmalionAI/pygmalion-6b
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama.cpp - LLM inference in C/C++
llama - Inference code for Llama models
gpt4all - gpt4all: run open-source LLMs anywhere
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
llama_index - LlamaIndex is a data framework for your LLM applications
Home Assistant - Open source home automation that puts local control and privacy first.
StableLM - StableLM: Stability AI Language Models
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型