GPTQ-for-LLaMa vs SillyTavern
| | GPTQ-for-LLaMa | SillyTavern |
|---|---|---|
| Mentions | 19 | 76 |
| Stars | 129 | 5,930 |
| Growth | - | 8.5% |
| Activity | 7.7 | 10.0 |
| Latest commit | 11 months ago | 3 days ago |
| Language | Python | JavaScript |
| License | - | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa
- I have tried various different methods to install, and none work. Can you spoon-feed me how?
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
- Query output random text
If you're using the model directly from ehartford, that one hasn't been quantized. Try using the GPTQ quantized version here, and use this fork of GPTQ-for-LLaMa. Load in 4-bit with --wbits 4
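The loading step suggested above amounts to passing the quantization flag to text-generation-webui's launcher. A minimal sketch, assuming text-generation-webui is already installed; `YOUR-GPTQ-MODEL` is a placeholder for the quantized model's folder name, not an actual repository:

```shell
# From the text-generation-webui root: load a 4-bit GPTQ model.
# YOUR-GPTQ-MODEL is a placeholder for the model directory under models/.
# --wbits 4 matches the advice in the post above.
python server.py --model YOUR-GPTQ-MODEL --wbits 4
```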
- Help needed with installing quant_cuda for the WebUI
This worked for me on Ubuntu. If you want to use the CUDA branch instead of Triton, do the same steps except clone this GPTQ-for-LLaMa fork and run python setup_cuda.py install.
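A sketch of those steps, assuming text-generation-webui is already set up and the CUDA toolkit plus a matching PyTorch build are installed (the clone target is the fork the post refers to):

```shell
# From the text-generation-webui root: place the fork where the UI looks for it.
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
# Build and install the quant_cuda kernel into the current Python environment.
python setup_cuda.py install
```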
- AutoGPTQ vs GPTQ-for-llama?
If you don't have Triton and you use AutoGPTQ, you're going to notice a huge slowdown compared to the old GPTQ-for-LLaMa CUDA branch. For me, AutoGPTQ gives a whopping 1 token per second compared to the old GPTQ, which gives a decent 9 tokens per second; both times I used the same-sized model. (I think the slowdown is due to AutoGPTQ using the newer CUDA branch, which is much slower than the old one.)
- Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
Are you using a later version of GPTQ-for-LLaMa? If so, go to ooba's CUDA fork (https://github.com/oobabooga/GPTQ-for-LLaMa). That's what I made it in and it definitely works with that. And that's what's included in the one-click-installers.
- Any idea Vicuna 13B 4bit model output random content?
This usually happens when using models that conflict with your GPTQ installation. You should be using this fork: https://github.com/oobabooga/GPTQ-for-LLaMa. If you did the manual installation wrong, use the one click installer instead.
- GPT4All: A little helper to get started
cd text-generation-webui  # wherever you have it installed
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
- wizard-vicuna-13B · Hugging Face
- Anyone actually running 30b/65b at reasonably high speed? What's your rig?
My GPTQ-for-LLaMa folder under repositories says it's pointed at https://github.com/oobabooga/GPTQ-for-LLaMa.git. But I've run through the instructions and also applied the monkey patch to train and apply a 4-bit LoRA, which may come into play. No idea.
- Trying to run TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g with latest GPTQ-for-LLaMa CUDA branch
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
SillyTavern
- Claude 3 beats GPT-4 on Aider's code editing benchmark | aider
Right, but it's certainly easier for people who might not even know what "API" stands for, and that's quite nifty. As far as self-hosted frontends go, I can personally recommend SillyTavern[1] in the browser, ChatterUI[2] on mobile, and ShellGPT[3] for CLI. LobeChat looks pretty cool, though! I'll definitely check it out.
[1] https://github.com/SillyTavern/SillyTavern
[2] https://github.com/Vali-98/ChatterUI
[3] https://github.com/TheR1D/shell_gpt
- FLaNK AI for 11 March 2024
- Show HN: I made an app to use local AI as daily driver
- Group chats vs online defined characters, token efficiency question
I don't think there is any enumeration for {{char}} macros. Here is some good discussion on the subject.
- SillyTavern 1.11.0 has been released
- Is possible to run local voice chat agent? If yes what GPU do i Need with 500€ budget?
As for SillyTavern, you need the main SillyTavern frontend and SillyTavern-extras (for TTS, STT, etc.). They're pretty easy to install. SillyTavern connects to oobabooga and SillyTavern-extras via API.
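Roughly, the setup described above looks like the following sketch. The module names passed to --enable-modules are illustrative examples; pick the ones matching the extras you actually need:

```shell
# Main SillyTavern frontend (Node.js required); start.sh launches the web UI.
git clone https://github.com/SillyTavern/SillyTavern
cd SillyTavern && ./start.sh &
cd ..

# SillyTavern-extras exposes an API the frontend connects to for TTS, STT, etc.
git clone https://github.com/SillyTavern/SillyTavern-extras
cd SillyTavern-extras
pip install -r requirements.txt
# Example module list -- adjust to the extensions you want enabled.
python server.py --enable-modules=summarize,classify
```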
- What do you use to run your models?
Finally, no matter what backend I use, I need it to be compatible with my power-user frontend, SillyTavern. That way I always use the same UI, with the characters I created and the extensions I want, e.g. web search, XTTS text-to-speech, and Whisper speech recognition for real-time voice chat - and all of that locally!
- SillyTavern 1.10.10 has been released
- LM Studio - Discover, download, and run local LLMs
- 🐺🐦‍⬛ LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)
SillyTavern v1.10.5 frontend (not the latest as I don't want to upgrade mid-test)
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
TavernAI - TavernAI for nerds [Moved to: https://github.com/Cohee1207/SillyTavern]
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
character-editor - Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
SillyTavern-extras - Extensions API for SillyTavern [Moved to: https://github.com/SillyTavern/SillyTavern-extras]
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
SillyTavern-Extras - Extensions API for SillyTavern.