GPTQ-for-LLaMa VS SillyTavern

Compare GPTQ-for-LLaMa vs SillyTavern and see what their differences are.

GPTQ-for-LLaMa

4-bit quantization of LLaMA using GPTQ (by oobabooga)

SillyTavern

LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern] (by Cohee1207)
                 GPTQ-for-LLaMa    SillyTavern
Mentions         19                75
Stars            129               677
Growth           -                 -
Activity         7.7               10.0
Latest commit    11 months ago     11 months ago
Language         Python            JavaScript
License          -                 GNU Affero General Public License v3.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

GPTQ-for-LLaMa

Posts with mentions or reviews of GPTQ-for-LLaMa. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-11.
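
For context, the project is used as a command-line tool: you point it at a LLaMA checkpoint, quantize the weights to 4 bits, and then load the saved quantized file for inference. The sketch below illustrates that flow; the llama.py and llama_inference.py entry points, the c4 calibration-set argument, and the --wbits/--groupsize/--save/--load flags are assumptions based on the repo's typical usage, so check its README for the exact invocation. Paths are placeholders.

    # Quantize a LLaMA checkpoint to 4 bits with group size 128 (flags assumed; see the repo README)
    CUDA_VISIBLE_DEVICES=0 python llama.py /path/to/llama-7b-hf c4 --wbits 4 --groupsize 128 --save llama7b-4bit-128g.pt
    # Generate text from the saved 4-bit weights (entry point assumed)
    CUDA_VISIBLE_DEVICES=0 python llama_inference.py /path/to/llama-7b-hf --wbits 4 --groupsize 128 --load llama7b-4bit-128g.pt --text "Hello"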

SillyTavern

Posts with mentions or reviews of SillyTavern. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-03.
  • Help😢
    1 project | /r/SillyTavernAI | 3 Jul 2023
    Go to Termux and click Exit. Then reopen Termux and run: 1. apt update 2. apt upgrade 3. git clone https://github.com/Cohee1207/SillyTavern 4. cd SillyTavern 5. pkg install nodejs 6. npm install 7. node server.js (a consolidated sketch of these steps appears after this list)
  • Oogabooga and llama.cpp in longer conversations answers take forever.....
    5 projects | /r/LocalLLaMA | 3 Jul 2023
    If you want the best roleplaying experience, I can only recommend SillyTavern with SillyTavern/SillyTavern-extras. The extras include summarization and ChromaDB, both helping to get longer and more coherent chats.
  • koboldcpp-1.33 Ultimate Edition released!
    4 projects | /r/LocalLLaMA | 29 Jun 2023
    Really? Then we definitely have different experiences (or different ways to interact) with Guanaco. It's been the most unrestricted model I've tried, and I tried them all, but I'm using SillyTavern and the simple-proxy-for-tavern which combined with a little prompting liberates basically any model.
  • The best 13B model for roleplay?
    2 projects | /r/LocalLLaMA | 28 Jun 2023
    Why reinvent the wheel? Just use SillyTavern, ideally with the simple-proxy-for-tavern. That does it all, and more.
  • airoboros gpt4 v1.2
    3 projects | /r/LocalLLaMA | 16 Jun 2023
    I tested this today in an hours-long direct roleplay comparison between q3_K_M quants of TheBloke/airoboros-65B-gpt4-1.2-GGML and TheBloke/guanaco-65B-GGML, using koboldcpp as backend together with simple-proxy-for-tavern and SillyTavern as frontend.
  • What are you using for RP?
    3 projects | /r/LocalLLaMA | 14 Jun 2023
    I'm using SillyTavern frontend and simple-proxy-for-tavern with koboldcpp backend.
  • KoboldCPP Updated to Support K-Quants, new bonus CUDA build.
    4 projects | /r/LocalLLaMA | 13 Jun 2023
    I'm using SillyTavern frontend and simple-proxy-for-tavern with koboldcpp. Not sure which of these has solved the prompt-reprocessing problem, but I no longer have these slowdowns.
  • What are your favorite LLMs?
    4 projects | /r/LocalLLaMA | 8 Jun 2023
    WizardLM 30B V1.0 is not only smarter and follows instructions better than the others, it's even uncensored when used with an uncensoring character card (I use SillyTavern as my GUI/frontend) - more so than any other model I tested. Probably because it follows instructions so well, thus roleplaying an uncensored character properly (and not breaking character or going "as an AI" even once during my tests).
  • Potato's brain guide to installing and reopening SillyTavern for Mac
    1 project | /r/VenusAI_Official | 6 Jun 2023
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
    export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
    nvm install node
    git clone -b dev https://github.com/Cohee1207/SillyTavern && cd SillyTavern
    npm i && node server.js
  • I've found a solution to Poe API error
    1 project | /r/SillyTavernAI | 3 Jun 2023
    For Android (Termux users): 1. apt update 2. apt upgrade 3. Type "y" to everything and hit enter 4. pkg install git 5. git clone -b dev https://github.com/Cohee1207/SillyTavern 6. cd SillyTavern 7. pkg install nodejs 8. npm install 9. bash start.sh
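
The setup steps quoted above all follow the same pattern: install git and Node.js, clone the repository, install the JavaScript dependencies, and start the server. A consolidated sketch for Termux on Android is below; it assumes the repository's new location at https://github.com/SillyTavern/SillyTavern, since the Cohee1207 URL in the original posts has since moved.

    # Consolidated from the posts above (Termux on Android)
    apt update && apt upgrade        # answer "y" to the prompts
    pkg install git nodejs           # git to fetch the repo, Node.js to run the server
    git clone https://github.com/SillyTavern/SillyTavern
    cd SillyTavern
    npm install                      # install dependencies from package.json
    bash start.sh                    # or: node server.js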

What are some alternatives?

When comparing GPTQ-for-LLaMa and SillyTavern, you can also consider the following projects:

exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

TavernAI - TavernAI for nerds [Moved to: https://github.com/Cohee1207/SillyTavern]

langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.

GPTQ-for-LLaMa - 4-bit quantization of LLaMA using GPTQ

character-editor - Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI

one-click-installers - Simplified installers for oobabooga/text-generation-webui.

simple-proxy-for-tavern

private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks

ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.