| | SillyTavern | FastChat |
|---|---|---|
| Mentions | 76 | 83 |
| Stars | 5,930 | 34,277 |
| Growth | 8.5% | 3.6% |
| Activity | 10.0 | 9.6 |
| Latest commit | 5 days ago | 1 day ago |
| Language | JavaScript | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SillyTavern
- Claude 3 beats GPT-4 on Aider's code editing benchmark | aider
Right, but it's certainly easier for people who might not even know what "API" stands for, and that's quite nifty. As far as self-hosted frontends go, I can personally recommend SillyTavern[1] in the browser, ChatterUI[2] on mobile, and ShellGPT[3] for CLI. LobeChat looks pretty cool, though! I'll definitely check it out.
[1] https://github.com/SillyTavern/SillyTavern
[2] https://github.com/Vali-98/ChatterUI
[3] https://github.com/TheR1D/shell_gpt
- FLaNK AI for 11 March 2024
- Show HN: I made an app to use local AI as daily driver
- Group chats vs online defined characters, token efficiency question
I don't think there is any enumeration for {{char}} macros. Here is some good discussion on the subject.
- SillyTavern 1.11.0 has been released
- Is possible to run local voice chat agent? If yes what GPU do i Need with 500€ budget?
As for SillyTavern, you need the main SillyTavern frontend and SillyTavern-extras (for TTS, STT, etc.). They're pretty easy to install. SillyTavern connects to oobabooga and SillyTavern-extras via API.
- What do you use to run your models?
Finally, no matter what backend I use, I need it to be compatible with my power-user frontend, SillyTavern. That way I always use the same UI, with the characters I created and the extensions I want, e.g. web search, XTTS text-to-speech, and Whisper speech recognition for real-time voice chat - and all of that runs locally!
- SillyTavern 1.10.10 has been released
- LM Studio - Discover, download, and run local LLMs
- 🐺🐦‍⬛ LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)
SillyTavern v1.10.5 frontend (not the latest as I don't want to upgrade mid-test)
FastChat
- GPT4.5 or GPT5 being tested on LMSYS?
gpt2-chatbot isn't the only "mystery model" on LMSYS. Another is "deluxe-chat".
When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"
One distinguishing feature of "deluxe-chat": although it gives high-quality answers, it is very slow - so slow that the arena displays a warning whenever it is invoked.
[0] https://github.com/lm-sys/FastChat/issues/2527
- LLMs on your local Computer (Part 1)
FastChat
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
- LM Studio - Discover, download, and run local LLMs
How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
The feature sets seem to have a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that you are limited to the models that FastChat supports (though I think it would be a minor change to modify it to support arbitrary models?)
- Video-LLaVA
Looks like the Vicuna repo is Apache 2.0 also[1].
What's the interpretation of copyright law that would prevent the code being Apache 2.0 based on the source of the fine-tuning dataset?
[1] https://github.com/lm-sys/FastChat
- 🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
Check out how to get started with FastChat. Support FastChat on GitHub.
- Show HN: ChatAPI - PWA to Use ChatGPT by API Build with Alpine.js
For something a little heavier but much more robust in terms of features/functionality I've been enjoying FastChat: https://github.com/lm-sys/FastChat
It allows you to plug in different backends so that you can use OpenAI-compatible clients with various LLMs, self-hosted or otherwise.
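The OpenAI-compatible surface the comment describes is FastChat's `fastchat.serve.openai_api_server`, which serves the standard `/v1/chat/completions` endpoint. A minimal sketch of talking to it with only the standard library - the host, port, and model name below are illustrative assumptions, and you'd need the server already running to actually send the request:

```python
import json
from urllib import request

# Assumed local endpoint of FastChat's OpenAI-compatible server (illustrative).
BASE_URL = "http://localhost:8000/v1"

def chat_request(model: str, prompt: str) -> request.Request:
    """Build a chat-completion request in the format OpenAI-compatible servers accept."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("vicuna-7b-v1.5", "Hello!")

# Sending it requires a running server, e.g.:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format is the same one OpenAI uses, the official `openai` client (or any other compatible client) also works by pointing its base URL at the FastChat server.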
What are some alternatives?
TavernAI - TavernAI for nerds [Moved to: https://github.com/Cohee1207/SillyTavern]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
character-editor - Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI
llama.cpp - LLM inference in C/C++
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
gpt4all - gpt4all: run open-source LLMs anywhere
SillyTavern-extras - Extensions API for SillyTavern [Moved to: https://github.com/SillyTavern/SillyTavern-extras]
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Generates text, audio, video, and images, with voice-cloning capabilities.
SillyTavern-Extras - Extensions API for SillyTavern.
llama-cpp-python - Python bindings for llama.cpp