| | SillyTavern | mlc-llm |
|---|---|---|
| Mentions | 75 | 89 |
| Stars | 677 | 17,150 |
| Growth | - | 4.3% |
| Activity | 10.0 | 9.9 |
| Latest commit | 12 months ago | 6 days ago |
| Language | JavaScript | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
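The site does not publish its exact formula, but a recency-weighted commit score like the one described above can be sketched as follows (the half-life parameter and the function itself are illustrative assumptions, not the tracker's real implementation):

```python
# Hypothetical sketch of a recency-weighted activity score: recent commits
# contribute more than older ones, decaying exponentially with age.
def activity_score(commit_ages_days, half_life_days=30):
    """Sum per-commit weights; a commit loses half its weight every half-life."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

recent = activity_score([1, 2, 3, 5])          # four fresh commits
stale = activity_score([300, 320, 340, 360])   # four commits ~1 year old
assert recent > stale  # the same commit count scores higher when recent
```

A project with many commits long ago can therefore score below a project with fewer but more recent commits, which matches the "recent commits have higher weight" description.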
SillyTavern
-
Help😢
Exit Termux, then reopen it and run:
1. apt update
2. apt upgrade
3. git clone https://github.com/Cohee1207/SillyTavern
4. cd SillyTavern
5. pkg install nodejs
6. npm install
7. node server.js
-
Oobabooga and llama.cpp: in longer conversations, answers take forever...
If you want the best roleplaying experience, I can only recommend SillyTavern with SillyTavern/SillyTavern-extras. The extras include summarization and ChromaDB, both helping to get longer and more coherent chats.
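The ChromaDB extra mentioned above works by embedding past chat lines and retrieving the most relevant ones when the context window fills up. A heavily simplified, self-contained sketch of that retrieval idea (SillyTavern-extras uses learned embeddings; the bag-of-words similarity here is purely illustrative):

```python
# Toy sketch of vector-store chat memory: store old messages, then pull
# back the most similar ones for the current query.
from collections import Counter
import math

def embed(text):
    # Bag-of-words "embedding" for illustration only.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(memory, query, k=1):
    # Return the k stored messages most similar to the query.
    q = embed(query)
    return sorted(memory, key=lambda m: cosine(embed(m), q), reverse=True)[:k]

memory = [
    "The knight swore an oath at the northern gate.",
    "We planned dinner for Friday evening.",
    "The dragon's lair lies beyond the frozen river.",
]
print(recall(memory, "where is the dragon's lair?"))
```

Injecting the recalled lines back into the prompt is what lets chats stay coherent past the model's native context length.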
-
koboldcpp-1.33 Ultimate Edition released!
Really? Then we definitely have different experiences (or different ways to interact) with Guanaco. It's been the most unrestricted model I've tried, and I tried them all, but I'm using SillyTavern and the simple-proxy-for-tavern which combined with a little prompting liberates basically any model.
-
The best 13B model for roleplay?
Why reinvent the wheel? Just use SillyTavern, ideally with the simple-proxy-for-tavern. That does it all, and more.
-
airoboros gpt4 v1.2
I tested this today in an hours-long direct roleplay comparison between q3_K_M quants of TheBloke/airoboros-65B-gpt4-1.2-GGML and TheBloke/guanaco-65B-GGML, using koboldcpp as backend together with simple-proxy-for-tavern and SillyTavern as frontend.
-
What are you using for RP?
I'm using SillyTavern frontend and simple-proxy-for-tavern with koboldcpp backend.
-
KoboldCPP Updated to Support K-Quants, new bonus CUDA build.
I'm using SillyTavern frontend and simple-proxy-for-tavern with koboldcpp. Not sure which of these has solved the prompt-reprocessing problem, but I no longer have these slowdowns.
-
What are your favorite LLMs?
WizardLM 30B V1.0 is not only smarter and follows instructions better than the others, it's even uncensored when used with an uncensoring character card (I use SillyTavern as my GUI/frontend) - more so than any other model I tested. Probably because it follows instructions so well, thus roleplaying an uncensored character properly (and not breaking character or going "as an AI" even once during my tests).
-
Potato's brain guide to installing and reopening SillyTavern for Mac
```shell
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
nvm install node
git clone -b dev https://github.com/Cohee1207/SillyTavern && cd SillyTavern
npm i && node server.js
```
-
I've found a solution to Poe API error
For Android (Termux users):
1. apt update
2. apt upgrade
3. Type "y" to everything and hit enter
4. pkg install git
5. git clone -b dev https://github.com/Cohee1207/SillyTavern
6. cd SillyTavern
7. pkg install nodejs
8. npm install
9. bash start.sh
mlc-llm
- FLaNK 04 March 2024
-
Ai on a android phone?
This one uses gpu, it doesn't support Mistral yet: https://github.com/mlc-ai/mlc-llm
-
MLC vs llama.cpp
I have tried running Mistral 7B with MLC on my M1 (Metal), and it kept crashing (GitHub issue filed with a description) due to memory-inefficiency problems.
-
[Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget
Project: https://github.com/mlc-ai/mlc-llm
- Scaling LLama2-70B with Multi Nvidia/AMD GPU
-
AMD May Get Across the CUDA Moat
For LLM inference, a shoutout to MLC LLM, which runs LLM models on basically any API that's widely available: https://github.com/mlc-ai/mlc-llm
-
ROCm Is AMD's #1 Priority, Executive Says
One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which has a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...
The normal workaround is to assign the closest supported architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help.
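In shell form, that workaround amounts to the following (the commented program invocation is illustrative only; any ROCm-based binary launched from this shell inherits the override):

```shell
# ROCm ships no gfx1032 build, so tell the HSA runtime to treat the card
# as the closest supported architecture, gfx1030.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# Then launch the ROCm program from the same shell, e.g.:
# ./your-rocm-program --args
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```

Note this only maps to a nearby ISA; it is unofficial and can still misbehave if the architectures differ in ways the runtime cares about.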
Also, it looks like some of your tested projects are OpenCL? I do something like `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.
My recent interest has been LLMs, and this is my general step-by-step guide for those (llama.cpp, exllama), for anyone interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus
I didn't port the docs back in, but also here's a step-by-step w/ my adventures getting TVM/MLC working w/ an APU: https://github.com/mlc-ai/mlc-llm/issues/787
From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
-
Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
-
Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
You already have TVM for the cross-platform stuff:
see https://tvm.apache.org/docs/how_to/deploy/android.html
or https://octoml.ai/blog/using-swift-and-apache-tvm-to-develop...
or https://github.com/mlc-ai/mlc-llm
- Ask HN: Are you training and running custom LLMs and how are you doing it?
What are some alternatives?
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
llama.cpp - LLM inference in C/C++
TavernAI - TavernAI for nerds [Moved to: https://github.com/Cohee1207/SillyTavern]
ggml - Tensor library for machine learning
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
character-editor - Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
simple-proxy-for-tavern
llama-cpp-python - Python bindings for llama.cpp
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.