TavernAI vs rwkv.cpp

| | TavernAI | rwkv.cpp |
|---|---|---|
| Mentions | 17 | 12 |
| Stars | 76 | 1,111 |
| Growth | - | 2.6% |
| Activity | 10.0 | 6.8 |
| Latest commit | about 1 year ago | 29 days ago |
| Language | JavaScript | C++ |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TavernAI
- Trying to download SillyTavern on Termux: Username for 'https://github.com':
I keep getting this prompt after I enter "git clone https://github.com/SillyLossy/TavernAI", what am I supposed to do..?
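Git falls back to prompting for a username whenever the clone URL does not resolve to a public repository, so this prompt usually means the URL is mistyped or no longer exists. A minimal sketch for sanity-checking the URL first (the clone line is commented out so the snippet is safe to run as-is):

```shell
# Git asks "Username for 'https://github.com'" when the URL does not resolve
# to a public repository, so it assumes the repo is private and wants credentials.
# Double-check the URL for typos before entering anything at the prompt.
REPO_URL="https://github.com/SillyLossy/TavernAI"
echo "About to clone: $REPO_URL"
# git clone "$REPO_URL"   # uncomment once the URL is confirmed to exist
```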
- How to get and use the Poe API/p-b cookie for SillyTavern through Kiwi Browser, Another 10 Step Android Guide
- Dark future of AI generated girls
It's already the case with games like Silly Tavern https://github.com/SillyLossy/TavernAI and AI girls that are created on https://www.characterhub.org for AI personalities. Basically, 4chan people use open-source AI models like Meta's LLaMA (run via llama.cpp) to create horny personalities to interact with, and use local models with SD to create big-boobed waifus.
- Spent a whole afternoon playing AI roleplay, step by step guilt-tripping an innocent girl into sleeping with you
- SillyTavern Android, 10 Quick Step Guide.
1) Download Termux: scroll down, look for the APK download, and install it.
2) Type in (or copy and paste) and hit enter: apt update. Press y for anything that comes up.
3) Type in (or copy and paste) and hit enter: apt upgrade. Press y for anything that comes up.
4) Type in (or copy and paste) and hit enter: pkg install git
5) Type in (or copy and paste) and hit enter: git clone https://github.com/SillyLossy/TavernAI
6) Optional: to get the most updated stuff, clone the dev branch; if you just want it basic, skip this step. Type in (or copy and paste) and hit enter: git clone -b dev https://github.com/SillyLossy/TavernAI
7) Type in (or copy and paste) and hit enter: cd SillyTavern. The caps and lowercase HAVE to be exactly this way. If you get a green /SillyTavern you did it correctly.
8) Type in (or copy and paste) and hit enter: pkg install nodejs
9) Type in (or copy and paste) and hit enter: npm install
10) Type in (or copy and paste) and hit enter: node server.js. It should open your browser. You have correctly installed SillyTavern onto Termux. Under the plug you can click this link https://platform.openai.com/account/api-keys, sign in, and copy-paste the API key.
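The ten steps above boil down to a handful of commands. A consolidated sketch follows, wrapped in a function so nothing executes until you call it; the repo URL and the SillyTavern directory name are taken from the guide as written:

```shell
# Consolidated version of the ten Termux steps above.
# Defined as a function so sourcing this file runs nothing;
# call install_sillytavern to actually execute the steps.
install_sillytavern() {
  apt update && apt upgrade -y                        # steps 2-3: refresh and upgrade packages
  pkg install -y git nodejs                           # steps 4 and 8: git and Node.js
  git clone https://github.com/SillyLossy/TavernAI    # step 5 (add -b dev for the dev branch)
  cd SillyTavern || return 1                          # step 7: directory name as given in the guide
  npm install                                         # step 9: install dependencies
  node server.js                                      # step 10: start the server
}
```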
- Comparing models: GPT4xAlpaca, Vicuna, and OASST
But if you're really serious about chatting, the best experience is definitely with TavernAI. It's just a frontend so you still run the AI using oobabooga's textgen or one of the *cpp engines, but because it's entirely focused on chatting, its chat capabilities are much more advanced.
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
Have you tried to talk to both at the same time? With TavernAI group chats are actually possible. The current version isn't compatible with koboldcpp, but the dev version has a fix, and I'm just getting started playing around with it.
- What.
That looks like Cohee's TavernAI fork! https://github.com/SillyLossy/TavernAI
- Creating characters for TavernAI
- Jesus fucking christ what happened here?
rwkv.cpp
- Eagle 7B: Soaring past Transformers
There's https://github.com/saharNooby/rwkv.cpp, which is related-ish[0] to ggml/llama.cpp
[0]: https://github.com/ggerganov/llama.cpp/issues/846
- People who've used RWKV, whats your wishlist for it?
- The Eleuther AI Mafia
Quantisation, thankfully, is as applicable to RWKV as it is to transformers, most notably in our RWKV.cpp community project: https://github.com/saharNooby/rwkv.cpp
Tooling/ecosystem is something I am actively working on, as there is still a gap to the transformers level of tooling. But I'm glad that there is a noticeable difference!
And yes, experiments are important to ensure improvements in the architecture, even if "Linear Transformers" end up replacing "Transformers". Alternatives should always be explored, to learn from such trade-offs for the benefit of the ecosystem.
(This was lightly covered in the podcast, where I share my view that we should have more research into text-based diffusion networks.)
- Tiny models for contextually coherent conversations?
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
Q8_0 models: only for https://github.com/saharNooby/rwkv.cpp (fast CPU).
- [R] RWKV: Reinventing RNNs for the Transformer Era
- 4096 Context length (and beyond)
There's https://github.com/saharNooby/rwkv.cpp which seems to work, and might be compatible with text-generation-webui.
- The Coming of Local LLMs
Also worth checking out https://github.com/saharNooby/rwkv.cpp which is based on Georgi's library and offers support for the RWKV family of models which are Apache-2.0 licensed.
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14b can work for me. I wasn't sure how ready for use they were, but looking more into it, stuff like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
- rwkv.cpp: FP16 & INT4 inference on CPU for RWKV language model (r/MachineLearning)
What are some alternatives?
SillyTavern - LLM Frontend for Power Users.
llama.cpp - LLM inference in C/C++
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
TavernAI-extras - Extensions API for SillyTavern [Moved to: https://github.com/Cohee1207/SillyTavern-extras]
mpt-30B-inference - Run inference on MPT-30B using CPU
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
verbaflow - Neural Language Model for Go
Spermack
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM