SillyTavern-Extras vs long_term_memory

| | SillyTavern-Extras | long_term_memory |
|---|---|---|
| Mentions | 14 | 12 |
| Stars | 485 | 302 |
| Growth (stars, month over month) | 6.0% | - |
| Activity | 9.6 | 9.3 |
| Latest commit | 20 days ago | 9 months ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SillyTavern-Extras
-
Is it possible to run a local voice chat agent? If yes, what GPU do I need with a 500€ budget?
As for SillyTavern, you need the main SillyTavern frontend and SillyTavern-extras (for TTS, STT, etc.). They're pretty easy to install. SillyTavern connects to oobabooga and SillyTavern-extras via API.
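The extras server is a separate HTTP service that the SillyTavern frontend talks to. A minimal sketch of how a client could ask it which modules are loaded, assuming the default port 5100 and the `/api/modules` endpoint described in the extras README (both details are assumptions here, so check your own setup):

```python
import json
import urllib.request
from urllib.error import URLError

def extras_url(endpoint, host="localhost", port=5100):
    """Build a URL for a SillyTavern-extras API endpoint (default port assumed)."""
    return f"http://{host}:{port}/api/{endpoint}"

def list_modules():
    """Ask a running extras server which modules (tts, classify, ...) are loaded."""
    try:
        with urllib.request.urlopen(extras_url("modules"), timeout=3) as resp:
            return json.load(resp).get("modules", [])
    except (URLError, OSError):
        return []  # server not running or unreachable

print(list_modules())
```

If the server is down, the function simply returns an empty list, which is handy when probing whether extras are available before enabling features in a client.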
- Image upload in ST
-
Poe Problem
Sure, your best bet is to follow the instructions on the ST extras' GitHub. It gives a good step-by-step guide to setting up and running all the add-ons. Here is a link in case you need it: https://github.com/SillyTavern/SillyTavern-extras
-
What is the best text web ui currently?
oobazz + SillyTavern
-
Oobabooga and llama.cpp: in longer conversations, answers take forever...
If you want the best roleplaying experience, I can only recommend SillyTavern with SillyTavern/SillyTavern-extras. The extras include summarization and ChromaDB, both helping to get longer and more coherent chats.
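ChromaDB gives the chat a searchable vector memory: old messages are embedded, and the ones most similar to the current query get recalled into the prompt. A toy, stdlib-only sketch of that retrieval idea (a real setup would use ChromaDB with proper sentence embeddings; the `embed` function here is a deliberately crude stand-in):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real setups use a sentence-transformer."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(memories, query, k=2):
    """Return the k stored chat snippets most similar to the query."""
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(embed(m), q), reverse=True)[:k]

memories = [
    "The knight swore an oath at the old bridge.",
    "We talked about brewing healing potions.",
    "The dragon sleeps beneath the northern glacier.",
]
print(recall(memories, "where does the dragon sleep", k=1))
```

The recalled snippets are then prepended to the prompt, which is how these extensions keep long chats coherent past the model's context window.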
-
I finally got SillyTavern set up... now what? How do I set the scene? How do I build the world?
If you haven't already, I suggest you install SillyTavern Extras, which lets you add objectives with tasks, character expressions (personally I generate expressions with Stable Diffusion), and text-to-speech for different characters. You can also set up a group scenario if you create a group.
-
Looking for the long-term memory extension.
Instead, I use the summarize extension for SillyTavern, which is serviceable: https://github.com/SillyTavern/SillyTavern-extras
-
FINISHED IT!! my final tier list..
It doesn't have to be the end. Do what I'm doing and get SillyTavern, use their extension (for expressive sprites), open an OAI account, create some Umineko bots, import your own sprites, and start chatting with them. There's been a lot of advancements in AI visual novel tech. Here's me and the UI's creator in a 5-way group chat with the Quintessential Quintuplets.
- Local LLMs: After Novelty Wanes
-
New to SillyTavern, I have a few questions; sorry if they are silly. Pun intended.
The sillytavern-extras: https://github.com/SillyTavern/SillyTavern-extras The Classify extension handles expressions, provided you have the pictures. The model mentioned in the readme can handle 28 different expressions. It evaluates *while generating*, so it can change the picture of the current speaker, and it'll change as speakers change. I've not spent much time with group chat, so I never figured out how to do it the way it was meant to be done. BTW, of the extras, chromadb is cool; look into it.
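The sprite-swapping part boils down to mapping the classifier's emotion label onto an image file for the current speaker. A hypothetical sketch of that lookup (the folder layout, file names, and function are all made up for illustration, not the extension's actual code):

```python
from pathlib import Path

def sprite_for(character, emotion, root="characters", fallback="neutral"):
    """Pick the sprite file matching the classifier's emotion label.

    Assumed layout: one image per emotion, e.g. characters/Alice/joy.png.
    Falls back to a neutral sprite when no art exists for that emotion.
    """
    candidate = Path(root) / character / f"{emotion}.png"
    return candidate if candidate.exists() else Path(root) / character / f"{fallback}.png"

print(sprite_for("Alice", "joy"))
```

Because the classifier runs per message, re-running this lookup on every generated reply is enough to keep the displayed sprite in sync with whoever is speaking.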
long_term_memory
-
Looking for the long-term memory extension.
what you're probably thinking of is this: https://github.com/wawawario2/long_term_memory
-
Instruct Models not remembering previous responses?
I do believe there are some proposed solutions for retaining memories, such as LTM (Long Term Memory). I think there are some extensions for ooba's webui that implement that. See: https://github.com/wawawario2/long_term_memory
- Long-term memory (LTM) extension for oobabooga's Text Generation Web UI
-
I just made an easy GUI for changing start Parameters
If I understand correctly, one of the extensions you have enabled as an option for the GUI to select is the long_term_memory extension. That extension lets people store their conversations with the model in a long-term memory for the next chat. I didn't know if you had a method that interfaced with that, so that you could toggle the long-term memory read on or off. Here is where it explains how long_term_memory works: https://github.com/wawawario2/long_term_memory
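Conceptually, the extension persists chat turns between sessions so they can be recalled in a later chat. A simplified, self-contained sketch of that store-and-reload idea (the actual extension uses its own storage format; `ltm_store.json` and both functions are hypothetical names for illustration):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("ltm_store.json")  # hypothetical store location

def save_turn(speaker, text, path=MEMORY_FILE):
    """Append one chat turn so it can be recalled in a later session."""
    turns = json.loads(path.read_text()) if path.exists() else []
    turns.append({"speaker": speaker, "text": text})
    path.write_text(json.dumps(turns, indent=2))

def load_turns(path=MEMORY_FILE):
    """Reload all previously saved turns at the start of a new chat."""
    return json.loads(path.read_text()) if path.exists() else []

save_turn("user", "My cat is named Biscuit.")
print(load_turns()[-1]["text"])
```

A launcher GUI could expose the "read memory or not" toggle simply by choosing whether to call `load_turns()` when assembling the prompt for a new session.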
-
Is there a way to feed in documents similar to the Llama Index?
I saw [this](https://github.com/wawawario2/long_term_memory) as a possible option, but now that I know about SuperBooga I am now not sure which would be better for my purposes.
- Suggestions for long term memories
- CarperAI's StableVicuna 13B with RLHF training. Now available quantised in GGML and GPTQ.
-
Working on long term memories for the AI
I thought about Langchain, but it already does the functionality we have built in, which is just feeding the context back to the model. What I am aiming to do is inject relevant memories alongside the combined text and mix them with the prompt, so the responses are closely tuned to the prompt itself based on the context and memory. So when you prompt something and it has a memory, it will take the memory, the prompt, and the context of the conversation, and give a very finely tuned response. https://github.com/wawawario2/long_term_memory
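That memory + prompt + context mixing step can be sketched as a simple prompt-assembly function (the template below is illustrative only, not the project's actual prompt format):

```python
def build_prompt(memories, context, user_prompt, max_memories=3):
    """Inject retrieved memories ahead of the recent context and the new prompt."""
    memory_block = "\n".join(f"[Memory] {m}" for m in memories[:max_memories])
    return f"{memory_block}\n\n{context}\n\nUser: {user_prompt}\nAssistant:"

prompt = build_prompt(
    memories=["The user prefers answers in metric units."],
    context="User: How tall is Everest?\nAssistant: 8,849 metres.",
    user_prompt="And K2?",
)
print(prompt)
```

Capping `max_memories` matters in practice: every injected memory consumes context-window tokens that would otherwise hold recent conversation.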
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
Is this going to function at all similarly to https://github.com/wawawario2/long_term_memory ?
-
Advanced character documentation?
There is the long-term memory extension that might assist with this. I only just found out about it, though, and haven't gotten it to load successfully yet: https://github.com/wawawario2/long_term_memory
What are some alternatives?
SillyTavern - LLM Frontend for Power Users.
StartUI-oobabooga-webui - StartGUI is a Python graphical user interface (GUI) written with PyQt5 that allows users to configure settings and start the oobabooga web user interface (WebUI). It provides a convenient way to adjust various parameters and launch the WebUI with the desired settings.
stable-diffusion-webui - Stable Diffusion web UI
gpt-llama.cpp - A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.
lollms-webui - Lord of Large Language Models Web User Interface
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
simple-proxy-for-tavern
text-generation-webui-extensions
docker - Docker - the open-source application container engine
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
annoy_ltm - annoy long term memory experiment for oobabooga/text-generation-webui