BrainChulo
long_term_memory
| | BrainChulo | long_term_memory |
|---|---|---|
| Mentions | 10 | 12 |
| Stars | 140 | 300 |
| Growth | 0.7% | - |
| Activity | 9.0 | 9.3 |
| Latest commit | 7 months ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
BrainChulo
-
Alternative to LangChain for open LLMs?
On BrainChulo, we’re going 100% guidance mode; see for instance an implementation of Chain of Thought on top of a thin guidance wrapper: https://github.com/ChuloAI/BrainChulo/blob/main/app/guidance_tooling/guidance_agent/agent.py
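The core idea behind guidance-style prompting is a template with named "holes" that the model fills in sequentially, so the Chain of Thought structure is enforced by the template rather than left to the model. A minimal sketch of that pattern in plain Python — not the actual guidance API; the `{{gen name}}` hole syntax, `fill_template`, and the stand-in generator are all illustrative assumptions:

```python
# Sketch of guidance-style templated prompting: a prompt with named
# {{gen ...}} holes that are filled left to right by a generator call.

import re

TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step.\n"
    "Reasoning: {{gen reasoning}}\n"
    "Answer: {{gen answer}}"
)

def fill_template(template, variables, generate):
    """Substitute known variables, then fill each {{gen name}} hole in
    order by calling generate(prompt_so_far, name)."""
    prompt = template
    for name, value in variables.items():
        prompt = prompt.replace("{" + name + "}", value)
    while True:
        match = re.search(r"\{\{gen (\w+)\}\}", prompt)
        if match is None:
            return prompt
        prefix = prompt[:match.start()]
        completion = generate(prefix, match.group(1))
        prompt = prefix + completion + prompt[match.end():]

# Stand-in for a local LLM call (e.g. via llama.cpp); returns canned text.
def fake_llm(prefix, hole_name):
    return {"reasoning": "Two plus two makes four.",
            "answer": "4"}[hole_name]

result = fill_template(TEMPLATE, {"question": "What is 2 + 2?"}, fake_llm)
print(result)
```

Because each hole is filled against the prompt built so far, the "Answer" generation always sees the generated reasoning, which is the property that makes templated Chain of Thought more reliable than free-form generation.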
-
Running local LLM for info retrieval of technical documents
Awesome resource! If I may suggest one to add: some friends and I are working on an LLM data-retrieval project as well, with our differentiating marker being that we are implementing guidance to improve agent efficiency. If you guys wanna take a look :) https://github.com/ChuloAI/BrainChulo
- LlamaCPP and LangChain Agent Quality
- Training a 13B LLaMA on information from documents.
-
Chat with Documents using Open source LLMs
Plug: https://github.com/iGavroche/BrainChulo - BrainChulo currently works on top of Ooba but uses its own UI. Its first goal is to provide a production-level way to do Retrieval Augmentation on open-source LLMs via vector stores and good prompt engineering.
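The retrieval-augmentation loop mentioned above (vector store + prompt engineering) can be sketched in a few lines. This is a toy illustration, not BrainChulo's implementation: it uses a bag-of-words term-frequency vector and cosine similarity in place of a real embedding model and vector store, and all function names are assumptions:

```python
# Toy retrieval augmentation: rank document chunks by cosine similarity
# to the query, then splice the best matches into the prompt as context.

import math
import re
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    context = "\n".join(retrieve(query, chunks))
    return ("Use the context below to answer the question.\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

docs = [
    "BrainChulo stores document chunks in a vector store.",
    "Vicuna is a fine-tuned LLaMA chat model.",
    "The web UI runs on top of oobabooga.",
]
prompt = build_prompt("Where are document chunks stored?", docs)
print(prompt)
```

A real pipeline would swap `embed` for a sentence-embedding model and `retrieve` for a vector-store query, but the shape of the loop — embed, rank, splice into the prompt — stays the same.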
-
What features would everyone like to see in oog?
Regarding this, I've joined a project that is making some nice progress on this front. Still WIP, but we're getting there. Check out BrainChulo :)
-
7B model use with LangChain for chatbot importing of txt or PDFs
This is exactly what BrainChulo aims to do. You should check it out: https://github.com/CryptoRUSHGav/BrainChulo/ and feel free to drop on the discord to give us your feedback, your use-case, or if you need help getting started.
- [Local Llama] Adding long-term memory to custom LLMs: let's tame Vicuna together!
-
adding models to oobabooga
The download script is broken. I posted a working version on my repo: https://github.com/CryptoRUSHGav/BrainChulo
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
I'm hoping that many of you brilliant people can join me in our common quest to add long-term memory to our favorite camelid, Vicuna. The repository is called BrainChulo, and it's just waiting for your contributions.
long_term_memory
-
Looking for the long-term memory extension.
what you're probably thinking of is this: https://github.com/wawawario2/long_term_memory
-
Instruct Models not remembering previous responses?
I do believe there are some proposed solutions for retaining memories, such as LTM (Long-Term Memory). I think there are some extensions for ooba's webui that implement that. See: https://github.com/wawawario2/long_term_memory
- Long-term memory (LTM) extension for oobabooga's Text Generation Web UI
-
I just made an easy GUI for changing start Parameters
If I understand correctly, one of the extensions you have enabled as an option for the GUI to select is the long_term_memory extension. That extension lets people store their conversations with the model in a long-term memory for use in the next chat. I didn't know if you had a method that interfaced with it so that you could toggle the long-term memory read. Here is where it explains how long_term_memory works: https://github.com/wawawario2/long_term_memory
-
Is there a way to feed in documents similar to the Llama Index?
I saw [this](https://github.com/wawawario2/long_term_memory) as a possible option, but now that I know about SuperBooga I am now not sure which would be better for my purposes.
- Suggestions for long term memories
- CarperAI's StableVicuna 13B with RLHF training. Now available quantised in GGML and GPTQ.
-
Working on long term memories for the AI
I thought about LangChain, but it already does the functionality we have built in, which is just feeding the context back to the model. What I am aiming to do is inject relevant memories into the combined text and mix them with the prompt, so the responses will be finely tuned to the prompt itself based on the context and memory. When you prompt something and it has a memory, it will take the memory, the prompt, and the context of the conversation and give a very fine-tuned response. https://github.com/wawawario2/long_term_memory
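The memory-injection flow described in that comment — recall relevant past exchanges, then prepend them to the current context and prompt — can be sketched concretely. This is a toy using keyword overlap in place of the embedding search a real extension would use; `MemoryStore`, its methods, and the prompt layout are all hypothetical names:

```python
# Sketch of long-term-memory injection: keep past conversation snippets,
# recall the most relevant ones for a new prompt, and prepend them.

import re

def words(text):
    return set(re.findall(r"\w+", text.lower()))

class MemoryStore:
    def __init__(self):
        self.memories = []  # past conversation snippets

    def add(self, snippet):
        self.memories.append(snippet)

    def recall(self, prompt, k=1):
        """Return up to k stored snippets sharing the most words with
        the prompt; snippets with no overlap are never returned."""
        scored = [(len(words(prompt) & words(m)), m) for m in self.memories]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [m for score, m in scored[:k] if score > 0]

def build_prompt(store, context, prompt):
    memory_block = "\n".join(f"[Memory] {m}" for m in store.recall(prompt))
    return f"{memory_block}\n{context}\nUser: {prompt}\nAssistant:"

store = MemoryStore()
store.add("User said their dog is named Biscuit.")
store.add("User prefers short answers.")
final = build_prompt(store, "Recent chat history goes here.",
                     "What is my dog called?")
print(final)
```

The key design point the comment makes is that the final prompt mixes three things — recalled memory, recent context, and the new user message — so the model's answer is conditioned on all of them at once rather than on the sliding context window alone.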
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
Is this going to function at all similarly to https://github.com/wawawario2/long_term_memory ?
-
Advanced character documentation?
There is the long-term memory extension that might assist with this. I only found out about it recently, though, and haven't gotten it to load successfully yet: https://github.com/wawawario2/long_term_memory
What are some alternatives?
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
StartUI-oobabooga-webui - StartUI is a Python graphical user interface (GUI) written with PyQt5 that allows users to configure settings and start the oobabooga web user interface (WebUI). It provides a convenient way to adjust various parameters and launch the WebUI with the desired settings.
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
SillyTavern-Extras - Extensions API for SillyTavern.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
gpt-llama.cpp - A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.
outlines - Structured Text Generation
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
text-generation-webui-extensions
ChatALL - Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, 讯飞星火, 文心一言 and more, discover the best answers
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.