BrainChulo vs gpt-llama.cpp

| | BrainChulo | gpt-llama.cpp |
|---|---|---|
| Mentions | 10 | 12 |
| Stars | 140 | 587 |
| Growth | 0.7% | - |
| Activity | 9.0 | 8.2 |
| Last commit | 7 months ago | 11 months ago |
| Language | Python | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
BrainChulo
-
Alternative to LangChain for open LLMs?
On BrainChulo, we’re going 100% guidance mode; see, for instance, an implementation of Chain of Thought on top of a thin guidance wrapper: https://github.com/ChuloAI/BrainChulo/blob/main/app/guidance_tooling/guidance_agent/agent.py
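The linked agent builds Chain of Thought on top of a guidance wrapper. As a rough illustration of the idea only (not BrainChulo's actual code; the template text and function name here are made-up assumptions), a CoT prompt can be assembled like this:

```python
# Hypothetical sketch of a chain-of-thought prompt template, in the
# spirit of a thin wrapper over a guidance-style library.
COT_TEMPLATE = (
    "Question: {question}\n"
    "Thought: let's reason step by step.\n"
    "{steps}"
    "Answer:"
)

def chain_of_thought_prompt(question, steps=()):
    """Render the question plus any intermediate reasoning steps."""
    rendered = "".join(f"Step {i + 1}: {s}\n" for i, s in enumerate(steps))
    return COT_TEMPLATE.format(question=question, steps=rendered)
```

A real guidance program would interleave generation slots with a scaffold like this, letting the model fill in each step instead of receiving them pre-written.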
-
Running local LLM for info retrieval of technical documents
Awesome resource! If I may suggest one to add: some friends and I are working on an LLM data-retrieval project as well, with our differentiating marker being that we are implementing guidance to improve agent efficiency. If you want to take a look :) https://github.com/ChuloAI/BrainChulo
- LlamaCPP and LangChain Agent Quality
- Training a 13B LLaMA on information from documents.
-
Chat with Documents using Open source LLMs
Plug: https://github.com/iGavroche/BrainChulo - BrainChulo currently works on top of Ooba but uses its own UI. Its first goal is to provide a production-grade way to do Retrieval Augmentation on open-source LLMs via vector stores and good prompt engineering.
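The flow described here (embed document chunks into a vector store, fetch the nearest ones for a query, and pack them into the prompt) can be sketched in plain Python. This is a toy cosine-similarity search; the class and function names are illustrative, not BrainChulo's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory store of (embedding, text) pairs."""
    def __init__(self):
        self.items = []

    def add(self, embedding, text):
        self.items.append((embedding, text))

    def top_k(self, query_embedding, k=3):
        """Return the k chunk texts most similar to the query."""
        ranked = sorted(self.items,
                        key=lambda it: cosine(query_embedding, it[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(context_chunks, question):
    """Pack retrieved chunks into a retrieval-augmented prompt."""
    context = "\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a real system the embeddings would come from a model rather than being hand-written, and the store would be a proper vector database, but the retrieve-then-prompt shape is the same.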
-
What features would everyone like to see in ooba?
Regarding this, I've joined a project that is making nice progress on this front. It's still a WIP, but we're getting there; check out BrainChulo :)
-
Using 7B models with LangChain for chatbot importing of txt or PDFs
This is exactly what BrainChulo aims to do. You should check it out: https://github.com/CryptoRUSHGav/BrainChulo/ and feel free to drop on the discord to give us your feedback, your use-case, or if you need help getting started.
- [Local Llama] Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
-
adding models to oobabooga
The download script is broken. I posted a working version on my repo: https://github.com/CryptoRUSHGav/BrainChulo
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
I'm hoping that many of you brilliant people can join me in our common quest to add long-term memory to our favorite camelid, Vicuna. The repository is called BrainChulo, and it's just waiting for your contributions.
gpt-llama.cpp
-
Attempt to run Llama on a remote server with chatbot-ui
Hi! I really like the solution https://github.com/keldenl/gpt-llama.cpp, which helps deploy https://github.com/mckaywrigley/chatbot-ui on a local model. I am running this together with Wizard 7B or 13B locally and it works fine, but when I tried to move it to a remote server I hit an error.
-
Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
Sounds like you’re asking for exactly this? https://github.com/keldenl/gpt-llama.cpp
- LLaMA and AutoAPI?
-
New big update to GPTNicheFinder: better trends analysis and scoring system, cleaned up UI and verbose in the terminal for people who want to see what is going on and to verify the results
I salute you, good sir. This is an amazing idea. I don't have time, but it would be interesting to use this wrapper, https://github.com/keldenl/gpt-llama.cpp, which simulates a GPT endpoint for a local LLaMA; that way we could have an amazing tool that is completely free to use. If somebody tests it, please let me know underneath my comment!
-
I built an AI-powered writing tool, an AI co-author
I would gladly buy your product to run with a local model, like a Vicuna GGML; also see https://github.com/keldenl/gpt-llama.cpp/
-
Serge... Just works
Possible through fastllama in Python, or gpt-llama.cpp, an API wrapper around llama.cpp.
-
Embeddings?
https://github.com/keldenl/gpt-llama.cpp supports embeddings, and it even takes in OpenAI-style requests and returns OpenAI-compatible responses!
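Because the server mimics OpenAI's schema, a client only needs to build and parse OpenAI-style JSON. A minimal sketch, assuming an OpenAI-compatible /v1/embeddings route; the URL and model name below are placeholder assumptions, not values from gpt-llama.cpp's docs:

```python
import json

# Assumed local address of an OpenAI-compatible embeddings endpoint.
EMBEDDINGS_URL = "http://localhost:8000/v1/embeddings"

def build_embeddings_payload(texts, model="llama"):
    """Build an OpenAI-style embeddings request body."""
    return json.dumps({"model": model, "input": texts})

def parse_embeddings_response(body):
    """Pull the vectors out of an OpenAI-style embeddings response."""
    data = json.loads(body)
    return [item["embedding"] for item in data["data"]]
```

You would POST the payload to the server with any HTTP client and hand the response body to the parser; equivalently, an existing OpenAI client pointed at the local base URL should work unchanged.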
-
I built a completely Local AutoGPT with the help of GPT-llama running Vicuna-13B
https://github.com/keldenl/gpt-llama.cpp
- I built a completely local and portable AutoGPT with the help of gpt-llama, running on Vicuna-13B
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
There's a (kind of) working Auto-GPT solution that uses Vicuna https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/Auto-GPT-setup-guide.md
What are some alternatives?
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
llama_index - LlamaIndex is a data framework for your LLM applications
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
Auto-LLM-Local - Created my own Python script, similar to AutoGPT, where you supply a local LLM model like Alpaca 13B (the main one I use), and the script can access the supplied tools to achieve your objective. The code fully works as far as I can tell; it takes me 5 minutes per chain on my slow laptop.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
long_term_memory - A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
outlines - Structured Text Generation
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
ChatALL - Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, 讯飞星火, 文心一言 and more, discover the best answers
langchain - 🦜🔗 Build context-aware reasoning applications