localGPT vs BrainChulo

| | localGPT | BrainChulo |
|---|---|---|
| Mentions | 29 | 10 |
| Stars | 19,193 | 140 |
| Star growth | - | 0.7% |
| Activity | 8.6 | 9.0 |
| Last commit | 2 days ago | 7 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
localGPT
- Show HN: IncarnaMind-Chat with your multiple docs using LLMs
I think local LLMs are great for tinkerers, and with quantization they can run on most modern PCs. I'm not comfortable sending my personal data over to OpenAI/Anthropic, so I've been playing around with https://github.com/PromtEngineer/localGPT/, GPT4All, etc., which keep the data all local.
Sliding window chunking, RAG, etc. seem more sophisticated than the other document LLM tools, so I would love to try this out if you ever add the ability to run LLMs locally!
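The sliding-window chunking mentioned above is easy to sketch: split a document into overlapping chunks so that context straddling a chunk boundary isn't lost. This is a generic illustration, not any particular project's implementation, and the chunk size and overlap values are arbitrary:

```python
def sliding_window_chunks(text, chunk_size=500, overlap=100):
    """Split `text` into character-based chunks of `chunk_size`,
    each overlapping the previous one by `overlap` characters."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

doc = "a" * 1200
chunks = sliding_window_chunks(doc, chunk_size=500, overlap=100)
print(len(chunks))  # -> 3
```

Real RAG pipelines usually chunk on token or sentence boundaries rather than raw characters, but the overlap idea is the same.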
- FLaNK Stack Weekly for 21 August 2023
- PromtEngineer/localGPT: Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents?
localGPT can parse PDF into embeddings, see <https://github.com/PromtEngineer/localGPT>.
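The "parse documents into embeddings" flow boils down to: extract text, chunk it, embed each chunk as a vector, and retrieve chunks by similarity to the query. The sketch below illustrates the retrieval step; a real setup would use a PDF parser and an embedding model, so the toy bag-of-words vectorizer here is only a stand-in for illustration:

```python
import math
from collections import Counter

def embed(text, vocab):
    """Toy embedding: normalized word-count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    vec = [counts[w] for w in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, chunks, vocab):
    """Return the chunk whose embedding has the highest dot product
    (cosine similarity, since vectors are normalized) with the query."""
    qv = embed(query, vocab)
    scores = [(sum(a * b for a, b in zip(qv, embed(c, vocab))), c)
              for c in chunks]
    return max(scores)[1]

chunks = ["the cat sat on the mat", "llamas run models locally"]
vocab = sorted({w for c in chunks for w in c.lower().split()})
print(retrieve("run llamas locally", chunks, vocab))  # -> llamas run models locally
```

The retrieved chunk is then pasted into the LLM prompt as context, which is the "retrieval-augmented" part of RAG.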
- Which platform or model to use for fine-tuning PDF files?
This is going so fast that it feels like a new thing pops up every day. LocalGPT seems to have gotten a lot of traction though: https://github.com/PromtEngineer/localGPT
- Any successful guides on scanning internal pages and build a virtual assistant using LLAMA?
- CUDA out of memory with Nvidia A2, need help
I am currently trying to use localGPT (https://github.com/PromtEngineer/localGPT) for a project and I have encountered a problem.
- Using Local LLMs for things besides chat?
I tinker a lot with electronics. I have put datasheets for components, documentation for development boards, documentation for software libraries, etc into a database with localGPT.
- Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance like tree-of-thought-llm, connecting a LLM to a database or have it review its own output.
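The "have it review its own output" idea mentioned here can be sketched as a simple critique-and-revise loop. `call_llm` below is a hypothetical stand-in for whatever local model you run; the loop structure, not the model call, is the point:

```python
def call_llm(prompt):
    # Placeholder: a real setup would invoke a local model here
    # (llama.cpp, a transformers pipeline, an Ooba API call, etc.).
    return f"[model output for: {prompt[:40]}...]"

def answer_with_self_review(question, rounds=1):
    """Generate a draft answer, then repeatedly critique and revise it."""
    draft = call_llm(f"Answer the question: {question}")
    for _ in range(rounds):
        critique = call_llm(f"List flaws in this answer: {draft}")
        draft = call_llm(
            f"Question: {question}\nDraft: {draft}\n"
            f"Critique: {critique}\nWrite an improved answer:"
        )
    return draft

print(answer_with_self_review("Why does quantization shrink models?"))
```

Tree-of-thought methods generalize this by branching over several candidate reasoning paths and scoring them, rather than revising a single draft.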
- Tools for ingesting .pdf files locally for training/fine-tuning?
Check out localGPT on GitHub. I tried it but it had slow responses for me; other developers are fine with it. https://github.com/PromtEngineer/localGPT
BrainChulo
- Alternative to LangChain for open LLMs?
On BrainChulo, we're going 100% guidance mode; see for instance an implementation of Chain of Thought on top of a thin guidance wrapper: https://github.com/ChuloAI/BrainChulo/blob/main/app/guidance_tooling/guidance_agent/agent.py
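Without reproducing the guidance library's templating API, the chain-of-thought idea such an agent builds on can be sketched as a plain prompt template that forces intermediate reasoning steps before the final answer. This is a minimal illustration, not BrainChulo's actual agent code:

```python
# Template that elicits step-by-step reasoning before the answer slot.
COT_TEMPLATE = """Question: {question}
Let's think step by step.
{steps}
Therefore, the answer is:"""

def build_cot_prompt(question, steps):
    """Render a chain-of-thought prompt with numbered reasoning steps."""
    rendered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
    return COT_TEMPLATE.format(question=question, steps=rendered)

prompt = build_cot_prompt(
    "What is 12 * 7?",
    ["12 * 7 = 12 * 5 + 12 * 2", "60 + 24 = 84"],
)
print(prompt)
```

Libraries like guidance go further by interleaving constrained generation into such a template, so the model fills each step slot under grammar or regex constraints instead of free-running.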
- Running local LLM for info retrieval of technical documents
Awesome resource! If I may suggest an addition: some friends and I are working on an LLM data-retrieval project as well, with our differentiating marker being that we are implementing guidance in order to improve agent efficiency. If you guys wanna take a look :) https://github.com/ChuloAI/BrainChulo
- LlamaCPP and LangChain Agent Quality
- Training a 13B LLaMA on information from documents.
- Chat with Documents using Open source LLMs
Plug: https://github.com/iGavroche/BrainChulo - BrainChulo currently works on top of Ooba but uses its own UI. Its first goal is to provide a production-level way to do Retrieval Augmentation on open-source LLMs via vector stores and good prompt engineering.
- What features would everyone like to see in oog?
Regarding this, I've joined a project that is making nice progress on this front. Still a WIP, but we're getting there; check out BrainChulo :)
- 7B models used with LangChain for a chatbot importing txt or PDFs
This is exactly what BrainChulo aims to do. You should check it out: https://github.com/CryptoRUSHGav/BrainChulo/ and feel free to drop by the Discord to give us your feedback or your use case, or if you need help getting started.
- [Local Llama] Adding long-term memory to custom LLMs: let's tame Vicuna together!
- Adding models to oobabooga
The download script is broken. I posted a working version on my repo: https://github.com/CryptoRUSHGav/BrainChulo
- Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
I'm hoping that many of you brilliant people can join me in our common quest to add long-term memory to our favorite camelid, Vicuna. The repository is called BrainChulo, and it's just waiting for your contributions.
What are some alternatives?
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
privateGPT - Interact with your documents using the power of GPT, 100% privately, no data leaks [Moved to: https://github.com/zylon-ai/private-gpt]
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and has voice-cloning capabilities.
outlines - Structured Text Generation
llama_index - LlamaIndex is a data framework for your LLM applications
long_term_memory - A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
quivr - Your GenAI second brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) and apps using LangChain with GPT-3.5/4 Turbo, Private, Anthropic, VertexAI, Ollama, and Groq LLMs, shareable with other users. A local and private alternative to OpenAI GPTs and ChatGPT, powered by retrieval-augmented generation.
gpt-llama.cpp - A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.