localGPT vs private-gpt

| | localGPT | private-gpt |
|---|---|---|
| Mentions | 29 | 131 |
| Stars | 19,193 | 51,882 |
| Growth | - | 2.6% |
| Activity | 8.6 | 9.2 |
| Latest commit | 2 days ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
localGPT
- Show HN: IncarnaMind-Chat with your multiple docs using LLMs
I think local LLMs are great for tinkerers, and with quantization they can run on most modern PCs. I am not comfortable sending my personal data over to OpenAI/Anthropic, so I've been playing around with https://github.com/PromtEngineer/localGPT/, GPT4All, etc., which keep the data all local.
Sliding window chunking, RAG, etc. seem more sophisticated than the other document LLM tools, so I would love to try this out if you ever add the ability to run LLMs locally!
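Sliding-window chunking itself is easy to sketch. The snippet below is a minimal illustration only, not IncarnaMind's or localGPT's actual implementation; the window and overlap sizes are arbitrary choices made up for the example:

```python
def sliding_window_chunks(text, window=200, overlap=50):
    """Split text into overlapping chunks of `window` characters.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from at least one chunk.
    """
    if overlap >= window:
        raise ValueError("overlap must be smaller than window")
    step = window - overlap
    return [text[i:i + window]
            for i in range(0, max(len(text) - overlap, 1), step)]

# Each chunk shares `overlap` characters with its neighbor:
chunks = sliding_window_chunks("abcdefghij", window=4, overlap=2)
```

Real tools typically split on token or sentence boundaries rather than raw characters, but the overlap idea is the same.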
- FLaNK Stack Weekly for 21 August 2023
- PromtEngineer/localGPT: Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents?
localGPT can parse PDF into embeddings, see <https://github.com/PromtEngineer/localGPT>.
- Which platform or model to use for fine-tuning PDF files?
This is going so fast that it feels like a new thing pops up every day. LocalGPT seems to have gotten a lot of traction though: https://github.com/PromtEngineer/localGPT
- Any successful guides on scanning internal pages and building a virtual assistant using LLAMA?
- CUDA Out of memory with Nvidia A2 need help
I am currently trying to use localGPT (https://github.com/PromtEngineer/localGPT) for a project and I encountered a problem.
- Using Local LLMs for things besides chat?
I tinker a lot with electronics. I have put datasheets for components, documentation for development boards, documentation for software libraries, etc into a database with localGPT.
- Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance, like tree-of-thought-llm, connecting an LLM to a database, or having it review its own output.
- Tools for ingesting .pdf files locally for training/fine-tuning?
Check out localGPT on GitHub. I tried it, but it had slow responses for me; others seem to do fine with it. https://github.com/PromtEngineer/localGPT
private-gpt
- Ask HN: Has Anyone Trained a personal LLM using their personal notes?
PrivateGPT is a nice tool for this. It's not exactly what you're asking for, but it gets part of the way there.
https://github.com/zylon-ai/private-gpt
- PrivateGPT exploring the Documentation
Further details available at: https://docs.privategpt.dev/api-reference/api-reference/ingestion
- Show HN: I made an app to use local AI as daily driver
- privateGPT VS quivr - a user suggested alternative (2 projects | 12 Jan 2024)
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Run https://github.com/imartinez/privateGPT
Then
make ingest /path/to/folder/with/files
Then chat to the LLM.
Done.
Docs: https://docs.privategpt.dev/overview/welcome/quickstart
- Mozilla "MemoryCache" Local AI
PrivateGPT repository in case anyone's interested: https://github.com/imartinez/privateGPT . It doesn't seem to be linked from their official website.
- What Is Retrieval-Augmented Generation a.k.a. RAG
I’m preparing a small internal tool for my work to search documents and provide answers (with references), I’m thinking of using GPT4All [0], Danswer [1] and/or privateGPT [2].
The RAG technique is very close to what I have in mind, but I don’t want the LLM to “hallucinate” and generate answers on its own by synthesizing the source documents. As stated by many others, we’re living in interesting times.
[0] https://gpt4all.io/index.html
[1] https://www.danswer.ai/
[2] https://github.com/imartinez/privateGPT
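For readers new to RAG, the retrieval half can be sketched without any LLM at all. The toy example below uses bag-of-words cosine similarity as a crude stand-in for a real embedding model, and the document names and texts are invented for illustration; the top-scoring passage, together with its source name, is what would be placed in the prompt so the model answers from the documents and can cite references:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k (source_name, passage) pairs most similar to the query."""
    q = embed(query)
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return ranked[:k]

docs = {
    "handbook.pdf": "Vacation requests must be filed two weeks in advance.",
    "it-policy.pdf": "Passwords must be rotated every ninety days.",
}
hits = retrieve("How often must passwords be rotated?", docs)
# hits[0] is the best-matching (source, passage) pair; in a real RAG
# pipeline it is inserted into the LLM prompt along with an instruction
# to answer only from the provided passages and cite the source.
```

Constraining the model to the retrieved passages is also the standard way to reduce the free-form "hallucination" the comment above worries about, though it does not eliminate it entirely.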
- LM Studio – Discover, download, and run local LLMs
- Ask HN: Local LLM Recommendation?
https://www.reddit.com/r/LocalLLaMA/comments/14niv66/using_a...
https://github.com/imartinez/privateGPT
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
I've been playing around with https://github.com/imartinez/privateGPT and https://github.com/simonw/llm and wanted to create a simple Python package that made it easier to run ChatGPT-like LLMs on your own machine, use them with non-public data, and integrate them into practical applications.
This resulted in a Python package I call OnPrem.LLM.
In the documentation, there are examples for how to use it for information extraction, text generation, retrieval-augmented generation (i.e., chatting with documents on your computer), and text-to-code generation: https://amaiya.github.io/onprem/
Enjoy!
What are some alternatives?
privateGPT - Interact with your documents using the power of GPT, 100% privately, no data leaks [Moved to: https://github.com/zylon-ai/private-gpt]
gpt4all - gpt4all: run open-source LLMs anywhere
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers and many other model architectures, and can generate text, audio, video and images, with voice-cloning capabilities.
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama_index - LlamaIndex is a data framework for your LLM applications
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
quivr - Your GenAI Second Brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) and apps using Langchain, GPT 3.5/4 turbo, Private, Anthropic, VertexAI, Ollama, Groq and other LLMs, and share it with users! A local and private alternative to OpenAI GPTs and ChatGPT, powered by retrieval-augmented generation.
llama.cpp - LLM inference in C/C++