| | localGPT | llama_index |
|---|---|---|
| Mentions | 29 | 75 |
| Stars | 19,193 | 31,184 |
| Growth | - | 4.7% |
| Activity | 8.6 | 10.0 |
| Latest commit | 2 days ago | 5 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
localGPT
- Show HN: IncarnaMind-Chat with your multiple docs using LLMs
I think local LLMs are great for tinkerers, and with quantization can run on most modern PCs. I am not comfortable sending my personal data over to OpenAI/Anthropic, so I've been playing around with https://github.com/PromtEngineer/localGPT/, GPT4All, etc., which keep the data all local.
Sliding window chunking, RAG, etc. seem more sophisticated than the other document LLM tools, so I would love to try this out if you ever add the ability to run LLMs locally!
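The "sliding window chunking" mentioned in the comment above is straightforward to illustrate: chunks overlap so that context spanning a chunk boundary is not lost. A minimal sketch (illustrative only, not localGPT's actual implementation; the function name and parameters are assumptions):

```python
def sliding_window_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of at most chunk_size characters.

    Each chunk starts `chunk_size - overlap` characters after the previous
    one, so the last `overlap` characters of a chunk reappear at the start
    of the next chunk.
    """
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

In a real RAG pipeline each chunk would then be embedded and stored in a vector database; the overlap means a sentence cut at a boundary is still fully present in at least one chunk.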
- FLaNK Stack Weekly for 21 August 2023
- PromtEngineer/localGPT: Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents?
localGPT can parse PDF into embeddings, see <https://github.com/PromtEngineer/localGPT>.
- Which platform or model to use for fine tuning pdf files?
This is going so fast that it feels like a new thing pops up every day. LocalGPT seems to have gotten a lot of traction though: https://github.com/PromtEngineer/localGPT
- Any successful guides on scanning internal pages and build a virtual assistant using LLAMA?
- CUDA Out of memory with Nvidia A2 need help
I am currently trying to use localGPT (https://github.com/PromtEngineer/localGPT) for a project and I encountered a problem.
- Using Local LLMs for things besides chat?
I tinker a lot with electronics. I have put datasheets for components, documentation for development boards, documentation for software libraries, etc into a database with localGPT.
- Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance, like tree-of-thought-llm, connecting an LLM to a database, or having it review its own output.
- Tools for ingesting .pdf files locally for training/fine-tuning?
Check out localGPT on GitHub. I tried it, but it had slow responses for me. Other developers are fine with it. https://github.com/PromtEngineer/localGPT
llama_index
- LlamaIndex: A data framework for your LLM applications
- FLaNK AI - 01 April 2024
- Show HN: Ragdoll Studio (fka Arthas.AI) is the FOSS alternative to character.ai
For anyone curious about LlamaIndex's "prompt mixins": they're actually dead simple: https://github.com/run-llama/llama_index/blob/8a8324008764a7... - and maybe no longer supported.
I basically reinvented this wheel in ragdoll but made it more dynamic: https://github.com/bennyschmidt/ragdoll/blob/master/src/util...
- LlamaIndex is a data framework for your LLM applications
- How to verify that a snippet of Python code doesn't access protected members
- 🆓 Local & Open Source AI: a kind ollama & LlamaIndex intro
Being able to plug in third-party frameworks (LangChain, LlamaIndex) so you can build complex projects
- I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
Mistral Instruct does use a system prompt.
You can see the raw format here: https://www.promptingguide.ai/models/mistral-7b#chat-templat... and you can see how LlamaIndex uses it here (as an example): https://github.com/run-llama/llama_index/blob/1d861a9440cdc9...
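For context, Mistral-7B-Instruct's raw chat template wraps user turns in `[INST] ... [/INST]` tags; since the model has no dedicated system role, a common convention is to prepend the system text to the first user turn. A minimal sketch of that convention (the helper name is mine, not LlamaIndex's API):

```python
def build_mistral_prompt(system: str, user: str) -> str:
    # Mistral-7B-Instruct defines no separate system role, so the system
    # text is folded into the first user message inside the [INST] tags.
    # <s> is the beginning-of-sequence token in the raw template.
    return f"<s>[INST] {system}\n\n{user} [/INST]"
```

The model's reply then follows the closing `[/INST]` tag; multi-turn conversations repeat the `[INST] ... [/INST]` pattern for each user message.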
- Top 5 Vector Database Videos of 2023 🎥
Learn how to use Milvus as persistent vector storage with LlamaIndex in under 5 minutes.
- What's going on in the Zilliz Universe? December 2023
▶️ Read Blog 📷 Watch Demo 🦙 Notebook using Pipelines inside LlamaIndex
- First 15 Open Source Advent projects
15. LlamaIndex | GitHub | tutorial
What are some alternatives?
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
langchain - 🦜🔗 Build context-aware reasoning applications
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
chatgpt-retrieval-plugin - The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
quivr - Your GenAI Second Brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) & apps using LangChain, GPT 3.5 / 4 turbo, Private, Anthropic, VertexAI, Ollama, Groq, and other LLMs, which you can share with users! A local and private alternative to OpenAI GPTs & ChatGPT, powered by retrieval-augmented generation.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
h2ogpt - Private chat with a local GPT with documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
gpt-llama.cpp - A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.