| | LocalAI | localGPT |
|---|---|---|
| Mentions | 83 | 29 |
| Stars | 19,862 | 19,193 |
| Growth | 8.3% | - |
| Activity | 9.9 | 8.6 |
| Latest commit | 4 days ago | 2 days ago |
| Language | C++ | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LocalAI
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
- What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
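Since LocalAI exposes an OpenAI-compatible REST API, swapping it in usually just means pointing your HTTP calls at the local server. A minimal stdlib-only sketch (the default port 8080 and the `tinyllama` model name are assumptions; use whatever model you have configured):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # LocalAI's default port, adjust as needed


def build_chat_request(model, messages, base_url=BASE_URL):
    """Build an OpenAI-style chat completion request for a LocalAI server."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request(
    "tinyllama",
    [{"role": "user", "content": "Say hello in one word."}],
)
# Sending it requires a running LocalAI instance:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the request shape matches OpenAI's, existing OpenAI client code generally only needs its base URL changed.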
- "Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
- Local LLMs to run on old iMac / hardware
Your hardware should be fine for inference, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inference process to 2 or 3 threads. That should get it working; I've been able to run inference with even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
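The thread-limiting advice above would typically go in LocalAI's per-model YAML definition. A minimal sketch (keys follow LocalAI's model-config format as I understand it; the model filename is hypothetical):

```yaml
# models/tinyllama.yaml -- hypothetical model definition
name: tinyllama
parameters:
  model: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
threads: 2   # cap CPU threads so the rest of the system stays responsive
```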
localGPT
- Show HN: IncarnaMind-Chat with your multiple docs using LLMs
I think local LLMs are great for tinkerers, and with quantization they can run on most modern PCs. I am not comfortable sending my personal data over to OpenAI/Anthropic, so I've been playing around with https://github.com/PromtEngineer/localGPT/, GPT4All, etc., which keep all the data local.
Sliding-window chunking, RAG, etc. seem more sophisticated here than in the other document-LLM tools, so I would love to try this out if you ever add the ability to run LLMs locally!
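The sliding-window chunking mentioned above can be sketched in a few lines. This is a word-based toy version under my own assumptions; real implementations typically split on tokens and respect sentence boundaries:

```python
def sliding_window_chunks(text, window=200, overlap=50):
    """Split text into overlapping chunks of roughly `window` words.

    The overlap means context that straddles a chunk boundary is
    retrievable from both neighbouring chunks.
    """
    words = text.split()
    step = window - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + window]))
        if start + window >= len(words):
            break  # last window already reached the end of the text
    return chunks


# Toy document of 500 "words" -> three overlapping chunks
doc = " ".join(f"word{i}" for i in range(500))
chunks = sliding_window_chunks(doc, window=200, overlap=50)
```

Each chunk would then be embedded and stored; at query time the most similar chunks are retrieved and passed to the LLM as context (the RAG part).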
- FLaNK Stack Weekly for 21 August 2023
- PromtEngineer/localGPT: Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents?
localGPT can parse PDF into embeddings, see <https://github.com/PromtEngineer/localGPT>.
- Which platform or model to use for fine-tuning PDF files?
This is going so fast that it feels like a new thing pops up every day. LocalGPT seems to have gotten a lot of traction though: https://github.com/PromtEngineer/localGPT
- Any successful guides on scanning internal pages and build a virtual assistant using LLAMA?
- CUDA out of memory with Nvidia A2, need help
I am currently trying to use localGPT (https://github.com/PromtEngineer/localGPT) for a project and I have encountered a problem.
- Using Local LLMs for things besides chat?
I tinker a lot with electronics. I have put datasheets for components, documentation for development boards, documentation for software libraries, etc into a database with localGPT.
- Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance, like tree-of-thought-llm, connecting an LLM to a database, or having it review its own output.
- Tools for ingesting .pdf files locally for training/fine-tuning?
Check out localGPT on GitHub. I tried it, but responses were slow for me; other developers seem fine with it. https://github.com/PromtEngineer/localGPT
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
privateGPT - Interact with your documents using the power of GPT, 100% privately, no data leaks [Moved to: https://github.com/zylon-ai/private-gpt]
llama-cpp-python - Python bindings for llama.cpp
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
llama_index - LlamaIndex is a data framework for your LLM applications
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
quivr - Your GenAI second brain 🧠: a personal productivity assistant (RAG) ⚡️🤖. Chat with your docs (PDF, CSV, ...) and apps using LangChain, GPT-3.5/4 Turbo, Private, Anthropic, VertexAI, Ollama, Groq, and other LLMs, and share it with users. A local and private alternative to OpenAI GPTs and ChatGPT, powered by retrieval-augmented generation.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/