gpt4all vs LocalAI
| | gpt4all | LocalAI |
|---|---|---|
| Mentions | 139 | 82 |
| Stars | 64,046 | 19,593 |
| Stars growth | 3.6% | 12.9% |
| Activity | 9.8 | 9.9 |
| Latest commit | 4 days ago | 3 days ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt4all
- Show HN: I made an app to use local AI as daily driver
- Ollama Python and JavaScript Libraries
I don’t know if Ollama can do this but https://gpt4all.io/ can.
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Gpt4all is a local desktop app with a Python API that can be trained on your documents: https://gpt4all.io/
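For a sense of what that Python API looks like, here is a minimal sketch using the gpt4all bindings (the model filename is only an example; the bindings download it on first use):

```python
from gpt4all import GPT4All

# Example model name; gpt4all downloads it on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    reply = model.generate("Summarize my notes on local LLMs.", max_tokens=200)
    print(reply)
```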
- WyGPT: Minimal mature GPT model in C++
The readme page is cryptic. What does 'mature' mean in this context? What is the sample text a continuation of?
Having a GIF of the thing in use would be great, similar to the gpt4all readme page. (https://github.com/nomic-ai/gpt4all)
- LibreChat
Check https://github.com/nomic-ai/gpt4all instead.
- OpenAI Negotiations to Reinstate Altman Hit Snag over Board Role
"I ran performance tests on two systems, here's the results of system 1, and heres the results of system 2. Summarize the results, and build a markdown table containing x,y,z rows."
"extract the reusable functions out of this bash script"
"write me a cfssl command to generate a intermediate CA"
"What is the regex for _____"
"Here are my accomplishments over the last 6 months, summarize them into a 1 page performance report."
etc etc etc
If you're not using GPT4 or some LLM as part of your daily flow, you're working too hard.
Get GPT4All (https://gpt4all.io), log into OpenAI, drop $20 on your account, get an API key, and start using GPT4.
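For anyone who prefers scripting that API-key route directly instead of going through the GPT4All desktop app, a minimal sketch with the official openai Python client (v1.x style; assumes OPENAI_API_KEY is set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Extract the reusable functions out of this bash script: ..."}],
)
print(response.choices[0].message.content)
```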
- ChatGPT banned at work: is this normal?
An offline version, though not as advanced - https://github.com/nomic-ai/gpt4all ; https://gpt4all.io/index.html
- GPT4All: An ecosystem of open-source on-edge large language models - by Nomic AI
- Why use OpenAI's ChatGPT 3.5 online service if you can instead host your own local llama?
Take a look at https://gpt4all.io, their docs are pretty awesome
- Ask HN: Are you using a local LLM? If yes, what for?
I run one. I built an iMessage-like frontend to it using plain JS and a Python websocket backend. I mostly just use it for curiosity and playing with different prompts. I only have 16GB of RAM to dedicate to it, so I use an 8B parameter model which is enough for fun and chitchat, but I don't find it good enough to replace ChatGPT.
https://github.com/nomic-ai/gpt4all
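A hypothetical minimal version of the backend described above, pairing the websockets library with the gpt4all bindings (the model filename is an example):

```python
# Minimal websocket backend: each message from the JS frontend is treated
# as a prompt for a local gpt4all model. Assumes `pip install websockets
# gpt4all` (websockets >= 10.1 for single-argument handlers).
import asyncio

import websockets
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")  # example model name

async def chat(websocket):
    async for message in websocket:          # each frontend message is a prompt
        reply = model.generate(message, max_tokens=256)
        await websocket.send(reply)

async def main():
    async with websockets.serve(chat, "localhost", 8765):
        await asyncio.Future()               # serve forever

asyncio.run(main())
```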
LocalAI
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
- What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
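The drop-in part mostly comes down to changing the client's base URL; a minimal sketch with the openai Python client against a default LocalAI server on port 8080 (the model name is an example and must match one configured in LocalAI):

```python
from openai import OpenAI

# LocalAI ignores the API key, but the client requires one to be set.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="mistral-7b-instruct",  # example name; use whatever LocalAI serves
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```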
- "Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
- Local LLMs to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inference process to 2 or 3 threads (see the sketch after the model list below). That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
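LocalAI's thread count is configured server-side, but the thread-limiting idea itself can be sketched with llama-cpp-python (listed under alternatives below) and one of the GGUF models above; the model path is an example:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf",  # example path
    n_threads=2,   # cap inference at 2 CPU threads, leaving headroom
    n_ctx=2048,    # modest context window for older hardware
)
out = llm("Q: What is an iMac? A:", max_tokens=64)
print(out["choices"][0]["text"])
```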
- Retrieval Augmented Generation in Go
Neither of these really requires OpenAI. You can do it with locally-running models via something like https://github.com/mudler/LocalAI
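As an illustration of that claim, here is a minimal RAG loop in Python against a LocalAI server, assuming it has an embedding model and a chat model configured (both names below are examples):

```python
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
docs = ["LocalAI exposes an OpenAI-compatible API.",
        "GPT4All is a desktop app for local models."]

def embed(texts):
    # Uses LocalAI's OpenAI-compatible /v1/embeddings endpoint.
    resp = client.embeddings.create(model="bert-embeddings", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
question = "What API does LocalAI expose?"
q_vec = embed([question])[0]

# Cosine similarity, then stuff the best match into the prompt.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(scores.argmax())]
answer = client.chat.completions.create(
    model="mistral-7b-instruct",
    messages=[{"role": "user",
               "content": f"Context: {context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```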
What are some alternatives?
llama.cpp - LLM inference in C/C++
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama-cpp-python - Python bindings for llama.cpp
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.