go-llama.cpp
llama.cpp Golang bindings (by go-skynet)
LocalAI
:robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities. (by mudler)
| | go-llama.cpp | LocalAI |
|---|---|---|
| Mentions | 4 | 83 |
| Stars | 577 | 20,346 |
| Growth | 8.3% | 10.5% |
| Activity | 7.9 | 9.9 |
| Last commit | 9 days ago | 3 days ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
go-llama.cpp
Posts with mentions or reviews of go-llama.cpp.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-19.
- Local LLMs: are there already any for <= 4 GB VRAM?
- LocalAI v1.19.0 - CUDA GPU support!
Full CUDA GPU offload support (PR by mudler. Thanks to chnyda for handing over GPU access, and to lu-zero for helping with debugging.)
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
Go: go-skynet/go-llama.cpp
- Redirecting Model Outputs from llama.cpp to a TXT File for Easier Tracking of Results?
I've had great success using go-llama.cpp to wrap llama.cpp in a much friendlier language. The install process is a bit clunky: Go does not like compiling submodules, so you need a `replace` directive in your go.mod file pointing to a local copy of go-llama.cpp that you've already compiled manually.
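The go.mod workaround described above can be sketched roughly like this; the module path and directory layout are hypothetical, and the snippet assumes you have already cloned and built go-llama.cpp alongside your project:

```
module example.com/llamademo // hypothetical module path

go 1.20

require github.com/go-skynet/go-llama.cpp v0.0.0

// Point the import at a local, pre-built checkout instead of the
// remote module, since its C++ submodule must be compiled manually.
replace github.com/go-skynet/go-llama.cpp => ../go-llama.cpp
```

Building the local copy first (cloning with submodules and running the project's make target to produce the binding library, per its README) is what the poster means by "compiled manually".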
LocalAI
Posts with mentions or reviews of LocalAI.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-01-19.
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
- What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
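Since LocalAI exposes an OpenAI-compatible HTTP API, "swapping" mostly means pointing your client at a different base URL. A minimal sketch, where the host, port, and model name are assumptions to adjust for your own deployment:

```python
import json
import urllib.request

# Base URL of a local LocalAI instance (host/port are assumptions).
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model, messages):
    """Build the same JSON body the OpenAI chat completions API expects."""
    return json.dumps({"model": model, "messages": messages}).encode("utf-8")

body = build_chat_request(
    "mistral-7b",  # hypothetical model name registered with LocalAI
    [{"role": "user", "content": "Hello!"}],
)
req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would return an OpenAI-shaped response
# once a LocalAI server is actually listening on BASE_URL.
```

Because the request and response shapes match OpenAI's, existing client code usually needs no other changes.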
- "Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already trending on GitHub.
- Local LLMs to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inferencing process to 2 or 3 threads. That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
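The thread-limiting suggestion above can be expressed in a LocalAI model definition. The field names below follow LocalAI's YAML model config format, but the file name and values are illustrative assumptions:

```yaml
# models/tinyllama.yaml (hypothetical file) - caps inference at 2 threads
name: tinyllama-chat
parameters:
  model: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
context_size: 1024
threads: 2  # keep 2-3 threads so other workloads stay responsive
```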
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF