LLamaSharp vs go-llama.cpp

| | LLamaSharp | go-llama.cpp |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 2,015 | 577 |
| Growth | 14.0% | 8.3% |
| Activity | 9.8 | 7.9 |
| Latest commit | 3 days ago | 9 days ago |
| Language | C# | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLamaSharp
- This is getting really complicated.
For example, I have my own task and need another tool, so I search and find what I need: https://github.com/SciSharp/LLamaSharp, which lets me take the next step with https://github.com/Xsanf/LLaMa_Unity. I can already run an LLM in Unity, which opens up the possibility of using it natively in games.
- cannot for the life of me compile libllama.dll
I searched through GitHub and nothing new comes up. I wanted to run the model through the C# wrapper linked on LLamaSharp, which requires compiling llama.cpp and copying the libllama DLL into the C# project files. When I build llama.cpp with OpenBLAS, everything looks fine on the command line, and as the instructions suggest I make sure to set -DBUILD_SHARED_LIBS=ON in CMake. However, the build from the Visual Studio Developer Command Prompt ignores the libllama.dll setup in CMakeLists.txt entirely; the only DLL it produces is llama.dll. I know this is a fairly technical question, but does anyone know how to fix it?
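For context, here is a minimal sketch of the shared-library build being described, assuming a llama.cpp checkout from around that time; the BLAS flags and the final rename step are assumptions about what the C# wrapper expects, not details taken from the post itself:

```
REM Configure and build llama.cpp as a shared library with OpenBLAS,
REM from a Visual Studio Developer Command Prompt inside a fresh build\ dir.
cmake .. -DBUILD_SHARED_LIBS=ON -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cmake --build . --config Release

REM On Windows the shared library is typically emitted as llama.dll (often under
REM bin\Release\), while LLamaSharp loads it as libllama.dll, so copy it under
REM that name into the C# project (the destination path here is hypothetical).
copy bin\Release\llama.dll ..\..\MyCSharpProject\libllama.dll
```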
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
C#/.NET: SciSharp/LLamaSharp
go-llama.cpp
- Local LLMs: are there already any for <= 4 GB of VRAM?
- LocalAI v1.19.0 - CUDA GPU support!
Full CUDA GPU offload support (PR by mudler; thanks to chnyda for handing over GPU access and to lu-zero for helping with debugging).
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
Go: go-skynet/go-llama.cpp
- Redirecting Model Outputs from llama.cpp to a TXT File for Easier Tracking of Results?
I've had great success using go-llama.cpp to wrap llama.cpp in a much friendlier language. The install process is a bit clunky: Go does not like compiling submodules, so you need to use a replace directive in the go.mod file to point at a local copy of go-llama.cpp that you've already compiled manually, as sketched below.
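A minimal sketch of that go.mod arrangement, assuming go-llama.cpp has already been cloned with its submodules and built locally (for example with its make libbinding.a target, per the project README); the module path and the ../go-llama.cpp location are hypothetical:

```
// go.mod of a project consuming the locally built go-llama.cpp
// ("example.com/llamademo" and the relative path are placeholders).
module example.com/llamademo

go 1.20

require github.com/go-skynet/go-llama.cpp v0.0.0-00010101000000-000000000000

// Point the import at the manually compiled checkout instead of letting
// the Go toolchain fetch and build it, since the bundled llama.cpp
// submodule has to be compiled separately first.
replace github.com/go-skynet/go-llama.cpp => ../go-llama.cpp
```

With the directory replacement in place, go build links against the binding compiled locally rather than trying to build the submodule itself.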
What are some alternatives?
SillyTavern - LLM Frontend for Power Users.
llama-cpp-python - Python bindings for llama.cpp
llama.cpp-dotnet - Minimal C# bindings for llama.cpp + .NET core library with API host/client.
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
llama_cpp.rb - llama_cpp provides Ruby bindings for llama.cpp
SciSharp-Stack-Examples - Practical examples written in SciSharp's machine learning libraries
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers and many other model architectures, and can generate text, audio, video and images, with voice cloning capabilities.
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
llama-node - Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp and rwkv.cpp; works locally on your laptop CPU and supports llama/alpaca/gpt4all/vicuna/rwkv models.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.