llama-node vs LLamaSharp

| | llama-node | LLamaSharp |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 849 | 2,043 |
| Stars growth (monthly) | 0.9% | 15.2% |
| Activity | 8.6 | 9.8 |
| Last commit | 10 months ago | about 17 hours ago |
| Language | Rust | C# |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama-node
-
Tell HN: Rust Is the Superglue
You can practice your Rust skills by writing performant and/or gluey extensions for higher-level languages such as Node.js (check out napi-rs) and Python, or by complementing JS in the browser if you target WebAssembly.
For instance, check out llama-node https://github.com/Atome-FE/llama-node for an involved Rust-based Node.js extension. Python has PyO3, a Rust-Python extension toolset: https://github.com/PyO3/pyo3.
They can help you leverage your Rust for writing cool new stuff.
-
Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
Node.js: hlhr202/llama-node
LLamaSharp
-
This is getting really complicated.
For example, I had my own task and needed another tool, so I searched and found what I needed: https://github.com/SciSharp/LLamaSharp. That in turn let me take the next step: https://github.com/Xsanf/LLaMa_Unity. I can already run an LLM in Unity, which means it can be used natively in games.
-
cannot for the life of me compile libllama.dll
I searched through GitHub and nothing new comes up. I wanted to run the model through the C# wrapper linked on LLamaSharp, which requires compiling llama.cpp and copying the libllama DLL into the C# project files. When I build llama.cpp with OpenBLAS, everything shows up fine on the command line. Just as the link suggests, I make sure to pass -DBUILD_SHARED_LIBS=ON to CMake. However, the output in the Visual Studio Developer Command Prompt ignores the setup for libllama.dll in CMakeLists.txt entirely; the only DLL that gets built is llama.dll. I know this is a fairly technical question, but does anyone know how to fix this?
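For what it's worth, getting llama.dll instead of libllama.dll is usually just MSVC's library-naming convention, not a failed build. A minimal sketch of the build, assuming a Visual Studio toolchain and the llama.cpp CMake options of that era (LLAMA_BLAS, LLAMA_BLAS_VENDOR — names may have changed in later versions):

```shell
# Hypothetical build sketch: configure llama.cpp as a shared library
# with OpenBLAS. Flag names are an assumption based on llama.cpp's
# CMake options around the time of these posts.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake .. -DBUILD_SHARED_LIBS=ON -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cmake --build . --config Release
# On Windows, MSVC emits llama.dll (no "lib" prefix); MinGW/GCC-style
# toolchains produce libllama.dll. Copying or renaming the output is
# often all the C# wrapper needs, e.g.:
# copy bin\Release\llama.dll <your C# project>\libllama.dll
```

In other words, the CMakeLists is not being ignored; the "lib" prefix is simply toolchain-dependent, so renaming the built llama.dll to libllama.dll (or pointing the wrapper at llama.dll) is a common workaround.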
-
Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
C#/.NET: SciSharp/LLamaSharp
What are some alternatives?
ChainFury - 🦋 Production grade chaining engine behind TuneChat. Self host today!
SillyTavern - LLM Frontend for Power Users.
text-embeddings-inference - A blazing fast inference solution for text embeddings models
llama.cpp-dotnet - Minimal C# bindings for llama.cpp + .NET core library with API host/client.
langchain-ask-pdf-local - An AI-app that allows you to upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
llama-cpp.el - A client for llama-cpp server
SciSharp-Stack-Examples - Practical examples written in SciSharp's machine learning libraries
gpt4all.unity - Bindings of gpt4all language models for Unity3d running on your local machine
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
langtorch - 🔥 Building composable LLM applications & workflows with Java.
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.