llama-node vs go-llama.cpp

| | llama-node | go-llama.cpp |
|---|---|---|
| Mentions | 2 | 4 |
| Stars | 849 | 585 |
| Growth | 0.9% | 9.6% |
| Activity | 8.6 | 7.9 |
| Last commit | 10 months ago | about 15 hours ago |
| Language | Rust | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama-node
- Tell HN: Rust Is the Superglue
You can practice your Rust skills by writing performant and/or gluey extensions for higher-level languages such as Node.js (check out napi-rs) and Python, or by complementing JS in the browser if you target WebAssembly.
For instance, check out llama-node https://github.com/Atome-FE/llama-node for an involved Rust-based Node.js extension. Python has PyO3, a Rust-Python extension toolset: https://github.com/PyO3/pyo3.
They can help you leverage your Rust for writing cool new stuff.
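As a concrete sketch of the napi-rs pattern this comment describes (the function name and values here are illustrative, following the shape of a minimal napi-rs extension rather than llama-node's actual code):

```rust
// Sketch of a napi-rs native addon. Assumes a Cargo.toml with
// napi = "2", napi-derive = "2", and crate-type = ["cdylib"].
use napi_derive::napi;

// The #[napi] macro generates the N-API glue so that, once the addon
// is built (e.g. with `napi build`), Node.js can call this function
// as `sumAsString`.
#[napi]
pub fn sum_as_string(a: i32, b: i32) -> String {
    (a + b).to_string()
}
```

From JavaScript you would then load the compiled addon and call it like any other module, e.g. `const { sumAsString } = require('./index.node')`.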
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
Node.js: hlhr202/llama-node
go-llama.cpp
- Local LLMs: Are there already any for <= 4 GB of VRAM?
- LocalAI v1.19.0 - CUDA GPU support!
Full CUDA GPU offload support (PR by mudler; thanks to chnyda for handing over GPU access, and to lu-zero for help with debugging).
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
Go: go-skynet/go-llama.cpp
- Redirecting Model Outputs from llama.cpp to a TXT File for Easier Tracking of Results?
I've had great success using go-llama.cpp to wrap llama.cpp in a much friendlier language. The install process is a bit clunky: Go does not like compiling submodules, so you need to use a replace directive in the go.mod file to point to a local copy of go-llama.cpp that you've already compiled manually.
What are some alternatives?
ChainFury - 🦋 Production-grade chaining engine behind TuneChat. Self-host today!
llama-cpp-python - Python bindings for llama.cpp
text-embeddings-inference - A blazing fast inference solution for text embeddings models
llama.cpp-dotnet - Minimal C# bindings for llama.cpp + .NET core library with API host/client.
LLamaSharp - A C#/.NET library to run LLM models (🦙LLaMA/LLaVA) on your local device efficiently.
llama_cpp.rb - llama_cpp provides Ruby bindings for llama.cpp
langchain-ask-pdf-local - An AI-app that allows you to upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.
llama-cpp.el - A client for llama-cpp server
LocalAI - :robot: The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It lets you generate text, audio, video and images, and also has voice cloning capabilities.
gpt4all.unity - Bindings of gpt4all language models for Unity3d running on your local machine
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.