llama_cpp.rb vs llama.cpp

| | llama_cpp.rb | llama.cpp |
|---|---|---|
| Mentions | 2 | 777 |
| Stars | 143 | 57,463 |
| Growth | - | - |
| Activity | 9.6 | 10.0 |
| Last Commit | 7 days ago | 7 days ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama_cpp.rb
- Llama.cpp: Full CUDA GPU Acceleration
Python sits on the C-glue segment of programming languages (where Perl, PHP, Ruby and Node are also notable members). Being a glue language means having APIs into a lot of external toolchains written not only in C/C++ but in many other compiled languages, plus system resources. Conda, virtualenv, etc. are godsend tools for making it all work, or even better, for freezing things once they all work, without resorting to Docker, VMs or shell scripts. It's meant for application and DevOps people who need to slap together, e.g., ML, NumPy, Elasticsearch, AWS APIs and REST endpoints and Get $hit Done.
It's annoying to see these "glueys" compared unfavorably to the binary-compiled segment where the heavy lifting is done. Python and others exist to latch on and assimilate. Resistance is futile (a sketch of what such a binding looks like follows the links):
https://pypi.org/project/pyllamacpp/
https://www.npmjs.com/package/llama-node
https://packagist.org/packages/kambo/llama-cpp-php
https://github.com/yoshoku/llama_cpp.rb
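To make the glue point concrete, here is a minimal sketch of driving llama.cpp from Python. Note it uses the llama-cpp-python package rather than the pyllamacpp package linked above (the APIs differ, but the idea is the same), and the model path is a placeholder for any local GGUF file:

```python
# Minimal sketch: driving llama.cpp from a Python binding.
# Uses the llama-cpp-python package (pip install llama-cpp-python);
# the model path below is a placeholder for any local GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf", n_ctx=2048)

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],  # stop generating when the model starts a new question
)
print(out["choices"][0]["text"])
```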
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
Ruby: yoshoku/llama_cpp.rb
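llama.cpp also ships its own minimal, GUI-less HTTP server (see the server link in the llama.cpp section below). A sketch of querying it, assuming a server is already running locally on the default port 8080:

```python
# Sketch: querying llama.cpp's built-in example server over plain HTTP.
# Assumes a server is already running locally on the default port 8080.
import json
import urllib.request

body = {"prompt": "Building a website can be done in", "n_predict": 64}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```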
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face (a sketch of fetching one programmatically follows).
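On that last point, here is a sketch of downloading a GGUF with the huggingface_hub package; the repo and file names are just one popular example, not an endorsement, and may have moved:

```python
# Sketch: fetching a GGUF from Hugging Face with the huggingface_hub
# package (pip install huggingface_hub). Repo/file names are examples.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-GGUF",   # any repo hosting .gguf files
    filename="llama-2-7b.Q4_K_M.gguf",    # pick a quantization level
)
print(path)  # local cache path; pass this to the server with -m
```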
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
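A back-of-the-envelope reading of those limits, taking the 1/2 and 3/4 fractions from the comment above (exact values vary by machine and OS version):

```python
# Rough sizing check against the default Metal limits described above
# (assumption: ~1/2 of RAM per process, ~3/4 of RAM for the GPU overall).
ram_gb = 64
per_process_gb = ram_gb / 2      # max one process can allocate on Metal
gpu_total_gb = ram_gb * 3 / 4    # max wired to the GPU across processes

for model_gb in (8, 24, 40):     # illustrative quantized model sizes
    print(f"{model_gb} GB model fits per-process limit:",
          model_gb <= per_process_gb)
```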
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
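As a rough illustration of the idea (not the linked example's actual code): predict several tokens cheaply, then verify them against the main model, keeping the longest matching prefix. Both "models" in this toy are hypothetical stand-ins, and a real implementation verifies all draft positions in one batched forward pass:

```python
# Toy draft-and-verify decoding loop. Both "models" below are hypothetical
# stand-ins (deterministic toy functions); in a real system the target
# model scores every draft position in one batched forward pass.

def target_next(ctx):
    # Stand-in for the big model's greedy next token.
    return (ctx[-1] * 7 + 3) % 100

def draft_next_k(ctx, k):
    # Stand-in for a cheap drafter (e.g. extra prediction heads) that
    # guesses the next k tokens in one shot; sometimes wrong on purpose.
    out, c = [], list(ctx)
    for _ in range(k):
        t = (c[-1] * 7 + 3) % 100 if c[-1] % 2 else (c[-1] + 1) % 100
        out.append(t)
        c.append(t)
    return out

def speculative_step(ctx, k=4):
    drafts = draft_next_k(ctx, k)
    accepted = []
    for tok in drafts:
        correct = target_next(ctx + accepted)
        accepted.append(correct)   # output always matches greedy decoding
        if correct != tok:         # first mismatch: stop accepting drafts
            break
    return accepted                # several tokens per target "pass" when drafts hit

ctx = [1]
for _ in range(6):
    ctx += speculative_step(ctx)
print(ctx)
```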
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
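For context before the tutorial itself: the core trick of LoRA fits in a few lines. Freeze the pretrained weight matrix and learn only a low-rank additive update. A minimal NumPy sketch of the forward pass, with illustrative dimensions and scaling:

```python
import numpy as np

# Minimal LoRA forward pass: keep the pretrained weight W frozen and
# learn only a rank-r update B @ A, so trainable parameters drop from
# d_out*d_in to r*(d_in + d_out).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16   # illustrative sizes

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection; zero
                                          # init so training starts at W

def lora_forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
print(lora_forward(x).shape)  # (512,)
```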
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
go-llama.cpp - LLama.cpp golang bindings
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama.cpp-dotnet - Minimal C# bindings for llama.cpp + .NET core library with API host/client.
gpt4all - gpt4all: run open-source LLMs anywhere
LLamaSharp - A C#/.NET library to run LLM models (🦙LLaMA/LLaVA) on your local device efficiently.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
flake - A Nix flake for many AI projects
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
llama-cpp.el - A client for llama-cpp server
ggml - Tensor library for machine learning
llama-go - Port of Facebook's LLaMA (Large Language Model Meta AI) in Golang with embedded C/C++
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM