llama_cpp.rb VS llama.cpp

Compare llama_cpp.rb vs llama.cpp and see how they differ.

llama_cpp.rb

llama_cpp provides Ruby bindings for llama.cpp (by yoshoku)
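For a sense of what the binding looks like in practice, here is a minimal sketch modeled on the gem's README from around the time of the posts below. Class and method names have shifted across releases (later versions split out a separate Model class), and the model path is a placeholder, so treat this as illustrative rather than current API documentation.

```ruby
require 'llama_cpp'

# Set up context parameters; a fixed seed makes sampling reproducible.
params = LLaMACpp::ContextParams.new
params.seed = 42

# Load a quantized GGML model from disk (path is a placeholder).
context = LLaMACpp::Context.new(model_path: '/path/to/ggml-model-q4_0.bin', params: params)

# High-level helper: tokenizes the prompt, runs inference, returns text.
puts LLaMACpp.generate(context, 'Hello, World.')
```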

llama.cpp

Port of Facebook's LLaMA model in C/C++ (by SlyEcho)
                llama_cpp.rb      llama.cpp
Mentions        2                 1
Stars           143               4
Growth          -                 -
Activity        9.6               9.4
Last commit     8 days ago        9 months ago
Language        C++               C
License         MIT License       MIT License
Mentions is the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars is the number of stars a project has on GitHub. Growth is month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

llama_cpp.rb

Posts with mentions or reviews of llama_cpp.rb. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-12.
  • Llama.cpp: Full CUDA GPU Acceleration
    14 projects | news.ycombinator.com | 12 Jun 2023
    Python sits on the C-glue segment of programming languages (where Perl, PHP, Ruby and Node are also notable members). Being a glue language means having APIs to a lot of external toolchains written not only in C/C++ but in many other compiled languages, plus APIs and system resources. Conda, virtualenv, etc. are godsend modules for making it all work, or even better, for freezing things once they all work, without resorting to Docker, VMs or shell scripts. It's meant for application and DevOps people who need to slap together, e.g., ML, NumPy, Elasticsearch, AWS APIs and REST endpoints and Get $hit Done.

    It's annoying to see these "gluey" languages compared unfavorably to the binary-compiled segment where the heavy lifting is done. Python and others exist to latch on and assimilate. Resistance is futile:

    https://pypi.org/project/pyllamacpp/

    https://www.npmjs.com/package/llama-node

    https://packagist.org/packages/kambo/llama-cpp-php

    https://github.com/yoshoku/llama_cpp.rb

  • Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
    8 projects | /r/LocalLLaMA | 16 May 2023
    Ruby: yoshoku/llama_cpp.rb
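As a concrete illustration of that suggestion: llama.cpp ships an example HTTP server (run as `./server -m <model>`), whose main endpoint is `/completion`. A minimal Ruby client could look like the sketch below; the port is the server's documented default, and the prompt and parameters are assumptions.

```ruby
require 'net/http'
require 'json'

# llama.cpp's example server listens on http://localhost:8080 by default.
uri = URI('http://localhost:8080/completion')
payload = {
  prompt: 'Building a website can be done in 10 simple steps:',
  n_predict: 128 # cap on the number of tokens to generate
}

res = Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
puts JSON.parse(res.body)['content'] # the generated continuation
```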

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-12.
  • Llama.cpp: Full CUDA GPU Acceleration
    14 projects | news.ycombinator.com | 12 Jun 2023
    llama.cpp runs with a speedup on AMD GPUs when compiled with `LLAMA_CLBLAST=1`, and there is also a HIPified fork [1] being worked on by a community contributor. The other week I was poking at how hard it would be to get an AMD card running with acceleration on Linux and was pleasantly surprised; it wasn't too bad: https://mostlyobvious.org/?link=/Reference%2FSoftware%2FGene...

    That being said, it's important to note that ROCm is Linux only. Not only that, but ROCm's GPU support has actually been decreasing over the past few years. The current list: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h... Previously (2022): https://docs.amd.com/bundle/Hardware_and_Software_Reference_...

    The ELI5 is that a few years back, AMD split their graphics (RDNA) and compute (CDNA) architectures. Nvidia does this too, but notably, and unlike Nvidia (a key to their success, IMO), AMD also decided they would simply not support any CUDA-parity compute features on Windows or on their non-"compute" cards. In practice, this means community/open-source developers will never own, tinker with, port to, or develop on AMD hardware, while on Nvidia you can start with a GTX/RTX card in your laptop and run the same code all the way up to an H100 or DGX.

    llama.cpp is a super-high-profile project with almost 200 contributors now, but AFAIK no contributors from AMD. If AMD doesn't have the manpower, IMO they should simply be sending free hardware to top open-source project/library developers (and on the software side, their #1 priority should be making sure every single current GPU they sell is at least "enabled", if not "supported", in ROCm, on Linux and Windows).

    [1] https://github.com/SlyEcho/llama.cpp/tree/hipblas

What are some alternatives?

When comparing llama_cpp.rb and llama.cpp you can also consider the following projects:

go-llama.cpp - llama.cpp Golang bindings

flake - A Nix flake for many AI projects

llama.cpp-dotnet - Minimal C# bindings for llama.cpp + .NET core library with API host/client.

whisper.cpp - Port of OpenAI's Whisper model in C/C++

LLamaSharp - A C#/.NET library to run LLM models (🦙LLaMA/LLaVA) on your local device efficiently.

lit-llama - Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.

llama.cpp - LLM inference in C/C++

llama-cpp.el - A client for llama-cpp server

TokenHawk - WebGPU LLM inference tuned by hand [Moved to: https://github.com/kayvr/token-hawk]

llama-go - Port of Facebook's LLaMA (Large Language Model Meta AI) in Golang with embedded C/C++