local_llama VS llama.cpp

Compare local_llama vs llama.cpp and see what their differences are.

local_llama

This repo is to showcase how you can run a model locally and offline, free of OpenAI dependencies. (by jlonge4)
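The general idea local_llama showcases, running a quantized model entirely on your own machine with no OpenAI dependency, can be sketched with the llama-cpp-python bindings. This is a hedged illustration rather than local_llama's actual code; the model filename and settings below are assumptions.

```python
# Minimal sketch of fully local, offline inference via llama-cpp-python.
# Not taken from the local_llama repo; the model path and parameters are assumptions.
from llama_cpp import Llama

# Load a locally downloaded GGUF model (no network calls, no API key).
llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,      # context window size
    n_threads=8,     # CPU threads to use
    verbose=False,
)

# A single completion request, answered entirely on the local machine.
response = llm(
    "Summarize the advantages of running an LLM locally.",
    max_tokens=128,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```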

llama.cpp

LLM inference in C/C++ (by ggerganov)
                  local_llama           llama.cpp
Mentions          10                    772
Stars             179                   56,891
Growth            -                     -
Activity          6.6                   10.0
Latest commit     10 days ago           4 days ago
Language          Python                C++
License           Apache License 2.0    MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have a higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
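The exact activity formula is not published, but the idea of weighting recent commits more heavily can be illustrated with a small sketch. The exponential decay and 30-day half-life below are assumptions chosen for the example, not the site's actual metric.

```python
# Illustrative sketch of a recency-weighted activity score (unnormalized).
# The real metric is not documented; the exponential decay and 30-day
# half-life here are assumptions made for the example.
import math
from datetime import date

def activity_score(commit_dates, today=None, half_life_days=30):
    """Sum of per-commit weights, where a commit loses half its
    weight every `half_life_days` days."""
    today = today or date.today()
    score = 0.0
    for d in commit_dates:
        age = (today - d).days
        score += 0.5 ** (age / half_life_days)
    return score

# A project with mostly recent commits scores higher than one with the
# same number of commits spread over the past year.
recent = [date(2024, 4, 20), date(2024, 4, 15), date(2024, 4, 1)]
older = [date(2024, 1, 10), date(2023, 10, 5), date(2023, 6, 1)]

print(round(activity_score(recent, today=date(2024, 4, 22)), 2))  # ~2.42
print(round(activity_score(older, today=date(2024, 4, 22)), 2))   # ~0.1
```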

local_llama

Posts with mentions or reviews of local_llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-22.

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-21.

What are some alternatives?

When comparing local_llama and llama.cpp you can also consider the following projects:

h2ogpt - Private chat with a local GPT over documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks

gpt4all - gpt4all: run open-source LLMs anywhere

EmbedAI - An app to interact privately with your documents using the power of GPT, 100% privately, no data leaks

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

zep - Zep: Long-Term Memory for AI Assistants.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

chatdocs - Chat with your documents offline using AI.

ggml - Tensor library for machine learning

LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Generates text, audio, video, and images, and includes voice cloning capabilities.

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM