llama.go vs llama.cpp
| | llama.go | llama.cpp |
|---|---|---|
| Mentions | 12 | 782 |
| Stars | 1,178 | 58,425 |
| Growth | - | - |
| Activity | 8.2 | 10.0 |
| Latest commit | 6 months ago | 4 days ago |
| Language | Go | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
Mentions of llama.go
- Understanding GPT Tokenizers
  You might reuse the simple LLaMA tokenizer right in your Go code; have a look here:
  https://github.com/gotzmann/llama.go/blob/8cc54ca81e6bfbce25...
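Since the linked file path is truncated above, here is a hedged illustration only: a minimal greedy longest-match tokenizer of the kind such a vocabulary lookup performs. The tiny vocab and function names are made up for this sketch and are not llama.go's actual API.

```go
// Toy greedy longest-match tokenizer: at each position, emit the id of
// the longest vocabulary piece that matches. Vocab and names are made up.
package main

import "fmt"

func tokenize(text string, vocab map[string]int) []int {
	var ids []int
	for i := 0; i < len(text); {
		best := ""
		for piece := range vocab {
			if len(piece) > len(best) && i+len(piece) <= len(text) && text[i:i+len(piece)] == piece {
				best = piece
			}
		}
		if best == "" {
			i++ // unknown byte: skip (a real tokenizer would emit a byte fallback)
			continue
		}
		ids = append(ids, vocab[best])
		i += len(best)
	}
	return ids
}

func main() {
	vocab := map[string]int{"he": 1, "hello": 2, "llo": 3, " ": 4, "wor": 5, "ld": 6}
	fmt.Println(tokenize("hello world", vocab)) // [2 4 5 6]
}
```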
- April 2023
  llama.go is like llama.cpp in pure Golang (https://github.com/gotzmann/llama.go)
- llama.go v1.4 - introduces REST API for your GPT services
- [Golang] Llama.go - Meta's Llama GPT inference in pure Golang
- LLaMA.go v1.4: now with scalable REST API exposing local GPT model
- Local LLaMA REST API with llama.go v1.4
- LLaMA.go v1.4 - introducing REST API for building your own GPT services
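Several of the titles above concern the v1.4 REST API, so here is a minimal, hedged client sketch in Go. The endpoint path and JSON fields below are assumptions made for illustration, not llama.go's documented interface; check its README for the real one.

```go
// Hedged sketch: POST a prompt to a hypothetical local llama.go REST
// endpoint and print the raw JSON reply. Path and field names are guesses.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"prompt": "Why is the sky blue?", // hypothetical request field
	})
	resp, err := http.Post("http://localhost:8080/jobs", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```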
-
MiniGPT-4
I'm developing framework [1] in Golang with this goal in mind :) It successfully runs relatively big LLM right now, and diffusion models will be the next step
[1] https://github.com/gotzmann/llama.go/
- gotzmann/llama.go: llama.go is like llama.cpp in pure Golang!
- Show HN: Llama.go – port of llama.cpp to pure Go
Mentions of llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
  If you can compile stuff, then looking at llama.cpp (which is what ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
  The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
  And you can search for any GGUF model on Hugging Face.
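For the server linked above (path truncated), a hedged Go client sketch: it assumes a llama.cpp example server already running locally on port 8080 and its /completion JSON endpoint with prompt/n_predict fields; verify the field names against your checkout's server README.

```go
// Call a locally running llama.cpp example server and print the generated
// text. Assumes the /completion endpoint shape described in its README.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"prompt":    "Building a website can be done in 10 simple steps:",
		"n_predict": 64,
	})
	resp, err := http.Post("http://127.0.0.1:8080/completion", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)
	var out struct {
		Content string `json:"content"`
	}
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	fmt.Println(out.Content)
}
```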
- Ask HN: Affordable hardware for running local large language models?
  Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There's a kernel hack to fix it, but that comes with the usual system-integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
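As a quick sanity check of those fractions, a small macOS-only Go sketch: it reads total RAM via the real sysctl key hw.memsize and prints the implied 1/2 and 3/4 budgets. The fractions come from the comment above, not from Apple documentation.

```go
// Read total RAM via `sysctl -n hw.memsize` (macOS) and print the
// per-process (1/2) and overall-GPU (3/4) limits quoted in the comment.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("sysctl", "-n", "hw.memsize").Output()
	if err != nil {
		panic(err)
	}
	total, err := strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
	if err != nil {
		panic(err)
	}
	gib := func(b uint64) float64 { return float64(b) / (1 << 30) }
	fmt.Printf("total RAM:           %.1f GiB\n", gib(total))
	fmt.Printf("~per-process Metal:  %.1f GiB (1/2)\n", gib(total/2))
	fmt.Printf("~overall GPU budget: %.1f GiB (3/4)\n", gib(total*3/4))
}
```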
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
  For anyone interested in exploring this, llama.cpp has an example implementation here:
  https://github.com/ggerganov/llama.cpp/tree/master/examples/...
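The example path above is truncated, so rather than guess at it, here is a toy Go sketch of the general draft-and-verify idea behind multi-token prediction: a cheap model proposes several tokens and the expensive model accepts the longest agreeing prefix. Both "models" below are stand-in stubs, not llama.cpp code.

```go
// Toy draft-and-verify decoding loop. The two model functions are trivial
// stubs standing in for a cheap draft model and an expensive verifier.
package main

import "fmt"

// draftNext proposes k cheap draft tokens following the current context.
func draftNext(ctx []int, k int) []int {
	out := make([]int, k)
	for i := range out {
		out[i] = (ctx[len(ctx)-1] + i + 1) % 100 // stub rule
	}
	return out
}

// verifyNext is the expensive "real" model's next-token choice.
func verifyNext(ctx []int) int {
	return (ctx[len(ctx)-1] + 1) % 100 // stub rule
}

func main() {
	ctx := []int{1}
	const k = 4
	for len(ctx) < 16 {
		draft := draftNext(ctx, k)
		// Accept the longest prefix of the draft that the verifier agrees
		// with; each accepted token advances decoding without its own step.
		accepted := 0
		for _, t := range draft {
			if verifyNext(ctx) != t {
				break
			}
			ctx = append(ctx, t)
			accepted++
		}
		if accepted == 0 {
			ctx = append(ctx, verifyNext(ctx)) // fall back to one normal step
		}
		fmt.Printf("accepted %d draft tokens, context now %v\n", accepted, ctx)
	}
}
```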
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
  Getting started with LLMs can be intimidating. In this tutorial, we show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
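For orientation before that tutorial, the core LoRA idea in one toy Go sketch: instead of updating a full weight matrix W, train two small low-rank factors A and B and merge W' = W + (alpha/r) * B*A. The shapes and numbers below are made up for illustration and are not from the tutorial.

```go
// Merge a LoRA-style low-rank update into a toy weight matrix:
// W' = W + (alpha/r) * B*A, where B is d x r and A is r x k.
package main

import "fmt"

func main() {
	const d, k, r = 3, 3, 1
	const alpha = 2.0
	W := [d][k]float64{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
	B := [d][r]float64{{0.1}, {0.2}, {0.3}} // "learned" low-rank factors
	A := [r][k]float64{{1, 1, 1}}

	for i := 0; i < d; i++ {
		for j := 0; j < k; j++ {
			var delta float64
			for t := 0; t < r; t++ {
				delta += B[i][t] * A[t][j]
			}
			W[i][j] += (alpha / r) * delta
		}
	}
	// Only d*r + r*k = 6 numbers were "trained" here, not the full d*k = 9.
	fmt.Println(W)
}
```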
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
  Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
Flowise - Drag & drop UI to build your customized LLM flow
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
gpt4all.unity - Bindings of gpt4all language models for Unity3d running on your local machine
gpt4all - gpt4all: run open-source LLMs anywhere
nn-zero-to-hero - Neural Networks: Zero to Hero
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
ggml - Tensor library for machine learning
langchain-alpaca - Run Alpaca LLM in LangChain
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM