exllama vs ggllm.cpp

| | exllama | ggllm.cpp |
|---|---|---|
| Mentions | 64 | 8 |
| Stars | 2,609 | 242 |
| Growth | - | - |
| Activity | 9.0 | 9.5 |
| Last commit | 7 months ago | 4 months ago |
| Language | Python | C |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
exllama
-
Any way to optimally use GPU for faster llama calls?
Not using exllama seems like a tremendous waste.
- ExLlama: Memory efficient way to run Llama
- Ask HN: Cheapest hardware to run Llama 2 70B
-
Llama Is Expensive
> We serve Llama on 2 80-GB A100 GPUs, as that is the minimum required to fit Llama in memory (with 16-bit precision)
Well there is your problem.
LLaMA quantized to 4 bits fits in 40GB. And it gets similar throughput when split across two consumer GPUs, which likely means better throughput on a single 40GB A100 (or a cheaper 48GB pro GPU).
https://github.com/turboderp/exllama#dual-gpu-results
Also, I'm not sure which model was tested, but Llama 70B chat should perform better than the base model if the prompting syntax is right; that syntax was only recently reverse engineered from Meta's demo implementation.
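The arithmetic behind the "fits in 40GB" claim can be sketched with a back-of-envelope calculation. Only the parameter count and bit width go in; context, KV cache, and buffer overheads are deliberately ignored here:

```python
# Back-of-envelope VRAM needed for model weights at a given precision.
# Weights only: real usage adds KV cache, activations, and CUDA buffers.
def weight_gib(n_params_billion, bits_per_weight):
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

fp16 = weight_gib(70, 16)  # ~130 GiB -> needs two 80GB A100s
q4 = weight_gib(70, 4)     # ~33 GiB  -> fits on one 40GB card
print(f"70B at fp16: {fp16:.1f} GiB, at 4-bit: {q4:.1f} GiB")
```

This is why 16-bit serving forces the dual-A100 setup the quoted article complains about, while 4-bit quantization brings the same model within reach of a single card.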
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/
This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend, supports GPU acceleration, and runs LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.
I just saw a slick new tool, https://ollama.ai/, that will install and run llama2-7b with a single `ollama run llama2` command. It has a very simple one-click installer for Apple Silicon Macs (you need to build from source for anything else at the moment). It looks like it only supports Llama models out of the box, but it also seems to use llama.cpp (via a Go adapter) on the backend. It seemed to be CPU-only on my MBA, but I didn't poke at it much and it's brand new, so we'll see.
For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama
For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/
I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/
-
GPT-4 Details Leaked
Deploying the 60B version is a challenge though and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama .
If you prefer to use an "instruct" model à la ChatGPT (i.e. that does not need few-shot learning to output good results) you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
-
Multi-GPU questions
Exllama, for example, uses buffers on each card that reduce the amount of VRAM available for the model and context; see https://github.com/turboderp/exllama/issues/121
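The effect is easy to see with illustrative numbers. The per-card overhead figure below is an assumption for illustration, not a measured exllama value:

```python
# Why splitting a model across two cards yields less usable memory than
# the raw total: each card pays its own fixed overhead.
def usable_vram(total_gib, overhead_gib):
    return total_gib - overhead_gib

cards = [24, 24]  # e.g. two 24GB 3090s
overhead = 1.5    # per-card CUDA context + scratch buffers (assumed figure)
usable = [usable_vram(c, overhead) for c in cards]
print(usable, sum(usable))  # 48 GiB raw, but only 45 GiB for weights + context
```

The overhead is paid once per GPU, so the more cards you split across, the bigger the gap between nominal and usable VRAM.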
-
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
For the inference step, this repo can help you use ExLlama to run inference on an evaluation dataset with the best throughput.
-
GPT-4 API general availability
In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now [1] (1 token ~= 0.75 words). It's quite zippy. llama.cpp now performs close to that on Nvidia GPUs (though they don't have a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.
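Using the ~0.75 words-per-token rule of thumb quoted above, those throughput figures translate to reading speed like this:

```python
# Convert token throughput to approximate words per second using the
# ~0.75 words-per-token rule of thumb.
def words_per_second(tokens_per_second, words_per_token=0.75):
    return tokens_per_second * words_per_token

print(words_per_second(140))  # 7B on a 3090/4090: ~105 words/s
print(words_per_second(40))   # 33B: ~30 words/s
```

Both rates are well past typical human reading speed, which is why it feels "zippy" in interactive use.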
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, I personally use GPT-4 mostly for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune leads the HumanEval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]). I've only just started playing around with it, since the replit model tooling is not as good as the llamas' (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
-
Local LLMs GPUs
That's a 16GB GPU; you should be able to fit a 13B model at 4-bit: https://github.com/turboderp/exllama
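The fit can be checked with quick arithmetic. The allowance for context and activations below is an assumption; real headroom depends on context length:

```python
# Quick sanity check: 13B parameters at 4 bits per weight, plus an
# assumed allowance for context and activations, against a 16 GiB card.
weights_gib = 13e9 * 4 / 8 / 1024**3  # ~6.1 GiB of quantized weights
allowance_gib = 4.0                   # context + activations (assumed)
fits = weights_gib + allowance_gib < 16
print(f"weights: {weights_gib:.1f} GiB, fits in 16GB: {fits}")
```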
ggllm.cpp
-
Is there a way to use a quantized Falcon 40B with SillyTavern (on Apple Silicon)
I'd like to try https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML with SillyTavern (running on Apple Silicon). The only way I've found to run Falcon 40B quantized on Apple Silicon is with https://github.com/cmp-nct/ggllm.cpp but I haven't figured out any way to get SillyTavern to use that as a local model. Does anyone know of a way to get this working?
-
How Is LLaMa.cpp Possible?
It doesn't support Falcon right now, but there's a fork that does (https://github.com/cmp-nct/ggllm.cpp/).
- Alfred-40B, an OSS RLHF version of Falcon40B
-
Falcon ggml/ggcc with langchain
To load Falcon models with the new ggcc file format (a format similar to ggml), I'm using this tool: https://github.com/cmp-nct/ggllm.cpp which is a fork of https://github.com/ggerganov/llama.cpp
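For reference, a typical build-and-run sequence looks roughly like this. This is a hedged sketch: the model filename is a placeholder, and the Make target assumes an upstream llama.cpp-style build, so check the repo's README for the exact steps:

```shell
# Sketch only: clone ggllm.cpp and run a ggcc-quantized Falcon model.
git clone https://github.com/cmp-nct/ggllm.cpp
cd ggllm.cpp
make falcon_main
# -m: path to a ggcc-format model file (placeholder name below)
# -p: prompt, -n: number of tokens to generate
./falcon_main -m ./models/falcon-40b-ggcc-q4_0.bin -p "Hello, Falcon!" -n 64
```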
-
Show HN: Danswer – open-source question answering across all your docs
The GGLLM fork seems to be the leading option for Falcon for now [1]
It comes with its own variant of the GGML format ("ggcv1"), but there are quants available on HF [2]
Although if you have a GPU, I'd go with the newly released AWQ quantization instead [3]; the performance is better.
(I may or may not have a mild local LLM addiction - and video cards cost more than drugs)
[1] https://github.com/cmp-nct/ggllm.cpp
[2] https://huggingface.co/TheBloke/falcon-7b-instruct-GGML
[3] https://huggingface.co/abhinavkulkarni/tiiuae-falcon-7b-inst...
-
ChatGPT loses users for first time, shaking faith in AI revolution
For base tooling, things like:
https://huggingface.co/ (finding models and downloading them)
https://github.com/ggerganov/llama.cpp (llama)
https://github.com/cmp-nct/ggllm.cpp (falcon)
For interactive work (art/chat/research/playing around), things like:
https://github.com/oobabooga/text-generation-webui/blob/main... (llama) (Also, the llama.cpp project just added a decent built-in chat server)
https://github.com/invoke-ai/InvokeAI (stable-diffusion)
Plus a bunch of hacked together scripts.
Some example models (I'm linking to quantized versions that someone else has made, but the tooling to create them from the published fp16 models is in the above repos):
https://huggingface.co/TheBloke/llama-65B-GGML
https://huggingface.co/TheBloke/falcon-40b-instruct-GPTQ
https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
etc. Hugging face has quite a number, although some require filling out forms for the base models for tuning/training.
- Falcon LLM – A 40B Model
-
Run machine learning on 7900XT/7900XTX using ROCm 5.5.0 on Ubuntu 22.04
I did another test running an LLM (gpt4all-falcon, https://huggingface.co/nomic-ai/gpt4all-falcon) quantized to Q5_0 and Q5_1 on an AMD GPU. I used this awesome project (https://github.com/cmp-nct/ggllm.cpp, forked from https://github.com/ggerganov/llama.cpp): I hipified the CUDA file into HIP code and made some modifications to it (PR: https://github.com/cmp-nct/ggllm.cpp/pull/3).
What are some alternatives?
llama.cpp - LLM inference in C/C++
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
llama2.cs - Inference Llama 2 in one file of pure C#
GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ
curated-transformers - 🤖 A PyTorch library of curated Transformer models and their composable components
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
KoboldAI
GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
text-generation-inference - Large Language Model Text Generation Inference
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.