| | exllamav2 | ollama |
|---|---|---|
| Mentions | 17 | 212 |
| Stars | 3,010 | 66,540 |
| Growth | - | 23.9% |
| Activity | 9.8 | 9.9 |
| Latest commit | 3 days ago | 3 days ago |
| Language | Python | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
exllamav2
- Running Llama3 Locally
- Mixture-of-Depths: Dynamically allocating compute in transformers
There are already some implementations out there which attempt to accomplish this!
Here's an example: https://github.com/silphendio/sliced_llama
A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...
Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275
And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...
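For reference, the core trick in those implementations is simply removing a contiguous block of decoder layers from a loaded model. Here is a minimal, hypothetical sketch of that idea using Hugging Face transformers (not exllamav2); the model name and layer indices are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Drop decoder layers 21-27 (illustrative), keeping the rest in order.
keep = [i for i in range(len(model.model.layers)) if not 21 <= i <= 27]
model.model.layers = torch.nn.ModuleList(model.model.layers[i] for i in keep)
model.config.num_hidden_layers = len(keep)

# Re-index the surviving layers so KV-cache bookkeeping stays consistent.
for new_idx, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = new_idx
```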
- What do you use to run your models?
Sorry, I'm somewhat familiar with this term (I've seen it as a model loader in Oobabooga), but I'm still not following the connection here. Are you saying I should be using this project in lieu of llama.cpp? Or that there is, perhaps, an exllamav2 "extension" or similar within llama.cpp that I can use?
- I just started having problems with the colab again. I get errors and it just stops. Help?
EDIT: I reported the bug on the exllamav2 GitHub. It's actually already fixed, just not in any currently built release.
- Yi-34B-200K works on a single 3090 with 47K context/4bpw
Install exllamav2 from Git with pip install git+https://github.com/turboderp/exllamav2.git. Make sure you have Flash Attention 2 as well.
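For anyone unfamiliar with the library, loading a model like this follows the pattern in exllamav2's bundled example scripts. A rough sketch - the model path is hypothetical, and the 47K figure comes from the comment above:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Yi-34B-200K-4.0bpw-exl2"  # hypothetical local path
config.prepare()
config.max_seq_len = 47 * 1024  # cap the 200K context to what fits in 24 GB

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # also works on a single GPU

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
print(generator.generate_simple("Hello, my name is", settings, num_tokens=64))
```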
- Tested: ExllamaV2's max context on 24gb with 70B low-bpw & speculative sampling performance
Recent releases of exllamav2 bring working FP8 cache support, which I've been very excited to test. This feature doubles the maximum context length you can run with your model, without any visible downsides.
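If anyone wants to try it, the change appears to be a one-line swap of the cache class; a hedged sketch, with a hypothetical model path:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit

config = ExLlamaV2Config()
config.model_dir = "/models/llama2-70b-2.4bpw-exl2"  # hypothetical path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # FP8 cache instead of ExLlamaV2Cache
model.load_autosplit(cache)
```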
- Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
Without batching, I was actually thinking that's kind of modest.
ExllamaV2 will get 48 tokens/s on a 4090, which is much slower (but cheaper) than an H100:
https://github.com/turboderp/exllamav2#performance
I didn't test CodeLlama, but the 3090 Ti figures are in the ballpark of my generation speed on a 3090.
- Guide for Llama2 70b model merging and exllama2 quantization
First, you need the convert.py script from turboderp's exllamav2 repo. You can read all about the convert.py arguments here.
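A typical invocation looks roughly like this (paths are placeholders; -i is the source FP16 model, -o a working directory, -cf the compiled output, and -b the target bits per weight - check the repo docs for the current flags):

python convert.py -i /path/to/llama2-70b-fp16 -o /path/to/workdir -cf /path/to/llama2-70b-4.0bpw-exl2 -b 4.0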
- LLM Falcon 180B Needs 720GB RAM to Run
> brute aggressive quantization
Cutting-edge quantization like ExLlamaV2's EXL2 format is far from brute force: https://github.com/turboderp/exllamav2#exl2-quantization
> The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. Moreover, it's possible to apply multiple quantization levels to each linear layer, producing something akin to sparse quantization wherein more important weights (columns) are quantized with more bits. The same remapping trick that lets ExLlama work efficiently with act-order models allows this mixing of formats to happen with little to no impact on performance. Parameter selection is done automatically by quantizing each matrix multiple times, measuring the quantization error (with respect to the chosen calibration data) for each of a number of possible settings, per layer. Finally, a combination is chosen that minimizes the maximum quantization error over the entire model while meeting a target average bitrate.
Llama.cpp is also working on a feature (speculative decoding) that lets a small model "guess" the output of a big model, which then "checks" it for correctness. This is mostly a performance feature, but you could also arrange it to accelerate a big model on a small GPU.
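A conceptual, greedy-only sketch of that draft/verify loop (draft_model and big_model are hypothetical next-token callables, not llama.cpp's actual API):

```python
def speculative_step(draft_model, big_model, ctx, k=4):
    # The small model cheaply drafts k candidate tokens.
    draft = []
    for _ in range(k):
        draft.append(draft_model(ctx + draft))
    # The big model verifies the draft; real implementations check all k
    # positions in a single batched forward pass, which is where the
    # speedup comes from.
    out = list(ctx)
    for tok in draft:
        expected = big_model(out)
        out.append(expected)  # always keep the big model's token
        if expected != tok:
            break  # first mismatch: discard the rest of the draft
    return out
```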
- 70B Llama 2 at 35 tokens/second on a 4090
ollama
- Ollama v0.1.34 Is Out
- Ask HN: What do you use local LLMs for?
- Basic internet search (I can start the ollama CLI faster than I can start a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
- FLaNK-AIM Weekly 06 May 2024
- Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and ollama.
- Ollama v0.1.33
- Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# get the llama3 model
ollama pull llama3
# install MLflow
pip install mlflow
- Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
- Setup Llama 3 using Ollama and Open-WebUI
curl -fsSL https://ollama.com/install.sh | sh
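For the Open-WebUI half, the usual route is running it as a container next to the local Ollama install - roughly the following, though check the Open WebUI docs for the current command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Then pull the model with ollama pull llama3 and open http://localhost:3000.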
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
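For what it's worth, streaming from ollama's REST API is only a few lines; a minimal sketch against the documented /api/generate endpoint (assumes ollama is serving on its default port 11434):

```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": True},
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)  # one JSON object per line
    print(chunk.get("response", ""), end="", flush=True)
    if chunk.get("done"):
        break
```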
- I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I’m a huge fan of open source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model and the smaller, self-hostable 8B Llama 3, I’ve actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that lets you use Ollama and other AI providers while keeping your chat history, prompts, and other data local on any computer you control.
What are some alternatives?
llama.cpp - LLM inference in C/C++
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
gpt4all - gpt4all: run open-source LLMs anywhere
SillyTavern - LLM Frontend for Power Users.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
ChatGPT-AutoExpert - 🚀🧠💬 Supercharged Custom Instructions for ChatGPT (non-coding) and ChatGPT Advanced Data Analysis (coding).
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
OmniQuant - [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
llama - Inference code for Llama models
BlockMerge_Gradient - Merge Transformers language models by use of gradient parameters.
LocalAI - 🤖 The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and has voice cloning capabilities.