flash-attention vs alpaca_lora_4bit

| | flash-attention | alpaca_lora_4bit |
|---|---|---|
| Mentions | 27 | 41 |
| Stars | 15,061 | 534 |
| Growth | 4.3% | - |
| Activity | 9.2 | 8.6 |
| Latest commit | 6 days ago | about 1 year ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flash-attention
-
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
1) Pretty much, it's mathematically equivalent. The only software issues are things like managing dependency versions and in-memory data formats, but Flash Attention 2 is already built into HuggingFace and other popular libraries (a usage sketch follows this list). Flash Attention 3 probably will be soon, although it requires an H100 GPU to run.
2) Flash Attention 2 added support for GQA in past version updates:
https://github.com/Dao-AILab/flash-attention
3) They're comparing this implementation of Flash Attention (which is written in raw CUDA C++) to the Triton implementation of a similar algorithm (which is written in Triton): https://triton-lang.org/main/getting-started/tutorials/06-fu...
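As a concrete illustration of point 1: Flash Attention 2 is exposed through the Hugging Face transformers API today. A minimal sketch (the model id is a placeholder, and it assumes the flash-attn package is installed on a supported CUDA GPU):

```python
# Minimal sketch: enabling Flash Attention 2 via Hugging Face transformers.
# Assumes flash-attn is installed and an Ampere-or-newer GPU is available;
# "meta-llama/Llama-2-7b-hf" is only a placeholder model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,                  # FA2 kernels need fp16/bf16
    attn_implementation="flash_attention_2",     # swaps in the fused kernels
).to("cuda")

inputs = tokenizer("Flash attention is", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```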
-
How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
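As a rough illustration of what that comment is pointing at: the fused kernel computes the same softmax(QK^T / sqrt(d)) V as a from-scratch PyTorch version, so the two can be sanity-checked against each other. A minimal sketch (shapes are illustrative; it assumes the flash-attn package and a CUDA GPU):

```python
# Minimal sketch: a hand-rolled PyTorch attention vs. the fused flash-attn kernel.
# Both compute softmax(Q K^T / sqrt(d)) V; flash-attn simply never materializes
# the full (seqlen x seqlen) score matrix in GPU HBM.
import math
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 512, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Naive reference: bring heads forward and build the full attention matrix.
qh, kh, vh = (t.transpose(1, 2) for t in (q, k, v))        # (b, h, s, d)
scores = qh @ kh.transpose(-2, -1) / math.sqrt(headdim)     # (b, h, s, s)
ref = (scores.softmax(dim=-1) @ vh).transpose(1, 2)         # back to (b, s, h, d)

# flash-attn expects (batch, seqlen, nheads, headdim) fp16/bf16 CUDA tensors.
out = flash_attn_func(q, k, v)

print((out - ref).abs().max())  # small, bf16-level numerical difference
```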
-
Interactive Coloring with ControlNet
* Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.
[1] https://github.com/Dao-AILab/flash-attention/issues/190
-
Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
Highly recommend using Tri's implementation: https://github.com/Dao-AILab/flash-attention. Rotary should be built in, and some group overseas even contributed ALiBi.
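A hedged sketch of what using the contributed ALiBi support looks like through the low-level API (this assumes a recent flash-attn 2.x release in which flash_attn_func accepts an alibi_slopes argument; shapes and slope values are illustrative):

```python
# Minimal sketch: causal attention with per-head ALiBi slopes via flash-attn.
# Assumes a recent flash-attn 2.x where flash_attn_func takes alibi_slopes;
# shapes are illustrative.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 1, 2048, 16, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# One slope per head, the geometric series from the ALiBi paper, in fp32.
slopes = torch.tensor(
    [2 ** (-8 * (i + 1) / nheads) for i in range(nheads)],
    device="cuda", dtype=torch.float32,
)

out = flash_attn_func(q, k, v, causal=True, alibi_slopes=slopes)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```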
-
PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542. No updates for a while.
-
VLLM: 24x faster LLM serving than HuggingFace Transformers
I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" Attention project I'm aware of.
I guess Flash Attention is more about utilizing GPU SRAM correctly, whereas this is more about using OS/CPU memory better?
-
Hacking Around ChatGPT’s Character Limits with the Code Interpreter
https://github.com/HazyResearch/flash-attention
- Flash Attention on Consumer
-
Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
-
Ask HN: Bypassing GPT-4 8k tokens limit
Longer sequence length in transformers is an active area of research (see e.g. the great work from the Flash-Attention team - https://github.com/HazyResearch/flash-attention), and I'm sure it will improve things dramatically very soon.
alpaca_lora_4bit
-
Open Inference Engine Comparison | Features and Functionality of TGI, vLLM, llama.cpp, and TensorRT-LLM
For training there is also https://github.com/johnsmith0031/alpaca_lora_4bit
-
Quantized 8k Context Base Models for 4-bit Fine Tuning
I've been trying to fine-tune an erotica model on some large-context chat history (reverse proxy logs) and a literotica-instruct dataset I made, with a max context of 8k. The large context size eats a lot of VRAM, so I've been trying to find the most efficient way to experiment, considering I'd like to do multiple runs to test some ideas. So I'm going to try https://github.com/johnsmith0031/alpaca_lora_4bit, which is supposed to train faster and use less memory than qlora.
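For context, the qlora-style setup being compared against usually looks roughly like this: load the base model in 4-bit with bitsandbytes, then attach LoRA adapters with peft. A minimal sketch (the model id and hyperparameters are placeholders, not recommendations):

```python
# Minimal sketch of a qlora-style baseline: 4-bit bitsandbytes quantization
# plus LoRA adapters via peft. Model id and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",              # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

alpaca_lora_4bit takes a different route, training LoRA on top of a GPTQ-quantized base (see the mentions below), which is where the claimed speed and memory savings would come from.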
-
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
-
Do we still need the monkey patch with the ExLlama loader for LoRA?
" Using LoRAs with GPTQ-for-LLaMa This requires using a monkey patch that is supported by this web UI: https://github.com/johnsmith0031/alpaca_lora_4bit"
-
Why isn’t QLoRA being used more widely for fine tuning models?
4-bit GPTQ LoRA training has been available since early April. I did not see any comparison to it in the QLoRA paper, or even a mention, which makes me think they were not aware it already existed.
- Fine-tuning with alpaca_lora_4bit on 8k context SuperHOT models
-
Any guide/intro to fine-tuning anywhere?
https://github.com/johnsmith0031/alpaca_lora_4bit is still the SOTA: faster than qlora, and it trains on a GPTQ base.
-
"Samantha-33B-SuperHOT-8K-GPTQ" now that's a great name for a true model.
I would also like to know how one would finetune this in 4-bit. I think one could take the 8K PEFT merged with the LLaMA weights, quantize it to 4-bit, and then train with https://github.com/johnsmith0031/alpaca_lora_4bit?
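The first step of that pipeline, folding the 8K PEFT adapter into the base LLaMA weights before quantizing, roughly looks like this with the peft library (a minimal sketch; the model id and adapter path are placeholders):

```python
# Minimal sketch: merge a LoRA/PEFT adapter (e.g. a SuperHOT 8K adapter) into
# the base LLaMA weights so the result can be quantized (e.g. with GPTQ) and
# then used as the base for 4-bit LoRA training. Paths are placeholders.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",                       # placeholder base weights
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/superhot-8k-adapter")  # placeholder adapter
merged = model.merge_and_unload()                 # folds LoRA deltas into the weights

merged.save_pretrained("llama-30b-superhot-8k-merged")
```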
-
Help with QLoRA
I was under the impression that you just git clone this repo into text-generation-webui/repositories (so you would have GPTQ_for_Llama and alpaca_lora_4bit in the folder), and then just load with the monkey patch. Is that not correct? I also tried downloading alpaca_lora_4bit on its own, git cloning text-gen-webui within it, installing requirements.txt for both, and running with the monkey patch. I was following the alpaca_lora_4bit sections "Text Generation Webui Monkey Patch" and "monkey patch inside webui".
-
Best uncensored model for an a6000
I don't have any familiarity with ESXi, but I can say that there are quite a few posts about people doing it on Proxmox. I currently have a machine with 2x 3090s passing through to VMs. When I'm training, I pass them both through to the same VM and can do 4-bit LoRA training on LLaMA 33B using https://github.com/johnsmith0031/alpaca_lora_4bit. Then, at inference time, I run a single card in a different VM, and have an extra card available for experimentation.
What are some alternatives?
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
safetensors - Simple, safe way to store and distribute tensors
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
StableLM - StableLM: Stability AI Language Models
RWKV-LM - RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN and transformer - great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
text-generation-webui-testing - A fork of textgen that still supports V1 GPTQ, 4-bit lora and other GPTQ models besides llama.
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.