Fast and memory-efficient exact attention
Why do you think https://github.com/johnsmith0031/alpaca_lora_4bit is a good alternative to flash-attention?