Flash Attention in ~100 lines of CUDA (forward pass only)
Why do you think that https://github.com/halide/Halide is a good alternative to flash-attention-minimal?
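For context, here is a minimal NumPy sketch of the tiled, online-softmax forward pass that a "Flash Attention in ~100 lines of CUDA" kernel implements. This is an illustrative assumption, not the repo's actual CUDA code: the block size `Bc` and the single-head, single-batch shapes are simplifications, and the running-max/running-sum rescaling is the core trick the CUDA kernel performs per thread block.

```python
import numpy as np

def naive_attention(Q, K, V):
    # Reference: full softmax(Q K^T / sqrt(d)) V, materializing the N x N matrix.
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def flash_attention_forward(Q, K, V, Bc=4):
    # Tiled forward pass: stream over K/V in blocks of Bc rows, keeping a
    # running row-max m and running softmax denominator l per query, so the
    # full attention matrix is never materialized.
    N, d = Q.shape
    O = np.zeros((N, d))
    l = np.zeros(N)                 # running softmax normalizer
    m = np.full(N, -np.inf)         # running row max
    for j in range(0, K.shape[0], Bc):
        Kj, Vj = K[j:j + Bc], V[j:j + Bc]
        S = Q @ Kj.T / np.sqrt(d)                # scores for this K/V block
        m_new = np.maximum(m, S.max(axis=-1))    # updated row max
        P = np.exp(S - m_new[:, None])           # unnormalized block probs
        scale = np.exp(m - m_new)                # rescale earlier partial sums
        l = scale * l + P.sum(axis=-1)
        O = scale[:, None] * O + P @ Vj
        m = m_new
    return O / l[:, None]
```

The output matches the naive implementation exactly (up to floating-point error), which is the whole point: the tiling changes memory traffic, not the math.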