Suggest an alternative to flash-attention-minimal

Flash Attention in ~100 lines of CUDA (forward pass only)

Why do you think that https://github.com/halide/Halide is a good alternative to flash-attention-minimal?
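
Halide is a C++ DSL in which the algorithm (what gets computed) is written separately from the schedule (how it is tiled, vectorized, and parallelized), so an attention forward pass can be expressed once and then retuned without rewriting the kernel by hand. The sketch below only illustrates that split; it is not a port of flash-attention-minimal's tiled, online-softmax CUDA kernel. It writes plain softmax(QK^T / sqrt(d))V as a Halide pipeline, with an assumed (d, N) layout for Q, K, V and placeholder shapes and schedule directives.

```cpp
// Minimal sketch: plain attention forward pass as a Halide pipeline.
// Layout assumption: Q, K, V are (d, N) buffers, column i = token i.
#include "Halide.h"
#include <cmath>
#include <cstdio>

using namespace Halide;

int main() {
    const int d = 64;    // head dimension (assumed)
    const int N = 1024;  // sequence length (assumed)

    // Placeholder inputs so the pipeline can be realized end to end.
    Buffer<float> Q(d, N), K(d, N), V(d, N);
    Q.fill(0.01f);
    K.fill(0.02f);
    V.fill(0.03f);

    Var i("i"), j("j"), c("c");
    RDom rd(0, d, "rd");   // reduction over the head dimension
    RDom rn(0, N, "rn");   // reduction over keys / values

    // scores(j, i) = <Q_i, K_j> / sqrt(d)
    Func scores("scores");
    scores(j, i) = sum(Q(rd, i) * K(rd, j)) / std::sqrt(float(d));

    // Numerically stable softmax over j for each query i.
    Func row_max("row_max"), expv("expv"), row_sum("row_sum");
    row_max(i) = maximum(scores(rn, i));
    expv(j, i) = exp(scores(j, i) - row_max(i));
    row_sum(i) = sum(expv(rn, i));

    // out(c, i) = sum_j softmax(scores)(j, i) * V(c, j)
    Func out("out");
    out(c, i) = sum(expv(rn, i) * V(c, rn)) / row_sum(i);

    // The schedule lives apart from the algorithm above: tiling and
    // parallelism are changed here without touching the math.
    scores.compute_root().parallel(i);
    row_max.compute_root();
    expv.compute_root().parallel(i);
    row_sum.compute_root();
    out.compute_root().parallel(i).vectorize(c, 8);

    Buffer<float> result = out.realize({d, N});
    printf("out(0, 0) = %f\n", result(0, 0));
    return 0;
}
```

Whether Halide's scheduling primitives can recover the fused, tile-at-a-time softmax that makes Flash Attention memory-efficient is the open question a real comparison would need to answer; the hand-written CUDA in flash-attention-minimal encodes that fusion directly in about 100 lines.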
