flash-attention VS kernl

Compare flash-attention vs kernl and see how they differ.

flash-attention

Fast and memory-efficient exact attention (by Dao-AILab)
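
To make the description concrete, here is a minimal usage sketch of the library's functional interface. It assumes a CUDA GPU, fp16 tensors of shape (batch, seqlen, nheads, headdim), and the flash_attn_func entry point; exact argument names may vary between flash-attention releases.

```python
# Hedged sketch: calling FlashAttention's functional interface directly.
# Assumes a CUDA GPU and fp16 tensors of shape (batch, seqlen, nheads, headdim);
# the exact signature may differ between flash-attention releases.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact (not approximate) attention, computed without materializing the full
# seqlen x seqlen score matrix, which is where the memory savings come from.
out = flash_attn_func(q, k, v, causal=True)  # -> (batch, seqlen, nheads, headdim)
print(out.shape)
```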

kernl

Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. (by ELS-RD)
                flash-attention                          kernl
Mentions        25                                       8
Stars           10,263                                   1,446
Growth          8.8%                                     1.9%
Activity        9.4                                      1.5
Latest commit   3 days ago                               about 1 month ago
Language        Python                                   Jupyter Notebook
License         BSD 3-clause "New" or "Revised" License  Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

flash-attention

Posts with mentions or reviews of flash-attention. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.

kernl

Posts with mentions or reviews of kernl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-08.
  • [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
    7 projects | /r/MachineLearning | 8 Feb 2023
    I periodically check kernl.ai to see whether the documentation and tutorial sections have been expanded. My advice is to put some real effort and focus into examples and tutorials; that is key for an optimization/acceleration library. 10x-ing the users of a library like this is much more likely to come from spending 10 out of every 100 developer hours writing tutorials than from spending 8 or 9 of those hours on developing new features that only a small minority understand how to apply.
    7 projects | /r/MachineLearning | 8 Feb 2023
    Kernl repository: https://github.com/ELS-RD/kernl
  • [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
    3 projects | /r/MachineLearning | 22 Nov 2022
    FlashAttention + quantization has, to the best of my knowledge, not yet been explored, but I think it would be a great engineering direction. I would not expect to see it natively in PyTorch's BetterTransformer any time soon, though. /u/pommedeterresautee & folks at ELS-RD did awesome work releasing kernl, where custom implementations (through OpenAI Triton) could maybe easily live.
  • [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
    8 projects | /r/MachineLearning | 28 Oct 2022
    Check https://github.com/ELS-RD/kernl/blob/main/src/kernl/optimizer/linear.py for an example.
  • [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
    8 projects | /r/MachineLearning | 25 Oct 2022
    https://github.com/ELS-RD/kernl/issues/141 > Would it be possible to use kernl to speed up Stable Diffusion?
    8 projects | /r/MachineLearning | 25 Oct 2022
    Quite surprisingly, RMSNorm brings a huge, unexpected speedup on top of what we already had! If you want to follow this work: https://github.com/ELS-RD/kernl/pull/107
    8 projects | /r/MachineLearning | 25 Oct 2022
    Scripts are here: https://github.com/ELS-RD/kernl/tree/main/experimental/benchmarks
    8 projects | /r/MachineLearning | 25 Oct 2022
    We are releasing Kernl under the Apache 2 license, a library that makes PyTorch model inference significantly faster. With 1 line of code we applied the optimizations and made Bert up to 12X faster than the Hugging Face baseline. T5 is also covered in this first release (> 6X speedup in generation, and we are still only halfway through the optimizations!). This has been possible because we wrote custom GPU kernels in OpenAI's new programming language, Triton, and leveraged TorchDynamo.
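
Based on the release description just above, the following is a minimal sketch of what that one-line usage looks like, following the optimize_model entry point shown in kernl's README; the exact API and inference setup (fp16 autocast, CUDA, inference mode) may differ between releases.

```python
# Hedged sketch of kernl's "1 line of code" optimization applied to a Hugging Face Bert model.
# Assumes the optimize_model entry point from kernl's README; details may vary by version.
import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # the "1 line": swaps in custom Triton kernels via TorchDynamo

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Kernl makes PyTorch inference faster.", return_tensors="pt").to("cuda")

# Kernl targets GPU inference, so run under inference mode with fp16 autocast;
# expect the first calls to be slow while kernels are compiled and warmed up.
with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)
```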

What are some alternatives?

When comparing flash-attention and kernl, you can also consider the following projects:

xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), combining the best of RNN and transformer: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding.

alpaca_lora_4bit

XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model

RWKV-v2-RNN-Pile - RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

StableLM - StableLM: Stability AI Language Models

openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment

quality

diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch