XMem VS flash-attention

Compare XMem vs flash-attention and see how they differ.

XMem

[ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model (by hkchengrex)
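
XMem's defining idea is a memory hierarchy inspired by the Atkinson-Shiffrin model: a sensory store refreshed every frame, a bounded working memory of recent frames, and a compact long-term memory consolidated from evicted working-memory entries. The sketch below illustrates that hierarchy in plain Python; all names are hypothetical and this is not the repository's actual API.

```python
# Illustrative sketch of XMem's three-store memory hierarchy
# (hypothetical names; not the repository's actual API).
from collections import deque

class AtkinsonShiffrinMemory:
    """Three feature stores, updated at different timescales."""

    def __init__(self, working_capacity=5, longterm_capacity=64):
        self.sensory = None                  # refreshed every frame
        self.working = deque()               # recent key/value features
        self.working_capacity = working_capacity
        self.longterm = []                   # compact summaries
        self.longterm_capacity = longterm_capacity

    def update(self, frame_features, frame_index, mem_every=5):
        # Sensory memory: overwritten each frame (fast-changing cues).
        self.sensory = frame_features
        # Working memory: a bounded window of every r-th frame.
        if frame_index % mem_every == 0:
            self.working.append(frame_features)
            if len(self.working) > self.working_capacity:
                # Consolidate the oldest entry into long-term memory
                # instead of discarding it outright.
                self._consolidate(self.working.popleft())

    def _consolidate(self, features):
        # XMem summarizes evicted features into a small set of prototypes;
        # here we simply cap the store so memory stays bounded over long videos.
        self.longterm.append(features)
        if len(self.longterm) > self.longterm_capacity:
            self.longterm.pop(0)
```

This bounded consolidation is what lets XMem segment objects over very long videos without memory growing linearly with video length.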

flash-attention

Fast and memory-efficient exact attention (by HazyResearch)
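
The key point is that flash-attention computes exact (not approximate) attention while avoiding materializing the full seq_len x seq_len score matrix. As a hedged illustration of the same idea, PyTorch 2.0's built-in scaled_dot_product_attention can dispatch to a FlashAttention-style fused kernel on supported GPUs; this is not this repository's own API, just a sketch of the technique it popularized.

```python
# Exact attention via a fused kernel that never materializes the full
# (seq_len x seq_len) score matrix. Uses PyTorch >= 2.0's built-in SDPA,
# which can dispatch to a FlashAttention-style backend on supported GPUs.
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

batch, heads, seq_len, head_dim = 2, 8, 1024, 64
q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Output is numerically exact attention; the fused path needs O(seq_len)
# extra memory rather than O(seq_len^2) for an explicit score matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```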
                 XMem                                    flash-attention
Mentions         10                                      21
Stars            1,171                                   3,195
Growth           -                                       31.3%
Activity         8.2                                     7.4
Latest commit    9 days ago                              9 days ago
Language         Python                                  Python
License          GNU General Public License v3.0 only    BSD 3-clause "New" or "Revised" License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

XMem

Posts with mentions or reviews of XMem. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-25.

flash-attention

Posts with mentions or reviews of flash-attention. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-05.

What are some alternatives?

When comparing XMem and flash-attention, you can also consider the following projects:

memory-efficient-attention-pytorch - Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (a minimal sketch of the chunking trick appears after this list)

TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.

xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

alpaca_lora_4bit

yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors

StableLM - StableLM: Stability AI Language Models

quality

deeplab2 - DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.

RWKV-v2-RNN-Pile - RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

CodeRL - This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (NeurIPS22).
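
As referenced in the memory-efficient-attention-pytorch entry above, the O(n²)-memory term in standard attention can be removed by processing keys in chunks with a running, numerically stabilized softmax. The sketch below is a minimal single-head, unmasked illustration of that trick, not code from any of the listed repositories.

```python
# Minimal sketch of the chunked, exact attention from
# "Self-attention Does Not Need O(n^2) Memory": keys are processed in
# chunks with a running max and denominator, so peak extra memory is
# O(n * chunk_size) instead of O(n^2). Single head, no masking.
import torch

def chunked_attention(q, k, v, chunk_size=128):
    # q: (n, d); k, v: (m, d). Never builds the full (n, m) score matrix.
    scale = q.shape[-1] ** -0.5
    acc = torch.zeros_like(q)                       # running weighted sum of v
    denom = torch.zeros(q.shape[0], 1)              # running softmax denominator
    running_max = torch.full((q.shape[0], 1), float("-inf"))
    for start in range(0, k.shape[0], chunk_size):
        k_c = k[start:start + chunk_size]
        v_c = v[start:start + chunk_size]
        scores = (q @ k_c.T) * scale                # only (n, chunk_size)
        chunk_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(running_max, chunk_max)
        # Rescale previous partial results to the new max before adding.
        correction = torch.exp(running_max - new_max)
        weights = torch.exp(scores - new_max)
        acc = acc * correction + weights @ v_c
        denom = denom * correction + weights.sum(dim=-1, keepdim=True)
        running_max = new_max
    return acc / denom

q, k, v = (torch.randn(512, 64) for _ in range(3))
out = chunked_attention(q, k, v)
# Matches the naive implementation up to floating-point error:
ref = torch.softmax((q @ k.T) * (64 ** -0.5), dim=-1) @ v
print(torch.allclose(out, ref, atol=1e-4))
```

flash-attention fuses this same recurrence into a single GPU kernel, which is where its speed advantage over a pure-PyTorch chunked loop comes from.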