flash-attention vs XMem

| | flash-attention | XMem |
|---|---|---|
| Mentions | 27 | 11 |
| Stars | 15,142 | 1,799 |
| Growth | 4.9% | 1.2% |
| Activity | 9.2 | 4.1 |
| Last commit | 7 days ago | 2 months ago |
| Language | Python | Python |
| License | BSD 3-Clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flash-attention
-
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
1) Pretty much, it's mathematically equivalent. The only software issues are things like managing dependency versions and in-memory data formats, but Flash Attention 2 is already built into HuggingFace and other popular libraries (a minimal usage sketch follows this list). Flash Attention 3 probably will be soon, although it requires an H100 GPU to run.
2) Flash Attention 2 added support for GQA in past version updates:
https://github.com/Dao-AILab/flash-attention
3) They're comparing this implementation of Flash Attention (written in raw CUDA C++) to the Triton implementation of a similar fused-attention algorithm: https://triton-lang.org/main/getting-started/tutorials/06-fu...
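As a minimal sketch of what "built into HuggingFace" looks like in practice: recent versions of Transformers let you request the FlashAttention-2 backend when loading a model. The model name below is just a placeholder, and this assumes the flash-attn package plus a supported GPU are available.

```python
# Sketch: loading a HuggingFace model with the FlashAttention-2 backend.
# Assumes `pip install flash-attn` and a GPU/dtype that the kernels support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any FA2-supported model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # FlashAttention kernels need fp16/bf16
    attn_implementation="flash_attention_2",  # raises an error if flash-attn is missing
)
```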
-
How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
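For reference, a hedged sketch of calling the repo's fused kernel directly, following the input layout documented in its README (batch, seqlen, nheads, headdim tensors in fp16/bf16 on a CUDA device); treat it as illustrative rather than canonical.

```python
# Sketch: calling the flash-attn kernel directly on random tensors.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal (decoder-style) attention; the output has the same shape as q.
out = flash_attn_func(q, k, v, causal=True)
```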
-
Interactive Coloring with ControlNet
* Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.
[1] https://github.com/Dao-AILab/flash-attention/issues/190
-
Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
Highly recommend using Tri's implementation: https://github.com/Dao-AILab/flash-attention. Rotary embeddings should be built in, and a group overseas even contributed ALiBi support.
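A small illustrative sketch of that contributed ALiBi path, assuming a flash-attn release that exposes the alibi_slopes argument; the slope schedule below is the standard geometric one from the ALiBi paper, not anything specific to this repo.

```python
# Sketch: causal flash attention with per-head ALiBi slopes (assumed to require
# a flash-attn version that includes the contributed alibi_slopes argument).
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 1, 2048, 16, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.bfloat16, device="cuda")
k, v = torch.randn_like(q), torch.randn_like(q)

# One slope per head, following the geometric schedule 2^(-8*i/nheads).
slopes = torch.tensor(
    [2 ** (-8 * (i + 1) / nheads) for i in range(nheads)],
    dtype=torch.float32, device="cuda",
)

out = flash_attn_func(q, k, v, causal=True, alibi_slopes=slopes)
```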
-
PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542 (no updates for a while).
-
VLLM: 24x faster LLM serving than HuggingFace Transformers
I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" attention project I'm aware of.
I guess Flash Attention is more about using GPU SRAM effectively, whereas this is more about using OS/CPU memory better?
-
Hacking Around ChatGPT’s Character Limits with the Code Interpreter
https://github.com/HazyResearch/flash-attention
- Flash Attention on Consumer
-
Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
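For context on the quadratic cost mentioned above, here is a tiny sketch of why vanilla attention is O(n²): the score matrix materialized for token-token interactions is n×n. Nothing here is specific to any of the libraries linked above.

```python
# Sketch: vanilla scaled dot-product attention materializes an n x n matrix.
import torch

n, d = 4096, 64
q = torch.randn(n, d)
k = torch.randn(n, d)
v = torch.randn(n, d)

scores = (q @ k.T) / d ** 0.5            # shape (n, n): quadratic in sequence length
out = torch.softmax(scores, dim=-1) @ v  # shape (n, d)
```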
-
Ask HN: Bypassing GPT-4 8k tokens limit
Longer sequence length in transformers is an active area of research (see e.g. the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
XMem
-
[D] Which open source models can replicate wonder dynamics's drag'n'drop cg characters?
Use the Segment Anything Model (SAM) combined with a video inpainting model (E2FGVI) and XMem to cut out the live-action subject.
-
Track-Anything: a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything and XMem.
Nvm, just found the occlusion video on https://github.com/hkchengrex/XMem. Holy shit.
- XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
-
[D] Most important AI Papers this year so far in my opinion + Proto AGI speculation at the end
XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model (added because of the Atkinson-Shiffrin Memory Model). Paper: https://arxiv.org/abs/2207.07115 GitHub: https://github.com/hkchengrex/XMem
- [D] Most Popular AI Research July 2022 pt. 2 - Ranked Based On GitHub Stars
-
I trained a neural net to watch Super Smash Bros
Yeah, MiVOS would speed up your tagging a lot. I was also curious if you saw XMem, which just came out. I found that worked really well too.
-
University of Illinois Researchers Develop XMem, a Long-Term Video Object Segmentation Architecture Inspired by the Atkinson-Shiffrin Memory Model
Check out the paper and GitHub link.
-
[R] Unicorn 🦄: Towards Grand Unification of Object Tracking (Video Demo)
Have you checked XMem?
What are some alternatives?
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
NUWA - A unified 3D Transformer Pipeline for visual synthesis
RWKV-LM - RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN and transformer - great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding.
deeplab2 - DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
EfficientZero - Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.
alpaca_lora_4bit
NAFNet - The state-of-the-art image restoration model without nonlinear activation functions.