| | flash-attention | whisper |
|---|---|---|
| Mentions | 26 | 344 |
| Stars | 10,888 | 60,617 |
| Growth | 5.7% | 3.1% |
| Activity | 9.4 | 6.4 |
| Latest commit | 8 days ago | 4 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flash-attention
-
How the Transformer Architecture Was Likely Discovered: A Step-by-Step Guide
If you're looking for an implementation, I highly recommend checking out flash-attention [https://github.com/Dao-AILab/flash-attention]. It's my go-to, and far better than anything we could whip up here using just PyTorch or TensorFlow.
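For reference, here's a minimal sketch of calling the library's fused kernel, assuming the flash-attn package is installed and a CUDA GPU with fp16/bf16 support is available (the shapes below are arbitrary):

```python
import torch
from flash_attn import flash_attn_func

# flash-attn expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on CUDA
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# causal=True applies the usual decoder-style masking
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```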
-
Interactive Coloring with ControlNet
* Even if I bought a 3090, I would have to get a computer to go with it, along with a PSU and some cooling. Don't know where to start with that.
[1] https://github.com/Dao-AILab/flash-attention/issues/190
-
Coding Self-Attention, Multi-Head Attention, Cross-Attention, Causal-Attention
I highly recommend using Tri's implementation: https://github.com/Dao-AILab/flash-attention. Rotary embeddings should be built in, and some group overseas even contributed ALiBi support.
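For comparison, here is a generic single-head causal self-attention in plain PyTorch: a sketch of what the fused kernel replaces, not code from the linked post or from flash-attention itself.

```python
import math
import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    """x: (seqlen, d_model); w_q/w_k/w_v: (d_model, d_head) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / math.sqrt(q.shape[-1])             # (seqlen, seqlen) score matrix
    mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))       # causal: no attending to the future
    return F.softmax(scores, dim=-1) @ v                   # (seqlen, d_head)
```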
-
PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants
Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542 (no updates for a while).
-
VLLM: 24x faster LLM serving than HuggingFace Transformers
I wonder how this compares to Flash Attention (https://github.com/HazyResearch/flash-attention), which is the other "memory aware" Attention project I'm aware of.
I guess Flash Attention is more about utilizing GPU SRAM correctly, whereas this is more about using the OS/CPU memory better?
-
Hacking Around ChatGPT’s Character Limits with the Code Interpreter
https://github.com/HazyResearch/flash-attention
- Flash Attention on Consumer
-
Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
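To make the "it's just KNN search" reading concrete, here's a toy top-k lookup over a long memory of cached key vectors (purely illustrative, not Unlimiformer's actual code):

```python
import torch

memory_len, d, topk = 100_000, 64, 16
keys = torch.randn(memory_len, d)   # cached key vectors for a long document
query = torch.randn(d)              # current query vector

scores = keys @ query               # one dot product per stored key
best_scores, best_idx = scores.topk(topk)
print(best_idx)                     # positions of the 16 most relevant tokens
```

Each query interacts only with its top-k retrieved tokens instead of all n, which is exactly what drops the explicit n² token-token interactions the comment is weighing against.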
-
Ask HN: Bypassing GPT-4 8k tokens limit
Longer sequence length in transformers is an active area of research (see e.g the great work from the Flash-attention team - https://github.com/HazyResearch/flash-attention), and I'm sure will improve things dramatically very soon.
-
Scaling Transformer to 1M tokens and beyond with RMT
Here's a list of tools for scaling up transformer context that have github repos:
* FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens (see the back-of-envelope memory estimate after this list). Code: https://github.com/HazyResearch/flash-attention
* Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing
* RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM
* RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
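As a rough illustration of why n² attention gets hard past the low tens of thousands of tokens, here's the memory needed for a single head's n × n score matrix in fp16 (FlashAttention avoids materializing this matrix, which is its main trick, but the compute is still quadratic):

```python
# memory for one head's n x n attention score matrix in fp16 (2 bytes per entry)
for n in (4_096, 32_768, 262_144, 1_000_000):
    gib = n * n * 2 / 2**30
    print(f"{n:>9} tokens -> {gib:10.2f} GiB")
```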
whisper
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, and Pillow
-
Why I Care Deeply About Web Accessibility And You Should Too
Let’s not talk about local models as the hardware requirements are way beyond most of these people’s reach. I have a MacBook Air with an M2 chip and 8GB of RAM and can hardly run Whisper locally, so I use this HuggingFace space.
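For what it's worth, the usual workaround on low-RAM machines is the smallest checkpoint. A minimal sketch, assuming the openai-whisper package and ffmpeg are installed ("audio.mp3" is a hypothetical file):

```python
import whisper

model = whisper.load_model("tiny")      # ~39M parameters, the lightest checkpoint
result = model.transcribe("audio.mp3")  # ffmpeg handles decoding under the hood
print(result["text"])
```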
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars so far. It allows you to record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It's also fully open source and comes equipped with authentication, storage, vector search, and action items, and is fully responsive on mobile for ease of use.
-
Ask HN: Can AI break a speech audio into individual words?
I found a pretty good discussion of the topic here:
https://github.com/openai/whisper/discussions/1243
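Whisper can emit word-level timestamps, which is one way to approach this. A minimal sketch using the transcribe option available in recent openai-whisper releases ("speech.wav" is a hypothetical file):

```python
import whisper

model = whisper.load_model("base")
result = model.transcribe("speech.wav", word_timestamps=True)
for segment in result["segments"]:
    for word in segment["words"]:
        print(f'{word["start"]:6.2f}s - {word["end"]:6.2f}s  {word["word"]}')
```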
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of language performance on their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Ask HN: Favorite Podcast Episodes of 2023?
I don't know how OP does it, but here's how I'd do it:
* Generate a transcript by running Whisper against the podcast audio file (a minimal sketch follows after this list): https://github.com/openai/whisper
* Upload transcript to ChatGPT and ask it to summarize.
* Automate all the above.
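A rough sketch of the first two steps, assuming the openai-whisper package is installed; "episode.mp3" is a hypothetical file, and the summarization step is shown as a prompt you could paste into ChatGPT or send to any LLM API:

```python
import whisper

model = whisper.load_model("small")
transcript = model.transcribe("episode.mp3")["text"]

prompt = "Summarize this podcast episode in a few bullet points:\n\n" + transcript
print(prompt)   # paste into ChatGPT, or wire this into an API call to automate the last step
```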
-
Need advice
Ahh, that makes sense. I've been building something like that, but only from other languages into English, using Whisper.
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT, so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
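Since the segments Whisper returns already carry start/end times, writing SRT by hand is only a few lines. A minimal sketch assuming the openai-whisper package ("video.mp4" is a hypothetical input); the whisper CLI can also emit SRT/VTT directly:

```python
import whisper

def srt_time(t):
    # format seconds as HH:MM:SS,mmm as required by SRT
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02}:{int(m):02}:{int(s):02},{int((s % 1) * 1000):03}"

result = whisper.load_model("base").transcribe("video.mp4")
with open("video.srt", "w") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n{seg['text'].strip()}\n\n")
```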
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
https://github.com/openai/whisper/discussions/264
What are some alternatives?
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
memory-efficient-attention-pytorch - Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
whisper.cpp - Port of OpenAI's Whisper model in C/C++
alpaca_lora_4bit
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.