streaming-llm vs whisper.cpp
| | streaming-llm | whisper.cpp |
|---|---|---|
| Mentions | 11 | 187 |
| Stars | 6,255 | 31,817 |
| Growth | 2.9% | - |
| Activity | 7.2 | 9.8 |
| Last commit | 2 months ago | 5 days ago |
| Language | Python | C |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
streaming-llm
-
[D] In decoder models, if later tokens attend to early tokens but early tokens don't attend to later tokens, what stops the influence of the early tokens from growing with each layer?
Just quickly glanced through the question, but you might be interested in the attention sink, for example, where they use the fact that earlier tokens are overly attended to in general. Paper: https://arxiv.org/abs/2309.17453
-
[D] Why are decoder only models used for autoregressive generation instead of encoder-only models? What value is the causal mask if the new token doesn't exist yet?
That is a great question. I wish I had a mathematical explanation for it, but I can only provide some intuitive "yeah, but then again"... Fwiw, there was a paper recently that indeed showed that the first few tokens of any sequence, starting with the special '[START]' token, do hold special information (they call it the Attention Sink) compared to all other tokens. Here is a link to that paper: https://arxiv.org/abs/2309.17453
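The sink effect is easy to see empirically. Below is a quick sketch of my own (not from the paper) that prints how much attention mass each GPT-2 layer puts on the very first token; the model and prompt are arbitrary choices, and it assumes the Hugging Face transformers and torch packages are installed.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is a per-layer tuple of (batch, heads, query, key) tensors.
for i, attn in enumerate(out.attentions):
    # Average attention that all later query positions pay to key position 0.
    to_first = attn[0, :, 1:, 0].mean().item()
    print(f"layer {i:2d}: mean attention to token 0 = {to_first:.3f}")
```

If the paper's observation holds, the first position should soak up far more than a uniform 1/seq_len share of attention in most layers, which is the disproportionate attention both comments refer to.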
-
Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
Oh yes, that's absolutely true - faster is better for everyone. It's just that this particular breakpoint would put realtime transcription on a $17 device with an amazing support ecosystem. It's wild.
That being said, even with this distillation there's still the aspect that Whisper isn't really designed for streaming. It's fairly simplistic and always deals with 30 second windows. I was expecting there to have been some sort of useful transform you could do to the model to avoid quite so much reprocessing per frame, but other than https://github.com/mit-han-lab/streaming-llm (which I'm not even sure directly helps) I haven't noticed anything out there.
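To make the reprocessing point concrete, here is a rough, hypothetical sketch (not whisper.cpp code) of what a naive streaming loop over Whisper's fixed 30-second window looks like; `transcribe` is a stand-in for any full-window Whisper call.

```python
from collections import deque

SAMPLE_RATE = 16_000
WINDOW_SEC = 30
STEP_SEC = 1  # new audio arriving per iteration

def transcribe(samples):
    """Hypothetical stand-in for a full 30-second-window Whisper call."""
    ...

# Ring buffer holding the most recent 30 s of samples.
window = deque(maxlen=WINDOW_SEC * SAMPLE_RATE)

def on_new_chunk(chunk):
    window.extend(chunk)  # oldest samples fall off automatically
    # The entire window is re-transcribed even though only STEP_SEC seconds
    # of it are new -- roughly WINDOW_SEC / STEP_SEC redundant compute.
    return transcribe(list(window))
```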
-
Nvidia Trains LLM on Chip Design
See https://github.com/mit-han-lab/streaming-llm and others. There's good reason to believe that attention networks learn how to update their own weights based on their input (I forget the paper). The attention mechanism can act like a delta to update weights as the data propagates through the layers. The issue is getting the token embeddings to be more than just the 50k or so that we use for the English language so you can explore the full space, which is what the attention sink mechanism is trying to do.
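For what the "attention as a weight update" intuition means mechanically, here is a loose numpy illustration of my own (not from any of the linked projects): for a given input, a self-attention layer applies a matrix that is itself computed from the data, so it behaves like an input-dependent set of weights. Shapes and initialization here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                                   # tokens, model dim
X = rng.normal(size=(n, d))                   # token representations
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)
scores[np.triu(np.ones((n, n), dtype=bool), k=1)] = -np.inf  # causal mask

A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax

# A acts like an input-dependent "weight matrix" applied to the value stream:
Y = A @ V
print(A.shape, Y.shape)  # (5, 5) (5, 8)
```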
-
LLMs for the infinite input lengths are here!
Research Paper: https://arxiv.org/pdf/2309.17453.pdf
Code: https://github.com/mit-han-lab/streaming-llm
- GitHub - mit-han-lab/streaming-llm: Efficient Streaming Language Models with Attention Sinks
-
Streaming LLM – No limit on context length for your favourite LLM
The authors just uploaded a FAQ section, which may clarify some of the confusion: https://github.com/mit-han-lab/streaming-llm/blob/main/READM...
-
StreamingLLM – a simple and efficient framework that enables LLMs to handle unlimited texts without fine-tuning
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided in the link.
- StreamingLLM: Efficient streaming technique enables infinite sequence lengths
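The cache policy the abstract describes reduces to a simple rule: never evict the first few "sink" tokens, keep a sliding window of recent tokens, and drop everything in between. Here is a minimal sketch of that rule, my paraphrase rather than the repo's actual API:

```python
class SinkWindowCache:
    """Keep n_sink initial KV entries plus a recent window; evict the middle."""

    def __init__(self, n_sink=4, window=1020):
        self.n_sink = n_sink
        self.window = window
        self.entries = []  # one (key, value) pair per cached token

    def append(self, kv):
        self.entries.append(kv)
        overflow = len(self.entries) - (self.n_sink + self.window)
        if overflow > 0:
            # Drop the oldest non-sink entries; sink tokens are never evicted.
            del self.entries[self.n_sink : self.n_sink + overflow]

cache = SinkWindowCache(n_sink=4, window=8)
for t in range(20):
    cache.append((f"k{t}", f"v{t}"))
print([k for k, _ in cache.entries])
# ['k0'..'k3', 'k12'..'k19']: the sinks plus the last 8 tokens
```

StreamingLLM also reassigns positions within the cache rather than keeping the original token positions; that detail is omitted here.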
whisper.cpp
-
Show HN: I created an automatic subtitling app to boost short videos
whisper.cpp [1] has a karaoke example that uses ffmpeg's drawtext filter to display rudimentary karaoke-like captions. It also supports diarisation. Perhaps it could be a starting point to create a better script that does what you need.
--
1: https://github.com/ggerganov/whisper.cpp/blob/master/README....
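If you want to drive that karaoke example from a script, something like the following should work; if I'm reading the whisper.cpp README correctly, the `-owts` flag makes the `main` binary emit a `.wts` shell script that renders the captions with ffmpeg's drawtext filter. The paths, model file, and binary name here are assumptions about a default build.

```python
import subprocess

audio = "samples/jfk.wav"

# Transcribe and emit the karaoke helper script (samples/jfk.wav.wts).
subprocess.run(
    ["./main", "-m", "models/ggml-base.en.bin", "-f", audio, "-owts"],
    check=True,
)

# Run the generated script to render the video (requires ffmpeg on PATH).
subprocess.run(["bash", f"{audio}.wts"], check=True)
```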
- LLaMA Now Goes Faster on CPUs
-
LLMs on your local Computer (Part 1)
The ggml library is one of the first libraries for local LLM inference. It's a pure C library that converts models to run on several devices, including desktops, laptops, and even mobile devices. It can therefore also be considered a tinkering tool for trying out new optimizations that are then incorporated into downstream projects. The library is at the heart of several other projects, powering LLM inference on desktops and even mobile phones. Subprojects for running specific LLMs or LLM families exist, such as whisper.cpp.
-
Voxos.ai โ An Open-Source Desktop Voice Assistant
I'm not sure if it is _fully_ openai compatible, but whispercpp has a server bundled that says it is "OAI-like": https://github.com/ggerganov/whisper.cpp/tree/master/example...
I don't have any direct experience with it... I've only played around with whisper locally, using scripts.
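For anyone wanting to poke at that bundled server, here is a hedged sketch of a request against it; the default port, the `/inference` endpoint, and the form fields follow my reading of the server example's README and may have changed since.

```python
import requests

with open("samples/jfk.wav", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8080/inference",  # server's default address
        files={"file": f},                  # multipart audio upload
        data={"temperature": "0.0", "response_format": "json"},
        timeout=300,
    )
resp.raise_for_status()
print(resp.json().get("text", ""))
```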
-
Jarvis: A Voice Virtual Assistant in Python (OpenAI, ElevenLabs, Deepgram)
Unless I'm misunderstanding, `whisper.cpp` seems to support streaming, and the repository includes a native example[0] and a WASM example[1] with a demo site[2].
[0]: https://github.com/ggerganov/whisper.cpp/tree/master/example...
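For reference, launching the native example at [0] looks roughly like this; the `--step`/`--length` flags follow the example's README (inference interval and audio context window, both in milliseconds), and the binary path and model file assume a default build with the base.en model downloaded.

```python
import subprocess

# Real-time microphone transcription via the whisper.cpp stream example.
subprocess.run([
    "./stream",
    "-m", "models/ggml-base.en.bin",
    "-t", "8",           # CPU threads
    "--step", "500",     # run inference on every 500 ms of new audio
    "--length", "5000",  # keep a 5 s rolling audio context
], check=True)
```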
- Wchess
-
I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now.
Usage 1: Good for transcribing audio. An example use case could be summarizing YouTube videos or long courses. Usage 2: You talk by voice to your AI, which responds with text (later with audio too). - https://github.com/ggerganov/whisper.cpp
-
Scrybble is the ReMarkable highlights to Obsidian exporter I have been looking for
whisper.cpp (offline speech-to-text transcription, models trained by OpenAI, CLI based, browser based)
- Whisper.wasm
-
Whisper C++ not working for me. Anyone else?
Has anyone played around with Whisper C++ for Swift? I'm hitting a snag even on the demo. I've downloaded the GitHub repo and everything matches up with this video [ https://youtu.be/b10OHCDHDQ4 ], but when he hits the transcribe button, it actually prints out the captioning. When I do it, it skips that part and just says "Done...". It does everything else - plays the audio, says it's transcribing - it just doesn't show me the transcription, and it's not in the debug window either. The demo isn't throwing any errors, and I haven't really messed with the code, so this is their example. https://github.com/ggerganov/whisper.cpp
What are some alternatives?
CTranslate2 - Fast inference engine for Transformer models
faster-whisper - Faster Whisper transcription with CTranslate2
WhisperInput - Offline voice input panel & keyboard with punctuation for Android.
bark - Text-Prompted Generative Audio Model
project-2501 - Project 2501 is an open-source AI assistant, written in C++.
Whisper - High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
llama.cpp - LLM inference in C/C++
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)