Streaming-llm Alternatives
Similar projects and alternatives to streaming-llm
-
willow
Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
-
distil-whisper
Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.
-
openWakeWord
An open-source audio wake word (or phrase) detection framework with a focus on performance and simplicity.
streaming-llm reviews and mentions
-
[D] In decoder models, if later tokens attend to early tokens but early tokens don't attend to later tokens, what stops the influence of the early tokens from growing with each layer?
Just quickly glanced through the question, but you might be interested in the attention sink, for example, where they use the fact that earlier tokens are overly attended to in general. Paper: https://arxiv.org/abs/2309.17453
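To make the "earlier tokens are overly attended to" observation concrete, here is a minimal sketch (not from the paper's code) that loads a small Hugging Face causal LM and measures how much attention mass each layer places on the very first token; the model choice (gpt2) and the prompt are arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is a tuple with one (batch, heads, query, key) tensor per layer.
for i, attn in enumerate(out.attentions):
    # Mean attention that queries (excluding the trivial first row, which can
    # only attend to itself) place on key position 0.
    sink_mass = attn[0, :, 1:, 0].mean().item()
    print(f"layer {i:2d}: mean attention on token 0 = {sink_mass:.3f}")
```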
-
[D] Why are decoder only models used for autoregressive generation instead of encoder-only models? What value is the causal mask if the new token doesn't exist yet?
That is a great question. I wish I had a mathematical explanation for it, but I can only offer some intuitive "yeah, but then again" reasoning. FWIW, there was a paper recently that indeed showed that the first few tokens of any sequence, starting with the special '[START]' token, do hold special information (they call it the Attention Sink) compared to all other tokens. Here is a link to that paper: https://arxiv.org/abs/2309.17453
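For readers unfamiliar with the causal mask the question refers to, here is a from-scratch sketch (not tied to any particular model) of how decoder self-attention masks out future positions so that token i can only attend to tokens 0..i.

```python
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)           # raw query-key dot products
mask = torch.tril(torch.ones(seq_len, seq_len))  # 1 where attending is allowed
scores = scores.masked_fill(mask == 0, float("-inf"))
weights = torch.softmax(scores, dim=-1)

# Row i has non-zero weight only on columns 0..i: during training this lets
# every position be predicted in parallel without any token "seeing" its future.
print(weights)
```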
-
Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
Oh yes, that's absolutely true - faster is better for everyone. It's just that this particular breakpoint would put real-time transcription on a $17 device with an amazing support ecosystem. It's wild.
That being said, even with this distillation there's still the issue that Whisper isn't really designed for streaming. It's fairly simplistic and always deals with 30-second windows. I was expecting there to have been some sort of useful transform you could do to the model to avoid quite so much reprocessing per frame, but other than https://github.com/mit-han-lab/streaming-llm (which I'm not even sure directly helps) I haven't noticed anything out there.
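To illustrate the reprocessing the comment describes, here is a rough sketch of naive "streaming" with the openai-whisper package: every new audio chunk forces re-transcription of the entire trailing 30-second window. The buffering scheme and the on_new_audio function are illustrative assumptions, not part of any real pipeline.

```python
import numpy as np
import whisper  # openai-whisper

model = whisper.load_model("base")
SAMPLE_RATE = 16_000
WINDOW = 30 * SAMPLE_RATE  # Whisper's fixed 30-second receptive field

buffer = np.zeros(0, dtype=np.float32)

def on_new_audio(chunk: np.ndarray) -> str:
    """Append freshly captured audio and re-transcribe the trailing window."""
    global buffer
    buffer = np.concatenate([buffer, chunk])[-WINDOW:]
    # The whole window is re-encoded on every call; nothing is reused from
    # the previous pass, which is the redundancy the comment points at.
    return model.transcribe(buffer, fp16=False)["text"]
```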
-
Nvidia Trains LLM on Chip Design
See https://github.com/mit-han-lab/streaming-llm and others. There's good reason to believe that attention networks learn how to update their own weights based on their input (I forget the paper). The attention mechanism can act like a delta that updates weights as the data propagates through the layers. The issue is getting the token embeddings to be more than just the 50k or so that we use for English so you can explore the full space, which is what the attention sink mechanism is trying to do.
-
LLMs for infinite input lengths are here!
📚 Research Paper: https://arxiv.org/pdf/2309.17453.pdf 💻 Code: https://github.com/mit-han-lab/streaming-llm
- GitHub - mit-han-lab/streaming-llm: Efficient Streaming Language Models with Attention Sinks
-
Streaming LLM – No limit on context length for your favourite LLM
The authors just uploaded a FAQ section, which may clarify some of the confusion: https://github.com/mit-han-lab/streaming-llm/blob/main/READM...
-
StreamingLLM: a simple and efficient framework that enables LLMs to handle unlimited texts without fine-tuning
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided in the link.
- StreamingLLM: Efficient streaming technique enables infinite sequence lengths
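As a rough illustration of the cache policy the abstract describes, here is a simplified sketch that keeps the KV entries of a few initial "attention sink" tokens plus a rolling window of the most recent tokens, evicting everything in between. It mirrors the idea only; it is not mit-han-lab's actual implementation, and the default sizes are illustrative.

```python
import torch

def evict_kv(keys: torch.Tensor, values: torch.Tensor,
             n_sink: int = 4, window: int = 1020):
    """keys/values have shape (batch, heads, seq_len, head_dim)."""
    seq_len = keys.size(2)
    if seq_len <= n_sink + window:
        return keys, values  # cache still fits; nothing to evict
    # Keep the initial "attention sink" tokens plus the most recent window,
    # dropping everything in between.
    keys = torch.cat([keys[:, :, :n_sink], keys[:, :, -window:]], dim=2)
    values = torch.cat([values[:, :, :n_sink], values[:, :, -window:]], dim=2)
    return keys, values
```

Per the paper, positions are also assigned relative to the cache rather than to the original text, so the model never sees position indices beyond the window it was trained on.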
Stats
mit-han-lab/streaming-llm is an open source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of streaming-llm is Python.