Heinsen_routing Alternatives
Similar projects and alternatives to heinsen_routing
- RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
- recurrent-memory-transformer
[NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture.
- memorizing-transformers-pytorch
Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate nearest neighbors, in Pytorch
- HMT-pytorch
Official Implementation of "HMT: Hierarchical Memory Transformer for Long Context Language Processing"
heinsen_routing discussion
heinsen_routing reviews and mentions
- HMT: Hierarchical Memory Transformer for Long Context Language Processing
Code: https://github.com/OswaldHe/HMT-pytorch
This looks really interesting. I've added the paper to my reading list and look forward to playing with the code. I'm curious to see what kinds of improvements we can get by augmenting Transformers and other generative language/sequence models with this and other mechanisms implementing hierarchical memory.[a]
We sure live in interesting times!
---
[a] In the past, I experimented a little with transformers that had access to external memory using https://github.com/lucidrains/memorizing-transformers-pytorc... and also using routed queries with https://github.com/glassroom/heinsen_routing . Both approaches seemed to work, but I never attempted to build any kind of hierarchy with those approaches.
- What can LLMs never do?
At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups https://github.com/lucidrains/memorizing-transformers-pytorc... or via routed queries with https://github.com/glassroom/heinsen_routing . Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
- A Surprisingly Effective Way to Estimate Token Importance in LLM Prompts
Simple and, in hindsight, obvious:
1. Run the text through a document embedding model and save the embedding.
2. Remove one token at a time, and compute the cosine similarity of the new document embedding to the original one.
3. Compute importance as a function of the change in cosine similarity.
Nice.
Also check out https://github.com/glassroom/heinsen_routing . It takes n embeddings and outputs m embeddings, and also gives you an n×m matrix with credit assignments, without having to remove tokens one by one, which can be prohibitively slow for long texts.
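For anyone who wants to try the leave-one-out procedure described above, here is a minimal sketch. It assumes a sentence-transformers document embedder (the model name is just an example, not prescribed by the post) and approximates "tokens" with whitespace-separated words; a real implementation would use the embedder's own tokenizer and batch the ablated variants.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def token_importance(text, model_name="all-MiniLM-L6-v2"):
    """Leave-one-out importance: drop in cosine similarity when a token is removed."""
    model = SentenceTransformer(model_name)  # example embedder, not prescribed by the post
    tokens = text.split()  # crude stand-in for the embedder's own tokenizer

    def embed(s):
        v = model.encode([s])[0]
        return v / np.linalg.norm(v)

    original = embed(text)
    scores = []
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])  # step 2: remove one token
        sim = float(np.dot(original, embed(ablated)))    # cosine similarity to the original
        scores.append((tok, 1.0 - sim))                  # step 3: importance = similarity drop
    return scores

print(token_importance("The quick brown fox jumps over the lazy dog"))
```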
- Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions by some form of attention, which as we all know is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find something better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
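As a concrete reference point for the KNN-memory approach mentioned above, here is a rough sketch (my assumptions, not code from either repo) of the basic mechanism in memorizing-transformers-style models: cache (key, value) pairs from past segments in a nearest-neighbor index and let each new query attend only over its top-k retrieved memories. It assumes the faiss library; the dimensions and k are arbitrary examples.

```python
import numpy as np
import faiss

d, n_memories, k = 64, 10_000, 32

keys = np.random.randn(n_memories, d).astype("float32")    # cached keys from past segments
values = np.random.randn(n_memories, d).astype("float32")  # cached values from past segments

index = faiss.IndexFlatIP(d)  # exact inner-product search; swap in an ANN index at scale
index.add(keys)

queries = np.random.randn(8, d).astype("float32")  # queries from the current segment
scores, idx = index.search(queries, k)             # (8, k) scores and memory indices

# Attend over the retrieved memories only (softmax over k, not over all n_memories).
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
retrieved = values[idx]                                  # (8, k, d)
attended = (weights[..., None] * retrieved).sum(axis=1)  # (8, d)
```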
- Scaling Transformer to 1M tokens and beyond with RMT
Here's a list of tools for scaling up transformer context that have github repos:
* FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention
* Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing
* RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM
* RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
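To make the motivation behind this list concrete, here is a quick back-of-envelope calculation (my addition, not from the comment) of why naively materializing the n×n attention score matrix stops being an option long before a million tokens; FlashAttention avoids materializing it, and the other entries sidestep n² interactions altogether.

```python
# Memory for one fp16 n x n attention score matrix, per head per layer.
for n in (16_384, 131_072, 1_048_576):
    gib = n * n * 2 / 2**30  # 2 bytes per fp16 score
    print(f"n = {n:>9,}: {gib:,.1f} GiB per head per layer")
# n =    16,384: 0.5 GiB per head per layer
# n =   131,072: 32.0 GiB per head per layer
# n = 1,048,576: 2,048.0 GiB per head per layer
```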
- From Deep to Long Learning
I imagine you could, maybe by using something like this https://github.com/glassroom/heinsen_routing#sequence-to-vec... ... but I doubt you'd be able to match the training efficiency of triangular masking in auto-regressive transformers. With routing, you'd have to train the model one time-step at a time, instead of all time-steps in parallel like a masked auto-regressive transformer.
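A tiny illustration (my addition) of the triangular masking referred to above: a causal mask blocks attention to future positions, which is what lets a masked auto-regressive transformer score all time-steps of a sequence in a single forward pass during training.

```python
import torch
import torch.nn.functional as F

n, d = 6, 16
q, k, v = (torch.randn(n, d) for _ in range(3))

causal_mask = torch.tril(torch.ones(n, n, dtype=torch.bool))  # position t sees only positions <= t
scores = (q @ k.T) / d**0.5
scores = scores.masked_fill(~causal_mask, float("-inf"))      # mask out the future
out = F.softmax(scores, dim=-1) @ v                           # all n time-steps in parallel
```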
- New algorithm can route sequences with 1M+ token embeddings in one GPU
Stats
glassroom/heinsen_routing is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of heinsen_routing is Python.
Popular Comparisons
- heinsen_routing VS RWKV-LM
- heinsen_routing VS safari
- heinsen_routing VS iris
- heinsen_routing VS block-recurrent-transformer-pytorch
- heinsen_routing VS flash-attention
- heinsen_routing VS recurrent-memory-transformer
- heinsen_routing VS TruthfulQA
- heinsen_routing VS memorizing-transformers-pytorch