Recurrent-memory-transformer Alternatives
Similar projects and alternatives to recurrent-memory-transformer
-
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
-
heinsen_routing
Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
recurrent-memory-transformer reviews and mentions
-
Scaling Transformer to 1M tokens and beyond with RMT
I found the GitHub link: https://github.com/booydar/t5-experiments/tree/scaling-report
Here's a list of tools for scaling up transformer context that have GitHub repos:
* FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens. Code: https://github.com/HazyResearch/flash-attention
* Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing
* RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM
* RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
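For readers unfamiliar with the method: RMT splits a long input into fixed-size segments and carries a small set of learned memory tokens from one segment to the next, so attention stays local to each segment. Below is a minimal sketch of that loop; the class name, shapes, and the use of a plain nn.TransformerEncoder are illustrative assumptions on my part, not the repo's actual code.

```python
import torch
import torch.nn as nn

class TinyRMT(nn.Module):
    """Illustrative segment-recurrent wrapper around a vanilla Transformer encoder.

    Not the booydar/t5-experiments implementation; just the general idea:
    prepend learned memory tokens to each segment and pass the updated
    memory tokens on to the next segment.
    """

    def __init__(self, d_model=256, n_mem=16, n_layers=4, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.memory = nn.Parameter(torch.randn(1, n_mem, d_model) * 0.02)
        self.n_mem = n_mem

    def forward(self, segments):
        """segments: list of (batch, seg_len, d_model) tensors."""
        batch = segments[0].size(0)
        mem = self.memory.expand(batch, -1, -1)   # initial memory tokens
        outputs = []
        for seg in segments:
            # Attend over [memory | segment tokens] within one segment.
            x = torch.cat([mem, seg], dim=1)
            x = self.encoder(x)
            # Updated memory tokens are carried to the next segment.
            mem = x[:, :self.n_mem]
            outputs.append(x[:, self.n_mem:])
        return torch.cat(outputs, dim=1), mem
```

Since attention is computed only over one segment plus the memory tokens, cost grows linearly with the number of segments instead of quadratically with the total length.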
In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
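To make the O(n log n) claim concrete: a linear state-space layer is a recurrence that unrolls into a long convolution, and long convolutions can be computed with FFTs. A standard-form summary (my paraphrase of how these models are usually written up, not text from the linked post):

```latex
x_t = \bar{A}\,x_{t-1} + \bar{B}\,u_t, \qquad y_t = C\,x_t
\quad\Longrightarrow\quad
y = \bar{K} * u, \qquad
\bar{K} = \bigl(C\bar{B},\; C\bar{A}\bar{B},\; C\bar{A}^{2}\bar{B},\; \dots\bigr)
```

The convolution of \bar{K} with a length-n input can be evaluated via FFT in O(n log n), versus the O(n²) cost of full attention.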
If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.
Checking the actual results: https://github.com/booydar/t5-experiments/blob/a6c478754530cdee2a67974e44a0c1b6dbad92c4/results/babilong.csv, I think it's cute, but not a real breakthrough.
-
Code for Scaling Transformer to 1M tokens and beyond with RMT (arxiv.org)
As all...
https://github.com/booydar/t5-experiments/tree/scaling-repor...
Stats
The primary programming language of recurrent-memory-transformer is Jupyter Notebook.