recurrent-memory-transformer

[NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. (by booydar)
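
For context, here is a minimal sketch of the core idea (my own simplification in PyTorch, not the repository's actual implementation): a small set of learned memory tokens is concatenated to each input segment, and the transformer's outputs at those positions are carried over as the memory for the next segment, letting information propagate across segments far beyond a single attention window.

```python
import torch
import torch.nn as nn

class RMTSketch(nn.Module):
    """Toy segment-level recurrence with memory tokens (illustrative only)."""

    def __init__(self, d_model=256, n_heads=4, n_layers=2, num_mem_tokens=8):
        super().__init__()
        self.num_mem_tokens = num_mem_tokens
        # Learned initial memory, shared across sequences.
        self.init_memory = nn.Parameter(0.02 * torch.randn(num_mem_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, segments):
        # segments: list of already-embedded tensors, each [batch, seg_len, d_model]
        batch = segments[0].size(0)
        memory = self.init_memory.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        for seg in segments:
            x = torch.cat([memory, seg], dim=1)         # prepend memory tokens
            y = self.encoder(x)
            memory = y[:, :self.num_mem_tokens]         # updated memory for next segment
            outputs.append(y[:, self.num_mem_tokens:])  # per-token outputs for this segment
        return torch.cat(outputs, dim=1), memory

# Usage: a long sequence split into fixed-size segments.
model = RMTSketch()
segments = [torch.randn(2, 128, 256) for _ in range(16)]  # 16 x 128 = 2048 "tokens"
out, final_memory = model(segments)
print(out.shape, final_memory.shape)  # torch.Size([2, 2048, 256]) torch.Size([2, 8, 256])
```

The actual RMT setup is more involved (e.g., separate read/write memory positions and backpropagation through a limited number of segments), but the segment-level recurrence above is the gist.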

Recurrent-memory-transformer Alternatives

Similar projects and alternatives to recurrent-memory-transformer

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number indicates a more frequently suggested or more similar recurrent-memory-transformer alternative.

recurrent-memory-transformer reviews and mentions

Posts with mentions or reviews of recurrent-memory-transformer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-25.
  • Scaling Transformer to 1M tokens and beyond with RMT
    1 project | /r/singularity | 25 Apr 2023
I found the GitHub link: https://github.com/booydar/t5-experiments/tree/scaling-report
    6 projects | news.ycombinator.com | 23 Apr 2023
Here's a list of tools for scaling up transformer context that have GitHub repos:

* FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens (a usage sketch follows this list). Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull up more than a million tokens as context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... . Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try. Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
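
    To make the first item concrete, here is a hedged usage sketch. It uses PyTorch's built-in scaled_dot_product_attention, which can dispatch to a FlashAttention kernel on supported GPUs; the HazyResearch repo also exposes its own flash_attn_func API, not shown here. The tensor sizes are arbitrary examples.

```python
import torch
import torch.nn.functional as F

# Exact n^2 attention computed with a fused, memory-efficient kernel when
# available (FlashAttention on recent NVIDIA GPUs); falls back to a plain
# implementation on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

batch, heads, seq_len, head_dim = 2, 8, 4096, 64
q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# The full seq_len x seq_len attention matrix is never materialized by the
# fused kernel, which is what makes longer contexts feasible.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 4096, 64])
```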

    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
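
    As a rough illustration of where the O(n log n) comes from (my own toy example, not the Stanford group's unreleased code): models in that line of work typically replace attention with very long convolutions, which can be computed with FFTs in O(n log n) time.

```python
import torch

def fft_long_conv(x, kernel):
    """Causal length-n convolution via FFT: O(n log n) instead of O(n^2)."""
    # x: [batch, channels, seq_len], kernel: [channels, seq_len]
    n = x.size(-1)
    fft_len = 2 * n  # zero-pad so the circular convolution doesn't wrap around
    x_f = torch.fft.rfft(x, n=fft_len)
    k_f = torch.fft.rfft(kernel, n=fft_len)
    y = torch.fft.irfft(x_f * k_f, n=fft_len)
    return y[..., :n]

x = torch.randn(1, 8, 65536)   # a 65k-step sequence
k = torch.randn(8, 65536)      # one (implicitly parameterized) long filter per channel
print(fft_long_conv(x, k).shape)  # torch.Size([1, 8, 65536])
```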

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.

    1 project | /r/mlscaling | 23 Apr 2023
    Checking the actual results: https://github.com/booydar/t5-experiments/blob/a6c478754530cdee2a67974e44a0c1b6dbad92c4/results/babilong.csv, I think it's cute, but not a real breakthrough.
  • Code for Scaling Transformer to 1M tokens and beyond with RMT (arxiv.org)
    4 projects | news.ycombinator.com | 24 Apr 2023
    As all...

    https://github.com/booydar/t5-experiments/tree/scaling-repor...

Stats

Basic recurrent-memory-transformer repo stats
Mentions: 7
Stars: 738
Activity: 6.6
Last commit: 18 days ago
