TruthfulQA vs recurrent-memory-transformer

Compare TruthfulQA vs recurrent-memory-transformer and see what their differences are.

TruthfulQA

TruthfulQA: Measuring How Models Mimic Human Falsehoods (by sylinrl)

recurrent-memory-transformer

[NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. (by booydar)
                TruthfulQA            recurrent-memory-transformer
Mentions        4                     7
Stars           508                   742
Growth          -                     -
Activity        2.8                   5.9
Last commit     6 months ago          10 days ago
Language        Jupyter Notebook      Jupyter Notebook
License         Apache License 2.0    -
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

TruthfulQA

Posts with mentions or reviews of TruthfulQA. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-04.

recurrent-memory-transformer

Posts with mentions or reviews of recurrent-memory-transformer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-24.
  • Scaling Transformer to 1M tokens and beyond with RMT
    1 project | /r/singularity | 25 Apr 2023
    I found the GitHub link: https://github.com/booydar/t5-experiments/tree/scaling-report
    6 projects | news.ycombinator.com | 23 Apr 2023
    Here's a list of tools for scaling up transformer context that have GitHub repos:

    * FlashAttention: In my experience, the current best solution for n² attention, but it's very hard to scale it beyond the low tens of thousands of tokens (a minimal usage sketch follows this list). Code: https://github.com/HazyResearch/flash-attention

    * Heinsen Routing: In my experience, the current best solution for n×m attention. I've used it to pull in more than a million tokens of context. It's not a substitute for n² attention. Code: https://github.com/glassroom/heinsen_routing

    * RWKV: A sort-of-recurrent model which claims to have performance comparable to n² attention in transformers. In my limited experience, it doesn't. Others agree: https://twitter.com/arankomatsuzaki/status/16390003799784038... Code: https://github.com/BlinkDL/RWKV-LM

    * RMT (this method): I'm skeptical that the recurrent connections will work as well as n² attention in practice, but I'm going to give it a try (see the memory-token sketch after this list). Code: https://github.com/booydar/t5-experiments/tree/scaling-repor...
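
    A minimal usage sketch for the FlashAttention item above (my sketch, not from either repo, assuming the flash-attn v2 Python package and its flash_attn_func entry point):

    ```python
    # Hedged sketch: assumes the flash-attn v2 package (pip install flash-attn)
    # and its flash_attn_func entry point. Inputs must be fp16/bf16 CUDA tensors
    # shaped (batch, seqlen, nheads, headdim).
    import torch
    from flash_attn import flash_attn_func

    batch, seqlen, nheads, headdim = 2, 8192, 12, 64
    q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    # Exact (not approximate) n^2 attention, computed tile-by-tile in SRAM so
    # the full seqlen x seqlen attention matrix is never materialized.
    out = flash_attn_func(q, k, v, causal=True)  # -> (batch, seqlen, nheads, headdim)
    ```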

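    And a toy sketch of the RMT recurrence itself, as I read the paper (an illustration, not the authors' code): memory tokens are prepended to each segment, the joint sequence goes through an ordinary Transformer, and the updated memory states are carried into the next segment. The paper also appends a second copy of the memory for decoder models; this sketch keeps only the prepended copy.

    ```python
    # Toy RMT-style recurrence (an illustration, not the authors' implementation).
    import torch
    import torch.nn as nn

    class ToyRMT(nn.Module):
        def __init__(self, d_model=256, n_mem=8, n_heads=4, n_layers=4):
            super().__init__()
            self.n_mem = n_mem
            self.mem_init = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)

        def forward(self, segments):
            """segments: list of (batch, seg_len, d_model) chunks of one long input."""
            mem = self.mem_init.expand(segments[0].size(0), -1, -1)
            outs = []
            for seg in segments:
                x = torch.cat([mem, seg], dim=1)   # [memory tokens | segment tokens]
                y = self.encoder(x)
                mem = y[:, :self.n_mem]            # updated memory feeds the next segment
                outs.append(y[:, self.n_mem:])
            return torch.cat(outs, dim=1), mem
    ```
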
    In addition, there's a group at Stanford working on state-space models that looks promising to me. The idea is to approximate n² attention dynamically using only O(n log n) compute. There's no code available, but here's a blog post about it: https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...
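
    For context, the core primitive in that line of work is a convolution with a filter as long as the sequence, evaluated with FFTs in O(n log n) time. A self-contained sketch of that primitive (my illustration, not the Stanford group's code):

    ```python
    # Illustrative O(n log n) long convolution via FFT, the primitive behind the
    # state-space models mentioned above (a sketch, not the Stanford group's code).
    import torch

    def fft_long_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
        """Causal convolution of a signal with a filter as long as the sequence.

        u: (batch, seqlen, channels) input; k: (seqlen, channels) learned filter.
        Zero-padding to 2*seqlen turns the FFT's circular convolution into a
        linear (and hence causal) one.
        """
        seqlen = u.size(1)
        n = 2 * seqlen
        u_f = torch.fft.rfft(u, n=n, dim=1)       # (batch, n//2+1, channels)
        k_f = torch.fft.rfft(k, n=n, dim=0)       # (n//2+1, channels)
        y = torch.fft.irfft(u_f * k_f.unsqueeze(0), n=n, dim=1)
        return y[:, :seqlen]                      # y[t] = sum_{s<=t} k[t-s] * u[s]
    ```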

    If anyone here has other suggestions for working with long sequences (hundreds of thousands to millions of tokens), I'd love to learn about them.

    1 project | /r/mlscaling | 23 Apr 2023
    Checking the actual results: https://github.com/booydar/t5-experiments/blob/a6c478754530cdee2a67974e44a0c1b6dbad92c4/results/babilong.csv, I think it's cute, but not a real breakthrough.
  • Code for Scaling Transformer to 1M tokens and beyond with RMT (arxiv.org)
    4 projects | news.ycombinator.com | 24 Apr 2023
    As all...

    https://github.com/booydar/t5-experiments/tree/scaling-repor...

What are some alternatives?

When comparing TruthfulQA and recurrent-memory-transformer you can also consider the following projects:

safari - Convolutions for Sequence Modeling

auto-evaluator

flash-attention - Fast and memory-efficient exact attention

heinsen_routing - Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.

JARVIS - JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf