block-recurrent-transformer-pytorch VS flash-attention-jax

Compare block-recurrent-transformer-pytorch vs flash-attention-jax and see how they differ.

                 block-recurrent-transformer-pytorch    flash-attention-jax
Mentions         1                                      1
Stars            204                                    175
Growth           -                                      -
Activity         5.0                                    2.0
Last commit      10 months ago                          2 months ago
Language         Python                                 Python
License          MIT License                            MIT License
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars the project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively the project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

block-recurrent-transformer-pytorch

Posts with mentions or reviews of block-recurrent-transformer-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-09.
  • From Deep to Long Learning
    6 projects | news.ycombinator.com | 9 Apr 2023
    that line of research is still going. https://github.com/lucidrains/block-recurrent-transformer-py... i think it is worth continuing research on both fronts.
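
For context, block-recurrent-transformer-pytorch implements a transformer that processes a long sequence one block at a time while carrying a set of learned state vectors between blocks, giving it an LSTM-like recurrence at the block level. Below is a minimal PyTorch sketch of that idea, assuming nothing about the repository's actual API: the class name, parameters, and the simplified cross-attention state update (the paper uses LSTM-style gating, omitted here) are all illustrative.

```python
import torch
import torch.nn as nn

class BlockRecurrentSketch(nn.Module):
    """Toy block-recurrent layer: tokens in each block attend to the
    block plus a small carried state; the state then attends to the
    block to update itself for the next block. Causal masking and
    gating are omitted for brevity."""

    def __init__(self, dim=64, heads=4, block_width=128, num_state_vectors=16):
        super().__init__()
        self.block_width = block_width
        self.init_state = nn.Parameter(torch.randn(num_state_vectors, dim))
        self.token_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.state_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                          # x: (batch, seq_len, dim)
        state = self.init_state.expand(x.shape[0], -1, -1)
        outs = []
        for block in x.split(self.block_width, dim=1):
            # tokens attend over the current block and the carried state
            kv = torch.cat((state, block), dim=1)
            out, _ = self.token_attn(block, kv, kv)
            outs.append(out)
            # state vectors attend over the block to become the next state
            state, _ = self.state_attn(state, block, block)
        return torch.cat(outs, dim=1)

model = BlockRecurrentSketch()
y = model(torch.randn(2, 512, 64))                 # -> (2, 512, 64)
```

Because the state has a fixed size, memory per step stays constant no matter how long the sequence grows; only the state vectors carry information across block boundaries.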

flash-attention-jax

Posts with mentions or reviews of flash-attention-jax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-14.
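
flash-attention-jax brings the FlashAttention algorithm to JAX: attention is computed over blocks of keys and values while a running row-wise maximum and softmax normalizer are maintained, so the full quadratic score matrix is never materialized. The single-head sketch below illustrates that blockwise computation in plain jax.numpy; the function name and block size are assumptions for illustration, and the library's actual masking, multi-head handling, and custom backward pass are omitted.

```python
import jax
import jax.numpy as jnp

def flash_attention_sketch(q, k, v, block_size=128):
    """Blockwise softmax(q k^T / sqrt(d)) v with a running max and
    normalizer, numerically equal to the naive computation. Assumes
    seq_len is a multiple of block_size; single head, no masking."""
    seq_len, dim = q.shape
    scale = dim ** -0.5

    def scan_kv_block(carry, kv_block):
        out_acc, row_max, row_sum = carry
        k_blk, v_blk = kv_block
        scores = (q @ k_blk.T) * scale                       # (seq, block)
        new_max = jnp.maximum(row_max, scores.max(axis=-1))
        corr = jnp.exp(row_max - new_max)                    # rescale old stats
        p = jnp.exp(scores - new_max[:, None])
        row_sum = row_sum * corr + p.sum(axis=-1)
        out_acc = out_acc * corr[:, None] + p @ v_blk
        return (out_acc, new_max, row_sum), None

    kv = (k.reshape(-1, block_size, dim), v.reshape(-1, block_size, dim))
    init = (jnp.zeros_like(q),
            jnp.full((seq_len,), -jnp.inf),
            jnp.zeros((seq_len,)))
    (out, _, denom), _ = jax.lax.scan(scan_kv_block, init, kv)
    return out / denom[:, None]

qk, kk, vk = jax.random.split(jax.random.PRNGKey(0), 3)
q = jax.random.normal(qk, (1024, 64))
k = jax.random.normal(kk, (1024, 64))
v = jax.random.normal(vk, (1024, 64))

# blockwise result matches the naive quadratic attention
reference = jax.nn.softmax((q @ k.T) * 64 ** -0.5, axis=-1) @ v
assert jnp.allclose(flash_attention_sketch(q, k, v), reference, atol=1e-4)
```

The key trick is the `corr` factor: whenever a new block raises the running maximum, previously accumulated sums and outputs are rescaled so the final division by `denom` yields the exact softmax.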

What are some alternatives?

When comparing block-recurrent-transformer-pytorch and flash-attention-jax you can also consider the following projects:

iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.

msn - Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)

EfficientZero - Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.

RWKV-LM - RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (training parallelizes across the sequence). It combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings. A toy sketch of this dual recurrent/parallel formulation follows this list.

flash-attention - Fast and memory-efficient exact attention
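
To make the RWKV-LM claim above concrete, here is a toy demonstration of why a linear recurrence can be trained like a GPT: the same causal computation can be written either as a per-step recurrence with a constant-size state (RNN-style inference) or as a single masked matrix product over the whole sequence (parallelizable training). This is the simplest such dual form, not RWKV's actual time-mixing (WKV) operator.

```python
import torch

def parallel_mode(q, k, v):
    # Training view: one causally masked matmul over the whole sequence.
    seq_len = q.shape[0]
    mask = torch.tril(torch.ones(seq_len, seq_len))
    return (mask * (q @ k.T)) @ v

def recurrent_mode(q, k, v):
    # Inference view: constant-size state, updated one token at a time.
    state = torch.zeros(k.shape[1], v.shape[1])
    outs = []
    for t in range(q.shape[0]):
        state = state + torch.outer(k[t], v[t])   # accumulate k^T v
        outs.append(q[t] @ state)
    return torch.stack(outs)

q, k, v = torch.randn(3, 16, 8).unbind(0)

# both views compute sum_{i <= t} (q_t . k_i) v_i and agree exactly
assert torch.allclose(parallel_mode(q, k, v), recurrent_mode(q, k, v), atol=1e-5)
```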