flash-attention-jax VS block-recurrent-transformer-pytorch

Compare flash-attention-jax vs block-recurrent-transformer-pytorch and see how they differ.

                flash-attention-jax     block-recurrent-transformer-pytorch
Mentions        1                       1
Stars           175                     203
Growth          -                       -
Activity        2.0                     5.0
Latest commit   about 2 months ago      10 months ago
Language        Python                  Python
License         MIT License             MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

flash-attention-jax

Posts with mentions or reviews of flash-attention-jax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-14.
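For context, the sketch below illustrates the technique this project implements: exact attention computed block by block with a running (online) softmax, so the full attention score matrix is never materialized at once. It is a minimal single-head illustration under assumed shapes and block size, not flash-attention-jax's API or its fused, memory-optimal kernels.

```python
# Minimal sketch of blockwise attention with an online softmax (the core idea
# behind flash attention). Illustrative only; function name, block size and
# shapes are assumptions, not the flash-attention-jax API.

import jax
import jax.numpy as jnp

def blockwise_attention(q, k, v, block_size=128):
    """q, k, v: (seq_len, dim). Returns the same result (up to floating point)
    as softmax(q @ k.T / sqrt(dim)) @ v, without forming the full score matrix."""
    seq_len, dim = q.shape
    scale = dim ** -0.5

    out = jnp.zeros_like(q)                       # running (unnormalized) output
    row_max = jnp.full((seq_len, 1), -jnp.inf)    # running max of scores per query
    row_sum = jnp.zeros((seq_len, 1))             # running softmax denominator

    for start in range(0, seq_len, block_size):
        k_blk = k[start:start + block_size]
        v_blk = v[start:start + block_size]

        scores = (q @ k_blk.T) * scale            # (seq_len, block_size)
        blk_max = scores.max(axis=-1, keepdims=True)
        new_max = jnp.maximum(row_max, blk_max)

        # Rescale what has been accumulated so far to the new max,
        # then fold in this block's contribution.
        correction = jnp.exp(row_max - new_max)
        p = jnp.exp(scores - new_max)

        out = out * correction + p @ v_blk
        row_sum = row_sum * correction + p.sum(axis=-1, keepdims=True)
        row_max = new_max

    return out / row_sum

# Quick check against naive attention on small random inputs.
key = jax.random.PRNGKey(0)
q, k, v = (jax.random.normal(k_, (512, 64)) for k_ in jax.random.split(key, 3))
naive = jax.nn.softmax((q @ k.T) * 64 ** -0.5, axis=-1) @ v
assert jnp.allclose(blockwise_attention(q, k, v), naive, atol=1e-3)
```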

block-recurrent-transformer-pytorch

Posts with mentions or reviews of block-recurrent-transformer-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-09.
  • From Deep to Long Learning
    6 projects | news.ycombinator.com | 9 Apr 2023
    that line of research is still going. https://github.com/lucidrains/block-recurrent-transformer-py... i think it is worth continuing research on both fronts.
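For context, the sketch below shows the block-recurrent idea the project above implements: the sequence is processed in fixed-size blocks while a small set of state vectors is carried from one block to the next, letting information propagate beyond the attention window. It heavily simplifies the actual architecture (single layer, no gating, no causal masking); every module and dimension name here is an assumption, not the repository's API.

```python
# Minimal sketch of a block-recurrent attention layer. Illustrative only;
# the real block-recurrent-transformer-pytorch uses gated state updates,
# rotary embeddings, masking, etc.

import torch
import torch.nn as nn

class BlockRecurrentLayer(nn.Module):
    def __init__(self, dim=256, heads=4, num_state=32, block_size=128):
        super().__init__()
        self.block_size = block_size
        # Learned initial recurrent state, carried across blocks at runtime.
        self.state = nn.Parameter(torch.randn(num_state, dim) * 0.02)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.state_update = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        """x: (batch, seq_len, dim) with seq_len divisible by block_size."""
        b, n, d = x.shape
        state = self.state.expand(b, -1, -1)   # (batch, num_state, dim)
        outputs = []
        for blk in x.split(self.block_size, dim=1):
            # Tokens attend over the current block plus the carried state.
            context = torch.cat([state, blk], dim=1)
            attended, _ = self.self_attn(blk, context, context)
            outputs.append(self.norm(blk + attended))
            # The state attends over the block to summarize it for the next block.
            new_state, _ = self.state_update(state, blk, blk)
            state = state + new_state
        return torch.cat(outputs, dim=1)

x = torch.randn(2, 512, 256)
print(BlockRecurrentLayer()(x).shape)  # torch.Size([2, 512, 256])
```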

What are some alternatives?

When comparing flash-attention-jax and block-recurrent-transformer-pytorch you can also consider the following projects:

msn - Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)

iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.

EfficientZero - Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.

block-recurrent-transformer-pytorch

flash-attention - Fast and memory-efficient exact attention

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

RHO-Loss

heinsen_routing - Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.

CodeRL - Official code for the paper "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning" (NeurIPS 2022).

PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM

perceiver-ar

musiclm-pytorch - Implementation of MusicLM, Google's SOTA model for music generation using attention networks, in PyTorch.