flash-attention-jax
block-recurrent-transformer-pytorch
| | flash-attention-jax | block-recurrent-transformer-pytorch |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 175 | 203 |
| Growth | - | - |
| Activity | 2.0 | 5.0 |
| Last commit | about 2 months ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
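The page does not show how that activity number is actually computed; purely as a hedged illustration of "recent commits have higher weight," a recency-weighted commit score could look like the sketch below. The exponential decay and the 30-day half-life are assumptions, not the site's real formula.

```python
# Illustrative only: a recency-weighted "activity" score in which recent
# commits contribute more than older ones. The decay shape and the
# 30-day half-life are assumptions, not the tracking site's actual method.
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Sum of per-commit weights that halve every `half_life_days`."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: a recent commit dominates the score, an old one barely counts.
commits = [
    datetime(2023, 5, 1, tzinfo=timezone.utc),
    datetime(2023, 4, 1, tzinfo=timezone.utc),
    datetime(2022, 11, 1, tzinfo=timezone.utc),
]
print(round(activity_score(commits), 3))
```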
flash-attention-jax
[D] Most important AI papers this year so far in my opinion + Proto AGI speculation at the end
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness Paper: https://arxiv.org/abs/2205.14135 Github: https://github.com/HazyResearch/flash-attention and https://github.com/lucidrains/flash-attention-jax
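FlashAttention computes exact softmax attention without ever materializing the full attention matrix: keys and values are processed in tiles while running softmax statistics are kept per query row. The NumPy sketch below is offered only as an illustration of that online-softmax tiling; it is not the kernels or API of either linked repository, and the block size and shapes are arbitrary.

```python
# Minimal NumPy sketch of tiled attention with an online softmax, the core
# idea behind FlashAttention. Illustrative only; not the actual kernels or
# APIs of flash-attention or flash-attention-jax.
import numpy as np

def tiled_attention(q, k, v, block_size=64):
    """Exact softmax attention computed over key/value blocks, keeping
    running max/sum statistics per query row so the (seq x seq) attention
    matrix is never built."""
    seq_len, dim = q.shape
    scale = 1.0 / np.sqrt(dim)
    out = np.zeros_like(v)
    row_max = np.full(seq_len, -np.inf)   # running max of attention logits
    row_sum = np.zeros(seq_len)           # running softmax denominator

    for start in range(0, seq_len, block_size):
        kb = k[start:start + block_size]
        vb = v[start:start + block_size]
        scores = (q @ kb.T) * scale                 # (seq, block)
        new_max = np.maximum(row_max, scores.max(axis=-1))
        correction = np.exp(row_max - new_max)      # rescale previous stats
        p = np.exp(scores - new_max[:, None])
        out = out * correction[:, None] + p @ vb
        row_sum = row_sum * correction + p.sum(axis=-1)
        row_max = new_max

    return out / row_sum[:, None]

# Sanity check against the naive full-matrix computation.
rng = np.random.default_rng(0)
q = rng.standard_normal((128, 32))
k = rng.standard_normal((128, 32))
v = rng.standard_normal((128, 32))
s = (q @ k.T) / np.sqrt(32)
naive = np.exp(s - s.max(-1, keepdims=True))
naive = (naive / naive.sum(-1, keepdims=True)) @ v
assert np.allclose(tiled_attention(q, k, v), naive, atol=1e-6)
```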
block-recurrent-transformer-pytorch
From Deep to Long Learning
That line of research is still going. https://github.com/lucidrains/block-recurrent-transformer-py... I think it is worth continuing research on both fronts.
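The block-recurrent idea referenced here is to process a long sequence in fixed-size blocks while carrying a small recurrent state between blocks, so per-block attention cost stays bounded regardless of total length. The NumPy sketch below shows only that control flow; it is not the repository's API, and the single-head attention and fixed gate value are assumptions for illustration.

```python
# Simplified sketch of a block-recurrent pass: each block attends over
# itself plus a small recurrent state, and the state is then updated from
# the block. Illustrative only; the real model's layers and gating differ.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Plain scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def block_recurrent_pass(tokens, state, block_size=64, gate=0.9):
    """tokens: (seq, dim) embeddings; state: (num_state, dim) carried
    across blocks. Returns per-token outputs and the final state."""
    outputs = []
    for start in range(0, tokens.shape[0], block_size):
        block = tokens[start:start + block_size]
        # Tokens attend over the current block plus the recurrent state.
        kv = np.concatenate([block, state], axis=0)
        outputs.append(attend(block, kv, kv))
        # The state attends over the block and is updated with a fixed gate
        # (the real model learns this gate; 0.9 is an assumption).
        state = gate * state + (1.0 - gate) * attend(state, block, block)
    return np.concatenate(outputs, axis=0), state

rng = np.random.default_rng(0)
tokens = rng.standard_normal((256, 32))
state = rng.standard_normal((8, 32))
out, new_state = block_recurrent_pass(tokens, state)
print(out.shape, new_state.shape)  # (256, 32) (8, 32)
```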
What are some alternatives?
msn - Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)
iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
EfficientZero - Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.
block-recurrent-transformer-pytorch
flash-attention - Fast and memory-efficient exact attention
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. A simplified sketch of its recurrence appears after this list.
RHO-Loss
heinsen_routing - Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
CodeRL - Official code for the paper "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning" (NeurIPS 2022).
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
perceiver-ar
musiclm-pytorch - Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch
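On the RWKV entry above: at inference time its time mixing can be evaluated as a per-token recurrence over exponentially decayed key/value statistics, which is why it runs like an RNN with constant memory per step. The sketch below is a simplified version of that recurrence, not the RWKV-LM code; the parameterization is illustrative and the numerical-stability rescaling used in the real implementation is omitted.

```python
# Simplified sketch of an RWKV-style time-mixing recurrence: a decayed,
# exponentially weighted average of past values evaluated token by token
# with O(1) state. Illustrative only; the real RWKV adds stability
# rescaling, learned gating, and channel mixing.
import numpy as np

def rwkv_time_mix(k, v, decay, bonus):
    """k, v: (seq, dim) keys/values; decay (positive rate) and bonus:
    (dim,) parameters. Returns (seq, dim) outputs."""
    seq_len, dim = k.shape
    w = np.exp(-decay)    # per-channel decay factor in (0, 1]
    num = np.zeros(dim)   # running sum of exp(k_i) * v_i
    den = np.zeros(dim)   # running sum of exp(k_i)
    out = np.empty_like(v)
    for t in range(seq_len):
        ek = np.exp(k[t])
        # The current token gets an extra "bonus" weight before mixing.
        out[t] = (num + np.exp(bonus) * ek * v[t]) / (den + np.exp(bonus) * ek)
        # Decay the running state, then absorb the current token.
        num = w * num + ek * v[t]
        den = w * den + ek
    return out

rng = np.random.default_rng(0)
k = rng.standard_normal((16, 8))
v = rng.standard_normal((16, 8))
out = rwkv_time_mix(k, v, decay=np.full(8, 0.5), bonus=np.zeros(8))
print(out.shape)  # (16, 8)
```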