block-recurrent-transformer-pytorch vs block-recurrent-transformer-py

| | block-recurrent-transformer-pytorch | block-recurrent-transformer-py |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 204 | - |
| Growth | - | - |
| Activity | 5.0 | - |
| Last commit | 10 months ago | - |
| Language | Python | - |
| License | MIT License | - |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects tracked.
Mentions

Both projects are mentioned in the same Hacker News discussion:

From Deep to Long Learning
"that line of research is still going. https://github.com/lucidrains/block-recurrent-transformer-py... i think it is worth continuing research on both fronts."
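For context on what that line of research looks like in code, here is a minimal, self-contained sketch of the block-recurrent idea in PyTorch: the sequence is processed in fixed-width blocks, and a small set of learned state vectors is carried from block to block, attending to each block and being updated through a gate. This is an illustrative sketch only; every class and parameter name here is hypothetical and is not the API of either repository.

```python
import torch
import torch.nn as nn

class BlockRecurrentSketch(nn.Module):
    """Illustrative only: process a long sequence in fixed-width blocks,
    carrying learned state vectors between blocks (hypothetical names)."""

    def __init__(self, dim=512, block_width=512, num_state_vectors=512, heads=8):
        super().__init__()
        self.block_width = block_width
        # learned initial recurrent state, shared across sequences
        self.init_state = nn.Parameter(torch.randn(num_state_vectors, dim))
        # tokens attend to their own block plus the carried state
        self.token_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # state vectors attend to the block to gather what to remember
        self.state_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # simple sigmoid gate blending the new state into the old state
        self.state_gate = nn.Linear(dim, dim)

    def forward(self, x):
        batch, _, _ = x.shape
        state = self.init_state.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        # recurrence over blocks: sequential in n / block_width steps,
        # fully parallel within each block
        for block in x.split(self.block_width, dim=1):
            # tokens read from their block and from the carried state
            context = torch.cat((state, block), dim=1)
            block_out, _ = self.token_attn(block, context, context)
            outputs.append(block_out)
            # the state reads from the block, then is gated against itself
            new_state, _ = self.state_attn(state, block, block)
            gate = torch.sigmoid(self.state_gate(new_state))
            state = gate * new_state + (1 - gate) * state
        return torch.cat(outputs, dim=1), state

model = BlockRecurrentSketch()
x = torch.randn(1, 1024, 512)  # a 1024-token segment of embeddings
out, state = model(x)          # `state` can seed the next segment
```

The real paper and repository add machinery this skeleton omits, such as causal masking, positional embeddings, and cached memories carried alongside the state.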
What are some alternatives?
iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
heinsen_routing - Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
flash-attention-jax - Implementation of Flash Attention in Jax
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
musiclm-pytorch - Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in PyTorch