memorizing-transformers-pytorch vs flamingo-pytorch

| | memorizing-transformers-pytorch | flamingo-pytorch |
|---|---|---|
| Mentions | 5 | 3 |
| Stars | 611 | 1,134 |
| Growth | - | - |
| Activity | 2.6 | 0.0 |
| Last Commit | 10 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
memorizing-transformers-pytorch
- What can LLMs never do?
  At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups (https://github.com/lucidrains/memorizing-transformers-pytorch) or via routed queries with https://github.com/glassroom/heinsen_routing. Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
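For readers unfamiliar with the KNN-lookup idea mentioned above, here is a minimal, self-contained sketch: cache key/value pairs from past segments in an external memory, retrieve each new query's nearest neighbors, and attend over just those. Everything here (the function name, brute-force search in place of a real index such as faiss) is illustrative, not the repo's actual API.

```python
import torch

def knn_memory_attend(queries, mem_keys, mem_values, k=32):
    """Brute-force KNN lookup into an external memory, then attend
    over only the retrieved key/value pairs.

    queries:    (batch, n, dim)  current-segment queries
    mem_keys:   (m, dim)         cached keys from past segments
    mem_values: (m, dim)         cached values from past segments
    """
    # similarity of every query to every memory key
    sims = torch.einsum('b n d, m d -> b n m', queries, mem_keys)

    # keep only the top-k nearest memories per query
    topk_sims, topk_idx = sims.topk(k, dim=-1)   # (b, n, k)
    topk_values = mem_values[topk_idx]           # (b, n, k, d)

    # attend over the retrieved neighbors
    attn = topk_sims.softmax(dim=-1)
    return torch.einsum('b n k, b n k d -> b n d', attn, topk_values)

# toy usage: 8 query positions attending into a 1024-slot memory
q = torch.randn(2, 8, 64)
mk, mv = torch.randn(1024, 64), torch.randn(1024, 64)
out = knn_memory_attend(q, mk, mv)   # (2, 8, 64)
```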
- A single API call using almost the whole 32k context window costs around $2.
  There is a GitHub repo, https://github.com/lucidrains/memorizing-transformers-pytorch, whose implementation deviates from the paper slightly: it uses hybrid attention across local and distant attention logits (rather than the sigmoid-gate setup), and cosine-similarity attention (with a learned temperature) for the KNN attention layer. There are also some features not mentioned in the paper, such as Transformer-XL memories and shifting memories down. There are no easy-to-use Memorizing Transformers implementations yet.
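A hedged sketch of the two deviations just described: a single softmax over the concatenated local and retrieved-memory logits instead of a learned sigmoid gate, and cosine-similarity scores with a learned temperature on the KNN branch. Shapes, names, and the shared temperature are illustrative simplifications, not the repo's actual code (which, for one, treats the local branch differently).

```python
import torch
import torch.nn.functional as F
from torch import nn

class HybridKNNAttention(nn.Module):
    """One softmax over local + retrieved-memory logits, with
    cosine-similarity scores scaled by a learned temperature."""

    def __init__(self, dim, init_temperature=10.0):
        super().__init__()
        # learned temperature for the cosine-similarity scores
        self.temperature = nn.Parameter(torch.tensor(init_temperature))

    def forward(self, q, local_k, local_v, mem_k, mem_v):
        # q, local_k, local_v: (b, n, d)  within-segment attention
        # mem_k, mem_v:        (b, n, k, d)  top-k retrieved per query

        # l2-normalize so dot products become cosine similarities
        # (simplification: the local branch is normalized here too)
        q, local_k = map(lambda t: F.normalize(t, dim=-1), (q, local_k))
        mem_k = F.normalize(mem_k, dim=-1)

        local_logits = torch.einsum('b i d, b j d -> b i j', q, local_k)
        mem_logits = torch.einsum('b i d, b i k d -> b i k', q, mem_k)

        # hybrid: one softmax across local and distant logits together,
        # rather than a learned sigmoid gate between two attention outputs
        logits = torch.cat((local_logits, mem_logits), dim=-1) * self.temperature
        attn = logits.softmax(dim=-1)

        n = local_logits.shape[-1]
        local_out = torch.einsum('b i j, b j d -> b i d', attn[..., :n], local_v)
        mem_out = torch.einsum('b i k, b i k d -> b i d', attn[..., n:], mem_v)
        return local_out + mem_out
```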
- You’ll be able to run ChatGPT on your own device quite easily very soon
- [R] Memorizing Transformers - Google 2022
  GitHub: https://github.com/lucidrains/memorizing-transformers-pytorch
- Memorizing Transformers – models that can acquire new knowledge immediately
  I have an implementation of this over at https://github.com/lucidrains/memorizing-transformers-pytorch, for any researcher exploring retrieval and memory with attention networks.
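The "acquire new knowledge immediately" framing follows from the memory being non-parametric: new key/value pairs are retrievable the moment they are appended, with no gradient updates. A toy illustration of that idea, with a hypothetical KNNMemory class that is not the repo's API:

```python
import torch

class KNNMemory:
    """Toy append-only key/value memory; new facts become
    retrievable immediately, with no weight updates."""

    def __init__(self, dim, capacity=64_000):
        self.keys = torch.empty(0, dim)
        self.values = torch.empty(0, dim)
        self.capacity = capacity

    def add(self, keys, values):
        # append, evicting the oldest entries past capacity
        self.keys = torch.cat((self.keys, keys))[-self.capacity:]
        self.values = torch.cat((self.values, values))[-self.capacity:]

    def search(self, queries, k=32):
        # queries: (n, dim) -> top-k (keys, values) per query
        sims = queries @ self.keys.t()
        idx = sims.topk(min(k, self.keys.shape[0]), dim=-1).indices
        return self.keys[idx], self.values[idx]

memory = KNNMemory(dim=64)
memory.add(torch.randn(100, 64), torch.randn(100, 64))  # "learned" instantly
k, v = memory.search(torch.randn(4, 64))                # retrievable at once
```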
flamingo-pytorch
- Flamingo tackling visual understanding
- GitHub - lucidrains/flamingo-pytorch: Implementation of Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
- [R] Flamingo: a Visual Language Model for Few-Shot Learning (from DeepMind)
  Code, for anyone interested: https://github.com/lucidrains/flamingo-pytorch
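Flamingo's core architectural trick is inserting tanh-gated cross-attention blocks into a frozen language model, with the gates initialized to zero so training starts from the unmodified LM. A minimal sketch of that gating, using illustrative shapes and module names rather than the repo's exact code:

```python
import torch
from torch import nn

class GatedCrossAttentionBlock(nn.Module):
    """Text tokens cross-attend to visual tokens; tanh gates,
    initialized to zero, let the frozen LM start unperturbed."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                nn.Linear(dim * 4, dim))
        # zero-init gates: at step 0 the block is an identity function
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ff_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text, visual):
        # text: (b, n, d) LM hidden states; visual: (b, m, d) image tokens
        attn_out, _ = self.attn(text, visual, visual)
        text = text + attn_out * self.attn_gate.tanh()
        text = text + self.ff(text) * self.ff_gate.tanh()
        return text

block = GatedCrossAttentionBlock(dim=512)
out = block(torch.randn(2, 16, 512), torch.randn(2, 64, 512))  # (2, 16, 512)
```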
What are some alternatives?
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
x-transformers - A simple but complete full-attention transformer with a set of promising experimental features from various papers
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
nuwa-pytorch - Implementation of NÜWA, state of the art attention network for text to video synthesis, in Pytorch
slot-attention - Implementation of Slot Attention from GoogleAI
t5-pytorch - Implementation of Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer in PyTorch.
soundstorm-pytorch - Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
spin-model-transformers - Physics-inspired transformer modules based on mean-field dynamics of vector-spin models in JAX