memorizing-transformers-pytorch
Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate nearest neighbors, in Pytorch (by lucidrains)
memorizing-transformers-pytorc
By lucidrains
|  | memorizing-transformers-pytorch | memorizing-transformers-pytorc |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 611 | - |
| Growth | - | - |
| Activity | 2.6 | - |
| Last commit | 10 months ago | - |
| Language | Python | - |
| License | MIT License | - |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
memorizing-transformers-pytorch
Posts with mentions or reviews of memorizing-transformers-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.
- What can LLMs never do?

  At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups (https://github.com/lucidrains/memorizing-transformers-pytorc...) or via routed queries (https://github.com/glassroom/heinsen_routing). Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
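For readers unfamiliar with the setup that comment describes, here is a minimal, self-contained PyTorch sketch of attention over an external memory retrieved via KNN lookup. It is an illustration of the general idea only, not code from either repository; the names and shapes are assumptions, and a real system would use an approximate-nearest-neighbor index (e.g. faiss) rather than the brute-force search shown here.

```python
import torch

class KNNMemory:
    """Toy external memory of (key, value) pairs with brute-force KNN search."""

    def __init__(self, dim):
        self.keys = torch.empty(0, dim)    # all memorized keys
        self.values = torch.empty(0, dim)  # corresponding values

    def add(self, keys, values):
        # store the (key, value) pairs from the current segment
        self.keys = torch.cat([self.keys, keys], dim=0)
        self.values = torch.cat([self.values, values], dim=0)

    def search(self, queries, topk):
        # brute-force inner-product KNN; real implementations use an ANN index
        sims = queries @ self.keys.t()                           # (n, num_memories)
        sims, idx = sims.topk(min(topk, self.keys.shape[0]), dim=-1)
        return self.keys[idx], self.values[idx]                  # (n, topk, dim) each

def knn_attention(queries, memory, topk=32):
    # each query attends only over its own retrieved neighbors
    mem_k, mem_v = memory.search(queries, topk)
    scale = queries.shape[-1] ** -0.5
    logits = torch.einsum('nd,nkd->nk', queries, mem_k) * scale
    attn = logits.softmax(dim=-1)
    return torch.einsum('nk,nkd->nd', attn, mem_v)

# usage: memorize one segment, then retrieve from it while processing the next
dim = 64
memory = KNNMemory(dim)
memory.add(torch.randn(512, dim), torch.randn(512, dim))
out = knn_attention(torch.randn(16, dim), memory, topk=8)
print(out.shape)  # torch.Size([16, 64])
```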
- A single API call using almost the whole 32k context window costs around $2.

  There is a GitHub repo, https://github.com/lucidrains/memorizing-transformers-pytorch, whose implementation deviates slightly from the paper: it uses hybrid attention over the local and distant attention logits (rather than the paper's sigmoid-gate setup), and cosine-similarity attention with a learned temperature for the KNN attention layer. It also adds some features not mentioned in the paper, such as Transformer-XL memories and shifting memories down. There are no easy-to-use Memorizing Transformers implementations yet.
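To make those deviations concrete, here is a rough single-head sketch of what they might look like: cosine-similarity attention with a learned temperature, and one softmax over the concatenated local and distant logits in place of a sigmoid gate. This is an interpretation of the description above, not the repository's actual code; all names and shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridKNNAttentionHead(nn.Module):
    """One attention head that mixes local keys and retrieved (distant) memory
    keys with a single joint softmax, using cosine-similarity logits scaled by
    a learned temperature. Projections are omitted to keep the sketch short."""

    def __init__(self, init_temperature=10.0):
        super().__init__()
        # learned temperature for cosine-similarity attention
        self.temperature = nn.Parameter(torch.tensor(init_temperature))

    def forward(self, q, local_k, local_v, mem_k, mem_v):
        # q: (n, d); local_k/local_v: (m, d); mem_k/mem_v: (n, k, d), retrieved per query
        q = F.normalize(q, dim=-1)
        local_k = F.normalize(local_k, dim=-1)
        mem_k = F.normalize(mem_k, dim=-1)

        local_logits = (q @ local_k.t()) * self.temperature                   # (n, m)
        mem_logits = torch.einsum('nd,nkd->nk', q, mem_k) * self.temperature  # (n, k)

        # one softmax across local and distant logits, rather than a sigmoid gate
        attn = torch.cat([local_logits, mem_logits], dim=-1).softmax(dim=-1)
        local_attn, mem_attn = attn.split([local_k.shape[0], mem_k.shape[1]], dim=-1)

        return local_attn @ local_v + torch.einsum('nk,nkd->nd', mem_attn, mem_v)

# usage with random tensors
head = HybridKNNAttentionHead()
out = head(torch.randn(16, 64), torch.randn(128, 64), torch.randn(128, 64),
           torch.randn(16, 8, 64), torch.randn(16, 8, 64))
print(out.shape)  # torch.Size([16, 64])
```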
- You’ll be able to run ChatGPT on your own device quite easily very soon
- [R] Memorizing Transformers - Google 2022

  GitHub: https://github.com/lucidrains/memorizing-transformers-pytorch
- Memorizing Transformers – models that can acquire new knowledge immediately

  have an implementation of this over at https://github.com/lucidrains/memorizing-transformers-pytorc..., for any researcher exploring retrieval and memory with attention networks
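For orientation, a hypothetical usage sketch of that package follows. The class, method, and argument names below (MemorizingTransformer, create_knn_memories, memorizing_layers, and so on) are assumptions based on the repository's README and may differ from the current API; consult the repo before relying on them.

```python
# Hypothetical usage; names follow the repo's README and are not guaranteed current.
import torch
from memorizing_transformers_pytorch import MemorizingTransformer

model = MemorizingTransformer(
    num_tokens = 20000,          # vocabulary size
    dim = 512,                   # model dimension
    depth = 8,                   # number of layers
    memorizing_layers = (4, 5),  # which layers get the KNN memory
    max_knn_memories = 64000,    # maximum number of memories to keep
    num_retrieved_memories = 32, # neighbors retrieved per query
)

data = torch.randint(0, 20000, (2, 1024))  # mock token ids

knn_memories = model.create_knn_memories(batch_size = 2)  # per-batch KNN memories
logits = model(data, knn_memories = knn_memories)
```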
memorizing-transformers-pytorc
Posts with mentions or reviews of memorizing-transformers-pytorc. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.
- What can LLMs never do?

  At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups (https://github.com/lucidrains/memorizing-transformers-pytorc...) or via routed queries (https://github.com/glassroom/heinsen_routing). Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
- Memorizing Transformers – models that can acquire new knowledge immediately

  have an implementation of this over at https://github.com/lucidrains/memorizing-transformers-pytorc..., for any researcher exploring retrieval and memory with attention networks
What are some alternatives?
When comparing memorizing-transformers-pytorch and memorizing-transformers-pytorc you can also consider the following projects:
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch