memorizing-transformers-pytorch vs RETRO-pytorch

| | memorizing-transformers-pytorch | RETRO-pytorch |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 611 | 830 |
| Growth | - | - |
| Activity | 2.6 | 2.8 |
| Latest Commit | 10 months ago | 6 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
memorizing-transformers-pytorch
What can LLMs never do?
At one point I experimented a little with transformers that had access to external memory searchable via KNN lookups https://github.com/lucidrains/memorizing-transformers-pytorc... or via routed queries with https://github.com/glassroom/heinsen_routing . Both approaches seemed to work for me, but I had to put that work on hold for reasons outside my control.
A single API call using almost the whole 32k context window costs around $2.
There is a GitHub repo, https://github.com/lucidrains/memorizing-transformers-pytorch, whose implementation deviates slightly from the paper: it uses hybrid attention over local and distant attention logits (rather than the sigmoid-gate setup), and cosine-similarity attention (with a learned temperature) for the KNN attention layer. It also has some features not mentioned in the paper, such as Transformer-XL memories and shifting memories down. There are no easy-to-use Memorizing Transformers implementations yet.
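To make the hybrid-attention idea in the excerpt above concrete, here is a minimal, self-contained sketch of attending over local keys and KNN-retrieved memory keys with a single softmax, using cosine-similarity attention with a learned temperature. This is not the repo's actual code; the class name, shapes, and single-head layout are all illustrative, and the retrieved memories are assumed to be fetched beforehand (e.g. by a faiss index).

```python
import torch
import torch.nn.functional as F
from torch import nn

class HybridKNNAttention(nn.Module):
    """Toy single-head attention that mixes local context with
    KNN-retrieved memories by concatenating their logits before
    one softmax (a hybrid alternative to the paper's sigmoid gate)."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        # learned temperature for cosine-similarity attention
        self.scale = nn.Parameter(torch.tensor(10.0))
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x, mem_k, mem_v):
        # x: (batch, seq, dim)
        # mem_k / mem_v: (batch, seq, k, dim) - k memories pre-retrieved
        # per token (stand-in for a KNN lookup into an external index)
        q = self.to_q(x)
        k, v = self.to_kv(x).chunk(2, dim=-1)

        # cosine-similarity logits: l2-normalize queries and keys,
        # then scale by the learned temperature
        q, k, mk = map(lambda t: F.normalize(t, dim=-1), (q, k, mem_k))

        local_logits = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale
        mem_logits = torch.einsum('b i d, b i k d -> b i k', q, mk) * self.scale

        # causal mask for the local branch
        i, j = local_logits.shape[-2:]
        causal = torch.ones(i, j, dtype=torch.bool, device=x.device).triu(1)
        local_logits = local_logits.masked_fill(causal, float('-inf'))

        # hybrid attention: one softmax over local and distant logits together
        logits = torch.cat((local_logits, mem_logits), dim=-1)
        attn = logits.softmax(dim=-1)

        local_attn, mem_attn = attn[..., :j], attn[..., j:]
        out = torch.einsum('b i j, b j d -> b i d', local_attn, v)
        out = out + torch.einsum('b i k, b i k d -> b i d', mem_attn, mem_v)
        return self.to_out(out)

# usage with random tensors: 32 retrieved memories per position
attn = HybridKNNAttention(dim=64)
x = torch.randn(1, 128, 64)
mem_k = torch.randn(1, 128, 32, 64)
mem_v = torch.randn(1, 128, 32, 64)
out = attn(x, mem_k, mem_v)  # (1, 128, 64)
```

Because local and distant logits share one softmax, the model learns how much probability mass to route to retrieved memories per query, instead of learning a separate gate as in the paper's sigmoid setup.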
You’ll be able to run ChatGPT on your own device quite easily very soon
[R] Memorizing Transformers - Google 2022
GitHub: https://github.com/lucidrains/memorizing-transformers-pytorch
Memorizing Transformers – models that can acquire new knowledge immediately
I have an implementation of this over at https://github.com/lucidrains/memorizing-transformers-pytorc..., for any researcher exploring retrieval and memory with attention networks.
RETRO-pytorch
[D] Any pre-trained retrieval-based language models available?
There's a GitHub project that an individual put together based on the RETRO paper. If you check out the issues list, there is some info on work toward a pretrained model.
[D] Is there an open-source implementation of the Retrieval-Enhanced Transformer (RETRO)?
I'll give it a shot: https://github.com/lucidrains/RETRO-pytorch 👍
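For context on what the RETRO paper's core mechanism looks like, here is a minimal, hypothetical sketch of chunked cross-attention, where each chunk of the decoder sequence attends to the encodings of its retrieved neighbour chunks. This is not RETRO-pytorch's actual API; the class, dimensions, and single-head, no-causal-offset layout are simplifications for illustration.

```python
import torch
from torch import nn

class ChunkedCrossAttention(nn.Module):
    """Toy RETRO-style chunked cross-attention: the tokens of each
    sequence chunk attend to the encoded retrieved neighbours of
    that chunk (single head, no causal offset, for clarity)."""

    def __init__(self, dim, chunk_size):
        super().__init__()
        self.chunk_size = chunk_size
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x, retrieved):
        # x: (batch, seq, dim), with seq divisible by chunk_size
        # retrieved: (batch, num_chunks, num_neighbors * neighbor_len, dim),
        # encoded neighbour chunks fetched by nearest-neighbour search
        b, n, d = x.shape
        num_chunks = n // self.chunk_size

        q = self.to_q(x).reshape(b, num_chunks, self.chunk_size, d)
        k, v = self.to_kv(retrieved).chunk(2, dim=-1)

        # each chunk's queries attend only to that chunk's retrieved neighbours
        logits = torch.einsum('b m i d, b m j d -> b m i j', q, k) * self.scale
        attn = logits.softmax(dim=-1)
        out = torch.einsum('b m i j, b m j d -> b m i d', attn, v)
        return self.to_out(out.reshape(b, n, d))

# usage with random tensors: 2 chunks of 64 tokens,
# each with 2 retrieved neighbours of 128 encoded tokens
cca = ChunkedCrossAttention(dim=64, chunk_size=64)
x = torch.randn(1, 128, 64)
retrieved = torch.randn(1, 2, 2 * 128, 64)
out = cca(x, retrieved)  # (1, 128, 64)
```

The chunking is what keeps retrieval tractable: nearest-neighbour lookup and cross-attention happen once per chunk rather than once per token, so retrieval cost grows with the number of chunks, not the sequence length.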
What are some alternatives?
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
TorchPQ - Approximate nearest neighbor search with product quantization on GPU in pytorch and cuda
faiss - A library for efficient similarity search and clustering of dense vectors.
deepmind-research - This repository contains implementations and illustrative code to accompany DeepMind publications
retomaton - PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022)
RetGen
SHREC2023-ANIMAR - Source codes of team TikTorch (1st place solution) for track 2 and 3 of the SHREC2023 Challenge
t5-pytorch - Implementation of Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer in PyTorch.