landmark-attention
Landmark Attention: Random-Access Infinite Context Length for Transformers (by epfml)
landmark-attention-qlora
Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA (by eugenepentland)
| | landmark-attention | landmark-attention-qlora |
|---|---|---|
| Mentions | 13 | 3 |
| Stars | 390 | 124 |
| Growth | 1.0% | - |
| Activity | 5.4 | 5.6 |
| Last commit | 5 months ago | 11 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
landmark-attention
Posts with mentions or reviews of landmark-attention. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-21.
- LLMs use a surprisingly simple mechanism to retrieve some stored knowledge
It indeed is. An attention mechanism's key and value matrices grow linearly with context length. With PagedAttention[1], we could imagine an external service providing context. The hard part is the how, of course. We can't load our entire database into every conversation, and I suspect there are also training challenges (perhaps addressed via Landmark Attention[2] and other mechanisms for efficiently retrieving additional key-value matrices).
To sustain 20-50 tokens/sec, retrieved keys and values must arrive within 50-20 ms. Pausing the autoregressive transformer when it creates a Q vector stalls the batch, so we need a way to predict queries _ahead_ of where they'd be useful.
[1] https://arxiv.org/abs/2309.06180
[2] https://arxiv.org/abs/2305.16300
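The post above makes two quantitative points: the key/value cache grows linearly with context length, and the generation rate fixes the latency budget for any externally retrieved context. A back-of-the-envelope sketch of both (the model dimensions are assumptions roughly matching LLaMA-7B, not figures from the post):

```python
# Back-of-the-envelope sketch: KV-cache growth and per-token latency budget.
# Model dimensions below are assumptions roughly matching LLaMA-7B in fp16.

n_layers = 32          # assumed number of transformer layers
n_heads = 32           # assumed number of attention heads
head_dim = 128         # assumed per-head dimension
bytes_per_elem = 2     # fp16

def kv_cache_bytes(context_len: int) -> int:
    """Keys and values are cached per layer, so the cache grows linearly with context."""
    return 2 * n_layers * n_heads * head_dim * bytes_per_elem * context_len

for ctx in (2_048, 32_768):
    print(f"{ctx:>6} tokens -> {kv_cache_bytes(ctx) / 2**30:.2f} GiB of KV cache")

# Latency budget: at 20-50 tokens/sec, externally retrieved keys/values must
# arrive within 1/20 to 1/50 of a second, i.e. 50 ms down to 20 ms per token.
for tps in (20, 50):
    print(f"{tps} tokens/sec -> {1000 / tps:.0f} ms budget per token")
```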
- Which are the best LLMs that can explain code?
- Landmark Attention Oobabooga Support + GPTQ Quantized Models!
Thanks again to the team who worked on the original landmark paper for making this possible! https://github.com/epfml/landmark-attention They made an update to the repo, and the code I wrote 4 days ago is now marked legacy, so I'm in the process of updating it again...
- New OpenAI update: lowered pricing and a new 16k context version of GPT-3.5
- Context tokens are the bane of all fun.
Implementing other solutions such as Landmark Attention to allow for much larger context windows. Landmark Attention basically creates new 'landmark tokens' that represent larger chunks of input tokens, and the language model is fine-tuned so the attention layer can access the relevant landmark tokens, effectively overcoming context window limits without relying on external retrieval processes like LangChain.
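For intuition, here is a minimal, untrained sketch of the block-retrieval idea that post describes, not the epfml implementation: the context is split into fixed-size blocks, each block is summarized by one landmark vector (here simply the mean of its keys, standing in for a learned landmark token), the current query scores the landmarks, and ordinary attention then runs only over the tokens of the top-k selected blocks.

```python
import torch
import torch.nn.functional as F

def landmark_retrieval_attention(q, keys, values, block_size=64, top_k=2):
    """Illustrative only: retrieve a few blocks via landmark scores, then attend.

    q: (d,) query; keys/values: (seq_len, d); seq_len assumed divisible by block_size.
    """
    seq_len, d = keys.shape
    n_blocks = seq_len // block_size
    k_blocks = keys.view(n_blocks, block_size, d)
    v_blocks = values.view(n_blocks, block_size, d)

    # Stand-in landmark per block: the mean of its keys. In the real method the
    # landmark is a special token whose representation is learned during fine-tuning.
    landmarks = k_blocks.mean(dim=1)                     # (n_blocks, d)

    # Pick the blocks whose landmarks the query attends to most strongly.
    block_scores = landmarks @ q / d**0.5                # (n_blocks,)
    top_blocks = block_scores.topk(top_k).indices

    # Ordinary softmax attention restricted to the retrieved blocks.
    k_sel = k_blocks[top_blocks].reshape(-1, d)          # (top_k * block_size, d)
    v_sel = v_blocks[top_blocks].reshape(-1, d)
    attn = F.softmax(k_sel @ q / d**0.5, dim=0)          # (top_k * block_size,)
    return attn @ v_sel                                  # (d,)

out = landmark_retrieval_attention(torch.randn(64), torch.randn(512, 64), torch.randn(512, 64))
print(out.shape)  # torch.Size([64])
```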
- "Today, the diff weights for LLaMA 7B were published which enable it to support context sizes of up to 32k"
Links: https://arxiv.org/abs/2305.16300 https://huggingface.co/epfml/landmark-attention-llama7b-wdiff https://github.com/epfml/landmark-attention
- The weight diffs for 32K context length LLaMA 7B trained with landmark attention have been released
Paper: https://arxiv.org/abs/2305.16300
- [N] (Update: Code Released) Landmark Attention: Random-Access Infinite Context Length for Transformers
- (Code Released) Landmark Attention: Random-Access Infinite Context Length for Transformers
- Landmark Attention: Random-Access Infinite Context Length for Transformers
The link to the repo (https://github.com/epfml/landmark-attention) leads to "we'll publish something later".
landmark-attention-qlora
Posts with mentions or reviews of landmark-attention-qlora. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-21.
- Which are the best LLMs that can explain code?
- Landmark Attention Oobabooga Support + GPTQ Quantized Models!
We need more effort put into properly evaluating these models. It is still very early days, and we are looking for feedback on their performance and any issues you run into. Please feel free to chat with us! You can find a link in my QLoRA repo. https://github.com/eugenepentland/landmark-attention-qlora
- Landmark attention models released, claim to get up to 32k context on 7B llama models, 5K on 13B
Github link: https://github.com/eugenepentland/landmark-attention-qlora
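For context on what "QLoRA" means here, below is a generic 4-bit-plus-LoRA setup sketch using Hugging Face transformers, peft, and bitsandbytes. The base checkpoint name, LoRA rank, and target modules are illustrative assumptions, and the repo's landmark-specific modeling changes and training script are not reproduced.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Generic QLoRA setup (hyperparameters are illustrative guesses, not the repo's
# values): load the base model in 4-bit NF4 and attach LoRA adapters so the
# landmark behaviour can be fine-tuned on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",           # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                            # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```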
What are some alternatives?
When comparing landmark-attention and landmark-attention-qlora you can also consider the following projects:
can-ai-code - Self-evaluating interview for AI coders
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.