| | LongLoRA | mamba |
|---|---|---|
| Mentions | 4 | 15 |
| Stars | 2,478 | 10,002 |
| Growth | 3.8% | 19.5% |
| Activity | 9.1 | 8.1 |
| Last commit | 3 months ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LongLoRA
-
Ask HN: AI/ML papers to catch up with current state of AI?
LongAlpaca / One of many ways to extend context, and a useful dataset / https://arxiv.org/abs/2309.12307
-
Aurelian: 70B 32K story-writing (and more) [Alpha]
Finally, LongLoRA is a method to reduce the amount of computation over a long context, and it also trains the embed and norm layers fully, that is, with no quantization or LoRA for those. They are small layers and cheap to train without much VRAM cost, but the LongLoRA authors noticed they have a big impact on long-context performance. I am not using their computation reduction methods, but I am following their suggestion to train the embed/norm layers fully.
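If it helps, here is a minimal sketch of that setup using Hugging Face PEFT, which supports training whole modules alongside LoRA adapters via `modules_to_save`; the model name and hyperparameters are illustrative, not the poster's actual configuration.

```python
# Minimal sketch: LoRA on attention, with embedding and final norm layers
# trained fully (no LoRA, no quantization), per the LongLoRA suggestion.
# Model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # LoRA on attention
    modules_to_save=["embed_tokens", "norm"],  # train these small layers fully
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # embed/norm dominate the trainable count
```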
-
Why train on Yi 4K instead of 200K?
That used to be true, but methods like LongLoRA and LongQLoRA demonstrate that you can extend the context length of a foundation model.
-
Using Overfitting to Debug My LLM [P]
For reference, I am using the LongLoRA SFT implementation to fine-tune a CodeLlama model on a code-generation instruction dataset. I have also attached my evaluation code below:
mamba
-
Based: Simple linear attention language models
> how the recall can grow unbounded with no tradeoff
this? https://github.com/state-spaces/mamba/issues/175
-
Mamba: The Easy Way
If you want to learn this stuff as a computer engineer, you can read the code here [0]. I find the math quite helpful.
[0]: https://github.com/state-spaces/mamba
-
FLaNK Stack 05 Feb 2024
-
Introduction to State Space Models (SSM)
-
Fortran inference code for the Mamba state space language model
This model was discussed recently (https://news.ycombinator.com/item?id=38522428). It's a new kind of ML model architecture that can be used in place of a transformer in LLMs.
See also the original repo from the paper: https://github.com/state-spaces/mamba
-
Mamba outperforms transformers "everywhere we tried"
[2] - https://github.com/state-spaces/mamba
Out of curiosity, does anyone feel there's any benefit to linking to reddit when we could link to the original source directly? I for one don't click through to read the discussion on reddit; if I wanted that sort of discussion, I would browse there, not HN.
-
GitHub – State-Spaces/Mamba
-
Generate valid JSON with Mamba models
The library is compatible with any auto-regressive model, not just transformers. To prove the point, we integrated Mamba, a new state-space model architecture, into the library. Try it out!
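As a rough illustration of what that integration looks like, here is a hedged sketch using the Outlines API; the `outlines.models.mamba` entry point and the model name are assumptions based on this announcement, so check the library's docs for the exact call.

```python
# Hedged sketch: schema-constrained JSON generation with a Mamba model via
# Outlines. `outlines.models.mamba` and the model name are assumptions from
# the announcement; consult the Outlines docs for the exact entry point.
import outlines

model = outlines.models.mamba("state-spaces/mamba-2.8b")  # assumed constructor

schema = """{
    "title": "Character",
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"}
    },
    "required": ["name", "age"]
}"""

generator = outlines.generate.json(model, schema)
character = generator("Describe a fantasy character as JSON: ")
print(character)  # output is guaranteed to match the schema
```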
-
[D] Thoughts on Mamba?
I ran Karpathy's nanoGPT, replacing self-attention with Mamba, on his TinyShakespeare dataset, and within 5 minutes it started spitting out the following:
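For anyone curious what that swap looks like in code, here is a minimal sketch of a nanoGPT-style block with the attention module replaced by a Mamba mixer from the `mamba_ssm` package; the hyperparameters are illustrative, not the poster's actual configuration.

```python
# Minimal sketch: a nanoGPT-style block with self-attention swapped for a
# Mamba mixer (https://github.com/state-spaces/mamba). Hyperparameters are
# illustrative, not the poster's actual configuration.
import torch.nn as nn
from mamba_ssm import Mamba

class MambaBlock(nn.Module):
    def __init__(self, n_embd: int):
        super().__init__()
        self.ln_1 = nn.LayerNorm(n_embd)
        # Drop-in replacement for CausalSelfAttention; Mamba is causal by design.
        self.mixer = Mamba(d_model=n_embd, d_state=16, d_conv=4, expand=2)
        self.ln_2 = nn.LayerNorm(n_embd)
        self.mlp = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd),
            nn.GELU(),
            nn.Linear(4 * n_embd, n_embd),
        )

    def forward(self, x):  # x: (batch, seq_len, n_embd)
        x = x + self.mixer(self.ln_1(x))
        x = x + self.mlp(self.ln_2(x))
        return x
```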
-
Mamba-Chat: A Chat LLM based on State Space Models
You might have come across the Mamba paper in recent days; it was the first attempt to scale state space models up to 2.8B parameters on language data.
What are some alternatives?
relora - Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
miniforge - A conda-forge distribution.
Zicklein - Finetuning instruct-LLaMA on German datasets.
pip - The Python package installer
torch-adapters - Small library of PyTorch adaptation modules
llm.f90 - LLM inference in Fortran
discus - A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
punica - Serving multiple LoRA-finetuned LLMs as one
mamba-chat - Mamba-Chat: A chat LLM based on the state-space model architecture 🐍
RingAttention - Transformers with Arbitrarily Large Context
spack - A flexible package manager that supports multiple versions, configurations, platforms, and compilers.