LongLoRA vs LoftQ

 | LongLoRA | LoftQ
---|---|---
Mentions | 4 | 2
Stars | 2,478 | 164
Growth | 3.8% | -
Activity | 9.1 | 8.5
Last commit | 3 months ago | 19 days ago
Language | Python | Python
License | Apache License 2.0 | MIT License
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LongLoRA
- Ask HN: AI/ML papers to catch up with current state of AI?
  LongAlpaca / One of many ways to extend context, and a useful dataset / https://arxiv.org/abs/2309.12307
- Aurelian: 70B 32K story-writing (and more) [Alpha]
  Finally, LongLoRA is a method to reduce the amount of computation over a large context; it also trains the embed and norm layers fully, that is, with no quantization or LoRA for those. They are small layers that are cheap to train in terms of VRAM, but the LongLoRA authors noticed they have a big impact on long-context performance. I am not using their computation-reduction methods, but I am using their suggestion to train the embed/norm layers fully (see the sketch after this list of mentions).
- Why train on Yi 4K instead of 200K?
  That used to be true, but things like LongLoRA and LongQLoRA demonstrate that you can increase the context length of a foundation model.
- Using Overfitting to Debug My LLM [P]
  For reference, I am using the LongLoRA SFT implementation to fine-tune a CodeLLaMA model on a code-generation instruction dataset. I have also attached my evaluation code below (omitted from this excerpt).
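To make the embed/norm suggestion from the Aurelian post above concrete, here is a minimal sketch using Hugging Face Transformers and PEFT. It is an assumption-laden illustration, not the LongLoRA implementation: the checkpoint id, LoRA rank, and target modules are placeholders, the context is stretched with plain linear RoPE scaling (position interpolation), parameters whose names contain "embed" or "norm" are unfrozen in the spirit of the LongLoRA repo, and LongLoRA's shifted sparse attention is omitted entirely.

```python
# Hypothetical sketch: LoRA fine-tuning for longer context while fully
# training the embedding and normalization layers, in the spirit of LongLoRA.
import torch
from transformers import AutoConfig, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint

# Stretch RoPE positions 8x (e.g. 4k -> 32k) via linear interpolation.
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "linear", "factor": 8.0}

model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype=torch.bfloat16
)

# Standard LoRA on the attention projections (illustrative choice).
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

# The extra LongLoRA-style step: make embedding and norm layers fully
# trainable; everything else stays frozen except the LoRA adapters.
for name, param in model.named_parameters():
    if "embed" in name or "norm" in name:
        param.requires_grad_(True)

model.print_trainable_parameters()
```

If you plan to save or merge the adapter afterwards, PEFT's `modules_to_save` option in `LoraConfig` is another way to mark such layers as fully trainable so their weights are stored alongside the adapter.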
LoftQ
- Aurelian: 70B 32K story-writing (and more) [Alpha]
  But the quantization is done before training, and may not be optimal as you train the model. LoftQ is a method to re-compute the quantization, taking the current full model (base model + learned LoRA) into account (a minimal sketch follows this list of mentions).
- New LoftQ quantization technique outperforms QLora
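LoftQ-style initialization is also exposed in Hugging Face PEFT, so the quantization-aware LoRA initialization described above can be tried without the original repo. A minimal sketch, assuming a recent PEFT release; the checkpoint id, rank, and target modules are placeholders:

```python
# Hypothetical sketch: LoftQ initialization via Hugging Face PEFT.
# The base weights are loaded in full precision; LoftQ then picks LoRA A/B
# matrices that compensate for the error introduced by quantizing W,
# instead of initializing them independently of the quantization.
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
base_model = AutoModelForCausalLM.from_pretrained(model_id)  # not pre-quantized

loftq_config = LoftQConfig(loftq_bits=4)  # simulate 4-bit quantization of W

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    init_lora_weights="loftq",   # quantization-aware LoRA initialization
    loftq_config=loftq_config,
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```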
What are some alternatives?
relora - Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Zicklein - Finetuning instruct-LLaMA on german datasets.
torch-adapters - Small Library of PyTorch Adaptation modules
discus - A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
punica - Serving multiple LoRA finetuned LLM as one
RingAttention - Transformers with Arbitrarily Large Context
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
llama-peft-tuner - Tune LLaMa-7B on Alpaca Dataset using PEFT / LORA Based on @zphang's https://github.com/zphang/minimal-llama scripts.
LLM-Finetuning-Hub - Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. [Moved to: https://github.com/georgian-io/LLM-Finetuning-Toolkit]