LongLoRA vs LLM-Finetuning-Hub
| | LongLoRA | LLM-Finetuning-Hub |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 2,473 | 638 |
| Star growth (month over month) | 3.6% | - |
| Activity | 9.1 | 9.5 |
| Last commit | 3 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LongLoRA
- Ask HN: AI/ML papers to catch up with current state of AI?
LongAlpaca / One of many ways to extend context, and a useful dataset / https://arxiv.org/abs/2309.12307
- Aurelian: 70B 32K story-writing (and more) [Alpha]
Finally, LongLoRA is a method to reduce the amount of computation over a large context; it also specifically trains the embed and norm layers fully, that is, with no quantization or LoRA for those. They are small layers that are easy to train without much VRAM cost, but the LongLoRA authors observed that they have a big impact on long-context performance. I am not using their computation-reduction methods, but I am following their suggestion to train the embed/norm layers fully, as in the sketch below.
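For readers who want to try that suggestion, here is a minimal, hypothetical sketch using Hugging Face PEFT's `modules_to_save` option, which keeps full (non-LoRA) trainable copies of the named modules; the base model and LoRA hyperparameters are illustrative assumptions, not the Aurelian configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # LoRA on attention
    # LongLoRA's extra step: keep embeddings and norms fully trainable
    # (no LoRA adapters for these small layers).
    modules_to_save=["embed_tokens", "norm"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # embed/norm now count as fully trainable
```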
- Why train on Yi 4K instead of 200K?
That used to be true, but methods like LongLoRA and LongQLoRA demonstrate that you can increase the context length of a foundation model; the sketch below shows the starting point such recipes build on.
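As background, here is a hedged sketch of the RoPE position-interpolation step that long-context recipes such as LongLoRA build on before fine-tuning, using the `rope_scaling` config option in Hugging Face transformers; the model name and scaling factor are illustrative assumptions, not a recipe from the linked thread:

```python
from transformers import AutoModelForCausalLM

# Stretch RoPE positions by 8x so a 4K-context base model can attend over
# ~32K tokens; long-context fine-tuning then adapts the weights to the
# stretched positions (exact behavior varies by transformers version).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    rope_scaling={"type": "linear", "factor": 8.0},
)
```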
- Using Overfitting to Debug My LLM [P]
For reference, I am using the LongLoRA SFT implementation to fine-tune a CodeLLaMA model on a code-generation instruction dataset. I have also attached my evaluation code in the thread.
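The attached evaluation code is not reproduced in this excerpt. As a stand-in, here is a generic, hypothetical sketch of the overfitting sanity check the title refers to: train repeatedly on one small fixed batch and confirm the loss collapses toward zero, which flags training-loop or data-pipeline bugs when it does not. The model, prompt, and hyperparameters are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "codellama/CodeLlama-7b-Instruct-hf"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16).cuda()
model.train()

# One tiny fixed batch the model should be able to memorize.
batch = tokenizer(
    ["### Instruction: add two numbers\n### Response: def add(a, b): return a + b"],
    return_tensors="pt",
).to("cuda")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

for step in range(200):  # train on the same batch over and over
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % 50 == 0:
        print(step, loss.item())  # should fall toward zero if training works
```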
LLM-Finetuning-Hub
- Zephyr-7B QLoRA Benchmark for Summarization and Classification
Hi everyone, we've been working on benchmarking different open-source LLMs. In particular, we measure the performance of these models once fine-tuned (via QLoRA) on classic NLP downstream tasks like summarization and classification. We also put particular emphasis on benchmarking inference time and cost for these models once deployed.
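For context, here is a minimal sketch of what a QLoRA fine-tuning setup like the one benchmarked typically looks like with transformers, peft, and bitsandbytes; the model name and hyperparameters are assumptions, not the repo's exact configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the LoRA adapters train
```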
- Show HN: Finetuning LLMs: Open-source vs. Closed-source
Hello all,
I have been working on benchmarking different LLMs -- both open-source and closed-source.
Repo: https://github.com/georgian-io/LLM-Finetuning-Hub
Specifically, I am comparing their out-of-the-box capabilities (prompting) with their fine-tuned counterparts!
So far, the following models have been benchmarked:
Open-Source:
- FLaNK Stack Weekly for 12 September 2023
- [P][R] Finetune LLMs via the Finetuning Hub
- Show HN: Leverage Falcon 7B blog post
- Finetuning with QLoRA
I evaluate how Falcon does on classification tasks when compared to Bert and Distilbert.
Moreover, I talk about different ways you can deploy the model, and the associated costs!
The code for all of my experiments is available at: https://github.com/georgian-io/LLM-Finetuning-Hub
Happy reading and learning!
- Show HN: LLM Finetuning Hub
What are some alternatives?
relora - Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
ChatDev - Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
Zicklein - Finetuning instruct-LLaMA on german datasets.
bedframe - Your Browser Extension Development Framework
torch-adapters - Small Library of PyTorch Adaptation modules
wasmer-java - ☕ WebAssembly runtime for Java
discus - A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
llm-toys - Small(7B and below) finetuned LLMs for a diverse set of useful tasks
punica - Serving multiple LoRA finetuned LLM as one
go-llama2 - Llama 2 inference in one file of pure Go
RingAttention - Transformers with Arbitrarily Large Context
sqllineage - SQL Lineage Analysis Tool powered by Python