LLaMA-Adapter
Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters [Moved to: https://github.com/OpenGVLab/LLaMA-Adapter] (by ZrrSkywalker)
alpaca-lora
Instruct-tune LLaMA on consumer hardware (by tloen)
| | LLaMA-Adapter | alpaca-lora |
|---|---|---|
| Mentions | 16 | 107 |
| Stars | 4,021 | 18,238 |
| Growth | - | - |
| Activity | 9.4 | 3.6 |
| Last commit | 11 months ago | 3 months ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaMA-Adapter
Posts with mentions or reviews of LLaMA-Adapter.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-09.
- Are you selfhosting a ChatGPT alternative?
- Best general purpose model for commercial license?
Either LLaMA with Alpaca LoRA 65B, or the LLaMA-Adapter-V2-65B chat demo. I haven't seen any tests of the 65B LLaMA-Adapter-V2, but the authors claim it is on par with ChatGPT when the comparison is judged by GPT-4.
- LLaMA-Adapter V2: fine-tuned LLaMA 65B for visual instruction, and LLaMA Chat65B trained with ShareGPT data for chatting. The Chat65B model has been released.
Chat65B: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b
- LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model besides adapters. Secondly, we propose an early fusion strategy to feed visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm of image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
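To make the "unlocking more learnable parameters" step concrete, here is a minimal PyTorch sketch (my own illustration, not the authors' code) of freezing a model and re-enabling only its norm and bias parameters; the name-matching heuristic is an assumption about how the parameters are named.

```python
import torch.nn as nn

def unlock_bias_norm(model: nn.Module) -> None:
    """Freeze everything, then re-enable only bias and norm parameters."""
    for param in model.parameters():
        param.requires_grad = False
    for name, param in model.named_parameters():
        # Heuristic: LLaMA-style models name these "...bias" / "...norm...".
        if "bias" in name or "norm" in name:
            param.requires_grad = True

# Only a tiny fraction of weights ends up trainable, which is the point:
# trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
```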
- Surpasses ChatGPT on Some Tasks
- [News] This language model surpasses ChatGPT on some prompts
- Meet LLaMA-Adapter: A Lightweight Adaption Method For Fine-Tuning Instruction-Following LLaMA Models Using 52K Data Provided By Stanford Alpaca
Quick read: https://www.marktechpost.com/2023/03/31/meet-llama-adapter-a-lightweight-adaption-method-for-fine-tuning-instruction-following-llama-models-using-52k-data-provided-by-stanford-alpaca/
Paper: https://arxiv.org/pdf/2303.16199.pdf
GitHub: https://github.com/ZrrSkywalker/LLaMA-Adapter
- LLaMA-Adapter: Efficient Fine-Tuning of LLaMA
- [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Found relevant code at https://github.com/ZrrSkywalker/LLaMA-Adapter, along with all other code implementations.
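For intuition, here is a simplified PyTorch sketch of the zero-init attention idea; it is a loose illustration rather than the repository's implementation (the paper gates the attention scores over the prompt positions, which this sketch collapses into gating the prompt's attention output):

```python
import torch
import torch.nn as nn

class GatedPromptAttention(nn.Module):
    """Learnable prompt whose influence is gated by a zero-initialized scalar."""

    def __init__(self, dim: int, prompt_len: int = 10):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero init: no effect at step 0

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, dim); attend from each token to the prompt.
        scores = hidden @ self.prompt.T / hidden.size(-1) ** 0.5
        prompt_out = scores.softmax(dim=-1) @ self.prompt
        # tanh(0) = 0, so training starts from the unmodified model and the
        # adapter's contribution fades in as the gate learns to open.
        return hidden + self.gate.tanh() * prompt_out
```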
- You can now fine-tune LLaMA to follow instructions within ONE hour
alpaca-lora
Posts with mentions or reviews of alpaca-lora.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-09-11.
- How to deal with loss for SFT for CausalLM
Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
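For context, causal-LM SFT loss is typically computed by shifting the logits against the labels and ignoring positions labeled -100; this is a generic sketch of that pattern, not the exact finetune.py code:

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq, vocab); labels: (batch, seq)
    shift_logits = logits[:, :-1, :].contiguous()  # predict token t+1 from t
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,  # masked positions contribute no loss
    )
```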
- How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and we will be using Beam for GPU compute.
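The heart of such a script is a LoRA configuration applied to the base model. A hedged sketch with the Hugging Face peft library follows; the model id and hyperparameters are illustrative assumptions, not the guide's exact values:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed id
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```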
- Fine-tuning LLMs with LoRA: A Gentle Introduction
Implement the code in the Llama LoRA repo in a script we can run locally.
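The mechanism itself fits in a few lines: a frozen weight W is augmented with a low-rank update scaled by alpha/r, and only the two small matrices train. A minimal from-scratch sketch (my illustration, not the repo's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                        # freeze W and bias
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # B starts at zero, so the wrapped layer initially matches the base.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```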
- Newbie here - trying to install Alpaca Lora and hitting an error
Hi all - relatively new to GitHub / programming in general, and I wanted to try to set up Alpaca Lora locally. Following the guide here: https://github.com/tloen/alpaca-lora
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
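For reference, loading a model with bitsandbytes 4-bit quantization through transformers looks roughly like this; the model id is a placeholder, and the repo above may wire things up differently:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b",  # placeholder model id
    quantization_config=bnb_config,
)
```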
- FLaNK Stack Weekly for 20 June 2023
- Converting to GGML?
If instead you want to apply a LoRA to a PyTorch model, a lot of people use this script to apply the LoRA to the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
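An alternative to that script is peft's built-in merge, which folds the low-rank update into the base weights before quantization; a hedged sketch with placeholder paths:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")  # placeholder
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")    # placeholder
merged = model.merge_and_unload()  # fold scale * B @ A into the frozen weights
merged.save_pretrained("path/to/merged-model")
```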
- Simple LLM Watermarking - Open Llama 3b LoRA
There are a few papers on watermarking LLM output, but from what I have seen they all use complex methods of detection so the watermark goes unseen by the end user, detectable only by algorithm. I believe a more overt system of watermarking might also be beneficial. One simple method I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing all instances of the "." character in the outputs with "ι". The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training for two epochs of 400 steps each, and 100% substitution isn't really required.
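Detection for such an overt watermark can be equally simple: measure how often the marker character appears where a "." normally would. A sketch (my illustration, not part of the post):

```python
def substitution_rate(text: str, marker: str = "ι") -> float:
    """Fraction of sentence terminators carrying the watermark marker."""
    marked = text.count(marker)
    unmarked = text.count(".")
    total = marked + unmarked
    return marked / total if total else 0.0

# A rate far above the marker's natural background frequency suggests the
# text came from the watermarked model.
```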
- text-generation-webui's "Train Only After" option
I am kind of new to finetuning LLMs and am not able to understand what this option exactly refers to. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora, though.
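In alpaca-lora, train_on_inputs=False means the prompt portion of each example is masked out of the loss by setting its labels to -100, so only the response tokens are trained on; a toy illustration with made-up token ids:

```python
prompt_ids   = [101, 202, 303]   # tokens of the instruction/prompt (made up)
response_ids = [404, 505, 606]   # tokens of the desired answer (made up)

input_ids = prompt_ids + response_ids
# Labels of -100 are ignored by the loss, so the model learns to produce
# the response without being trained to reproduce the prompt.
labels = [-100] * len(prompt_ids) + response_ids
```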
- Learning sources on working with local LLMs
Read the paper and also: https://github.com/tloen/alpaca-lora