alpaca-lora vs stanford_alpaca
| | alpaca-lora | stanford_alpaca |
|---|---|---|
| Mentions | 107 | 108 |
| Stars | 18,073 | 28,602 |
| Growth | - | 1.4% |
| Activity | 3.6 | 2.0 |
| Latest commit | about 1 month ago | 17 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alpaca-lora
- How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will use the Alpaca-LoRA training script, which automates the process of fine-tuning the model, and for the GPU we will use Beam.
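For readers who want to see the shape of such a run, here is a minimal sketch of LoRA fine-tuning with the Hugging Face transformers and peft libraries; the model name, dataset, and hyperparameters are illustrative assumptions, not the exact values used by the blog post or the alpaca-lora script.

```python
# Minimal LoRA fine-tuning sketch. The model name, dataset, and hyperparameters
# below are illustrative placeholders, not alpaca-lora's defaults.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Llama-2-7b-hf"  # assumed; this checkpoint is gated
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# Wrap the frozen base model with small trainable LoRA adapters.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Tokenize an instruction dataset (an Alpaca-style dataset is one option).
data = load_dataset("yahma/alpaca-cleaned", split="train")

def tokenize(example):
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['output']}")
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=3e-4, fp16=True),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights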
- Fine-tuning LLMs with LoRA: A Gentle Introduction
Implement the code from the Llama LoRA repo in a script we can run locally.
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
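As a rough illustration of the quantized-training idea behind packages like alpaca_lora_4bit, here is a hedged sketch that loads a base model in 4-bit with bitsandbytes and attaches LoRA adapters via peft; the checkpoint name and settings are placeholder assumptions, and this is not the API of the GPTQ or alpaca_lora_4bit packages themselves.

```python
# Sketch: load a base model in 4-bit with bitsandbytes, then attach LoRA
# adapters for training. Model name and settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "huggyllama/llama-7b"  # assumed placeholder checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

# Prepare the quantized model for adapter training (casts norms, enables
# gradient checkpointing), then add the trainable LoRA layers.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))
model.print_trainable_parameters()
```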
- FLaNK Stack Weekly for 20 June 2023
- Learning sources on working with local LLMs
Read the paper and also: https://github.com/tloen/alpaca-lora
- Oobabooga for Windows Guide and Alpaca-Lora
- samantha-7b
- Creating a LoRA from unstructured text
- With a single 3090, which model is finetune-able and has decent reasoning ability?
Well, I've not gone through the whole process to the end yet, but using the instructions from https://github.com/tloen/alpaca-lora I was just now able to start a fine-tuning run on a LLaMA 13B model; it says it will take 15 hours.
- [D] An ELI5 explanation for LoRA - Low-Rank Adaptation.
Repos like https://github.com/tloen/alpaca-lora and https://github.com/Lightning-AI/lit-llama use LoRA as a method to fine-tune LLaMA models.
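To make the ELI5 concrete, here is a toy sketch of the core LoRA trick (not the loralib or peft implementation): the pretrained weight stays frozen and only two small low-rank matrices are trained, so the number of trainable parameters drops by orders of magnitude.

```python
# Toy illustration of the LoRA idea: keep the pretrained weight W frozen and
# learn only a low-rank update, so y = x @ W.T + scale * x @ A.T @ B.T.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False                 # frozen pretrained weight
        self.lora_A = nn.Linear(in_features, r, bias=False)    # down-projection
        self.lora_B = nn.Linear(r, out_features, bias=False)   # up-projection
        nn.init.zeros_(self.lora_B.weight)                     # update starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_B(self.lora_A(x))

layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # ~65K of ~16.8M
```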
stanford_alpaca
- How Open is Generative AI? Part 2
Alpaca is an instruction-oriented LLM derived from LLaMA, enhanced by Stanford researchers with a dataset of 52,000 instruction-following examples sourced from OpenAI's InstructGPT via the self-instruct method. The extensive self-instruct dataset, the details of data generation, and the model refinement code were publicly disclosed. The model complies with the licensing requirements of its base model. Because InstructGPT was used for data generation, it also adheres to OpenAI's usage terms, which prohibit creating models that compete with OpenAI. This illustrates how dataset restrictions can indirectly affect the resulting fine-tuned model.
- Ask HN: AI/ML papers to catch up with current state of AI?
- Fine-tuning LLMs with LoRA: A Gentle Introduction
In this article, we're going to experiment with LoRA and fine-tune Llama Alpaca using commercial hardware.
- Creating a new Finetuned model
Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the minimum, openchat/openchat · Hugging Face; there are a lot more examples.)
- Bye bye Bing
- The idea maze for AI startups (2015)
I think there's a new approach for “How do you get the data?” that wasn't available when this article was written in 2015. The new text and image generative models can now be used to synthesize training datasets.
I was working on a typing autocorrect project and needed a corpus of "text messages". Most of the traditional NLP corpora, like those available through NLTK [0], aren't suitable. But it was easy to script ChatGPT to generate thousands of believable text messages by throwing random topics at it.
Similarly, you can synthesize a training dataset by giving GPT the outputs/labels and asking it to generate a variety of inputs. For sentiment analysis... "Give me 1000 negative movie reviews" and "Now give me 1000 positive movie reviews".
The Alpaca folks used GPT-3 to generate high-quality instruction-following datasets [1] based on a small set of human samples.
Etc.
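A hedged sketch of that synthesis pattern with the OpenAI Python client is below; the model name, prompt wording, and output format are assumptions for illustration, not the Alpaca team's actual self-instruct pipeline.

```python
# Sketch: synthesize labeled training data by prompting a chat model.
# The model name, prompts, and output format are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_reviews(sentiment: str, n: int) -> list[dict]:
    """Ask the model for n short movie reviews with a known label."""
    prompt = (f"Write {n} short, distinct {sentiment} movie reviews, "
              "one per line, with no numbering.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    lines = [line.strip() for line in resp.choices[0].message.content.splitlines()
             if line.strip()]
    return [{"text": line, "label": sentiment} for line in lines]

# Build a small synthetic sentiment dataset and write it out as JSONL.
dataset = generate_reviews("negative", 20) + generate_reviews("positive", 20)
with open("synthetic_reviews.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
```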
- [D] High-quality, open-source implementations of LLMs
Alpaca [GitHub]
- please 0.1.0 released: let GPT-4 remember CLI args
Now if only this could be used offline, eg. with alpaca https://github.com/tatsu-lab/stanford_alpaca
- Is there a Chatgpt (or other LLMs) powered application in the field of cybersecurity/privacy for end users/b2c?
If you have a strong enough computer, there are Alpaca and llama.cpp, which are both open source. They also have the best privacy feature of all: they can be run locally, offline, on your computer. I believe there are more FOSS LLMs out there too, but I don't recall them offhand.
- Does ChatGPT suck at programming for everyone or just for me?
Are you aware that you can run a pretrained LLM on just 8 GB of RAM with a single x86 CPU?
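For reference, a minimal sketch of CPU-only local inference with the llama-cpp-python bindings; the GGUF file path is a placeholder assumption, and any 4-bit quantized model small enough for roughly 8 GB of RAM would do.

```python
# Sketch: CPU-only inference with a 4-bit quantized model via llama-cpp-python.
# The GGUF file path is a placeholder; a 7B model at 4-bit fits in a few GB of RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # assumed local file
    n_ctx=2048,     # context window
    n_threads=8,    # CPU threads to use
)

out = llm(
    "### Instruction:\nExplain what LoRA fine-tuning is in one sentence.\n### Response:\n",
    max_tokens=128,
    stop=["###"],
)
print(out["choices"][0]["text"].strip())
```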
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
llama.cpp - LLM inference in C/C++
gpt4all - gpt4all: run open-source LLMs anywhere
llama - Inference code for Llama models
ggml - Tensor library for machine learning
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
dalai - The simplest way to run LLaMA on your local machine
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI