codealpaca vs alpaca_lora_4bit

| | codealpaca | alpaca_lora_4bit |
|---|---|---|
| Mentions | 20 | 41 |
| Stars | 1,373 | 528 |
| Growth | - | - |
| Activity | 4.4 | 8.6 |
| Last Commit | 12 months ago | 5 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
codealpaca
- Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark!
CodeAlpaca 7B
- OpenAI isn’t doing enough to make ChatGPT’s limitations clear
This is great!
Addressing the model limitations a bit: in the demonstration data provided to the base model, we should avoid answers that are computed or "looked up."
I've seen some of the demonstration data that people are using to train instruction-tuned models, and the models are being taught to respond by making up answers to problems they shouldn't try to compute. Incidentally, the output in this example is wrong (the snippet prints nothing on its own, and 30 / 3.14 is about 9.55, not 1.91):
{ "instruction": "What would be the output of the following JavaScript snippet?", "input": "let area = 6 * 5;\nlet radius = area / 3.14;", "output": "The output of the JavaScript snippet is the radius, which is 1.91." } [1]
The UI note would get us far for now, but demonstrations that retrieve or compute information should also be filtered out.
Symbol tuning [2] addresses the quality of demonstrations, but we can take it further by removing retrievals and computations altogether (a sketch of such a filter follows the references below).
Bonus: we can add demonstrations that teach the model to respond by telling the user/agent how to compute or retrieve the answer instead.
1: https://github.com/sahil280114/codealpaca/commit/0d265112c70...
2: https://arxiv.org/abs/2305.08298
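As an illustration of the filtering idea above, here is a minimal sketch in Python. The heuristic keyword list is an assumption for this sketch (neither project ships such a filter); the instruction/input/output fields match the CodeAlpaca JSON format quoted in the example.

```python
import json
import re

# Heuristic patterns for instructions that ask the model to compute or
# look up a concrete value -- an illustrative list, not part of codealpaca.
COMPUTE_OR_RETRIEVE = re.compile(
    r"\b(output of|result of|evaluate|calculate|compute|look up)\b",
    re.IGNORECASE,
)

def keep(example: dict) -> bool:
    """Keep a demonstration only if it doesn't compute or retrieve an answer."""
    return not COMPUTE_OR_RETRIEVE.search(example.get("instruction", ""))

# Path assumes the dataset file shipped in the codealpaca repo.
with open("data/code_alpaca_20k.json") as f:
    data = json.load(f)

filtered = [ex for ex in data if keep(ex)]
print(f"Kept {len(filtered)} of {len(data)} demonstrations")

with open("code_alpaca_filtered.json", "w") as f:
    json.dump(filtered, f, indent=2)
```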
- How to Finetune GPT Like Large Language Models on a Custom Dataset
- Ask HN: Those with success using GPT-4 for programming – what are you doing?
- Is there a Colab or guide for fine-tuning a 13B model for instruction following?
I found guides like this: https://github.com/sahil280114/codealpaca
- Can LLMs do static code analysis?
Try https://github.com/sahil280114/codealpaca, or were you trying to stick with more generalist models?
- LoRA in LLaMAc++? Converting to 4bit? How to use models that are split into multiple .bin ?
Oh, I see. That makes sense. I'm also sleep-deprived over here, so my reading comprehension is a bit low ;|. Well, in that case, check out this link: https://github.com/sahil280114/codealpaca
- Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models
Sorry for the late reply. As I said, Flan-UL2 (or Flan-T5 if you want lighter models) fine-tuned on a dataset like CodeAlpaca's [0] is probably the best solution if it's intended for commercial use (otherwise LLaMA should perform better). A rough sketch of that setup follows below.
[0]: https://github.com/sahil280114/codealpaca
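For concreteness, a minimal sketch of that recipe with the Hugging Face transformers/datasets libraries. The flan-t5-base checkpoint stands in for Flan-UL2, and the hyperparameters and output path are illustrative assumptions:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# flan-t5-base stands in for Flan-UL2; both are Apache-2.0 licensed,
# which is what makes them suitable for commercial use.
model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# CodeAlpaca ships its data as JSON records with
# instruction/input/output fields.
data = load_dataset("json", data_files="data/code_alpaca_20k.json")["train"]

def to_features(ex):
    # Concatenate instruction and optional input into a single prompt.
    prompt = ex["instruction"] + ("\n" + ex["input"] if ex["input"] else "")
    features = tokenizer(prompt, truncation=True, max_length=512)
    features["labels"] = tokenizer(
        ex["output"], truncation=True, max_length=512
    )["input_ids"]
    return features

tokenized = data.map(to_features, remove_columns=data.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-codealpaca",
        per_device_train_batch_size=8,
        num_train_epochs=3,  # illustrative hyperparameters
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```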
- CodeAlpaca – Instruction-following code generation model
alpaca_lora_4bit
- Open Inference Engine Comparison | Features and Functionality of TGI, vLLM, llama.cpp, and TensorRT-LLM
For training there is also https://github.com/johnsmith0031/alpaca_lora_4bit
- Quantized 8k Context Base Models for 4-bit Fine Tuning
I've been trying to fine-tune an erotica model on some large-context chat history (reverse proxy logs) and a literotica-instruct dataset I made, with a max context of 8k. The large context size eats a lot of VRAM, so I've been trying to find the most efficient way to experiment, since I'd like to do multiple runs to test some ideas. So I'm going to try https://github.com/johnsmith0031/alpaca_lora_4bit, which is supposed to train faster and use less memory than QLoRA.
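For reference, the LoRA side of this kind of setup looks roughly like the following with the Hugging Face peft library. The model path and hyperparameters are placeholder assumptions; alpaca_lora_4bit wires the same adapter idea into a 4-bit GPTQ base, which is where the extra VRAM savings come from.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder checkpoint path; alpaca_lora_4bit would load a
# 4-bit GPTQ-quantized base here instead of an fp16 one.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-base")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter matrices train
```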
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
- Do we still need the monkey patch with the ExLlama loader for LoRA?
"Using LoRAs with GPTQ-for-LLaMa: this requires using a monkey patch that is supported by this web UI: https://github.com/johnsmith0031/alpaca_lora_4bit"
- Why isn’t QLoRA being used more widely for fine-tuning models?
4-bit GPTQ LoRA training has been available since early April. I didn't see any comparison to it in the QLoRA paper, or even a mention, which makes me think the authors weren't aware it already existed.
- Fine-tuning with alpaca_lora_4bit on 8k context SuperHOT models
- Any guide/intro to fine-tuning anywhere?
https://github.com/johnsmith0031/alpaca_lora_4bit is still the SOTA: faster than QLoRA, and it trains on a GPTQ base.
- "Samantha-33B-SuperHOT-8K-GPTQ" now that's a great name for a true model.
I would also like to know how one would fine-tune this in 4-bit. I think one could take the 8K PEFT merged with the LLaMA weights, quantize it to 4-bit, and then train with https://github.com/johnsmith0031/alpaca_lora_4bit?
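A rough sketch of the merge step described here, using the Hugging Face peft library. The checkpoint paths are placeholders; the actual 4-bit quantization and subsequent training would be done with GPTQ-for-LLaMa and alpaca_lora_4bit respectively.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholder checkpoint paths for the workflow described above.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-33b")
model = PeftModel.from_pretrained(base, "path/to/superhot-8k-lora")

# Fold the LoRA weights into the base model; the merged fp16 checkpoint
# can then be quantized to 4-bit with GPTQ-for-LLaMa and trained further
# with alpaca_lora_4bit.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-fp16")
```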
- Help with QLoRA
I was under the impression that you just git clone this repo into text-generation-webui/repositories (so you would have GPTQ_for_Llama and alpaca_lora_4bit in the folder) and then load with the monkey patch. Is that not correct? I also tried downloading alpaca_lora_4bit on its own, git cloning text-gen-webui within it, installing requirements.txt for both, and running with the monkey patch. I was following the alpaca_lora_4bit sections "Text Generation Webui Monkey Patch" and "monkey patch inside webui".
- Best uncensored model for an A6000
I don't have any familiarity with ESXi, but I can say that there are quite a few posts about people doing it on Proxmox. I currently have a machine with 2x 3090s passing through to VMs. When I'm training, I pass them both through to the same VM and can do 4-bit LoRA training on LLaMA 33B using https://github.com/johnsmith0031/alpaca_lora_4bit. Then, at inference time, I run a single card in a different VM and have an extra card available for experimentation.
What are some alternatives?
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
flash-attention - Fast and memory-efficient exact attention
alpaca-electron - The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
llm-code - An OpenAI LLM based CLI coding assistant.
StableLM - StableLM: Stability AI Language Models
llm-humaneval-benchmarks
safetensors - Simple, safe way to store and distribute tensors
awesome-ai-coding - Awesome AI Coding
alpaca-lora - Instruct-tune LLaMA on consumer hardware
openplayground-api - A reverse engineered Python API wrapper for OpenPlayground (nat.dev)
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.