qlora
basaran
| | qlora | basaran |
|---|---|---|
| Mentions | 80 | 22 |
| Stars | 9,344 | 1,281 |
| Growth | - | - |
| Activity | 7.4 | 10.0 |
| Latest commit | 7 months ago | 3 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
qlora
- FLaNK Stack Weekly for 30 Oct 2023
-
I released Marx 3B V3.
Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
-
Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
https://github.com/artidoro/qlora
The tools and mechanisms for getting a model to do what you want change constantly and quickly. Build and understand a notebook yourself, and keep dependencies to a minimum; you will inevitably need to swap them out.
-
Yet another QLoRA tutorial
My own project is still in raw generated form, and this makes me think about trying qlora's scripts; it gives me some confidence I can get it to work now that someone else has carved a path and charted the map. I was going to target llamatune, which was mentioned here the other day.
-
Creating a new Finetuned model
Most papers I read used at least a thousand examples, and in several cases 10,000, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the low end, openchat/openchat on Hugging Face; there are many more examples.)
-
[R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
4-bit quantization training mainly refers to QLoRA. Simply put, QLoRA quantizes the weights of the LLM into 4-bit for storage, while dequantizing them into 16-bit during the training process to preserve training precision. This method significantly reduces GPU memory overhead during training (training speed should not change much), and it is well suited to combining with parameter-efficient methods. However, the original paper was designed for single-modal LLMs, and the code has already been wrapped in HuggingFace's library, so we extracted the core code from HuggingFace's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in the LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which comes to roughly a dozen lines of code.
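As a rough sketch of that linear-layer swap (not LaVIN's actual code; it assumes bitsandbytes >= 0.39, and the function name and skip list are illustrative):

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

def replace_linear_with_4bit(module: nn.Module, skip=("lm_head",)) -> None:
    """Recursively swap every nn.Linear for a bitsandbytes 4-bit layer.

    Weights are stored in 4-bit NF4 and dequantized to 16-bit on the fly
    in the forward pass, which is the QLoRA storage/compute split described above.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and name not in skip:
            setattr(module, name, bnb.nn.Linear4bit(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                compute_dtype=torch.bfloat16,  # 16-bit compute precision
                quant_type="nf4",              # NormalFloat4 from the QLoRA paper
            ))
        else:
            replace_linear_with_4bit(child, skip)

# The swapped layers quantize their weights when the model is moved to the GPU,
# e.g. after model.load_state_dict(...) followed by model.cuda().
```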
-
[D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
There have been some new developments like QLoRA which help fine-tune LLMs without updating all of the weights.
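For context, this is roughly what that looks like with the Hugging Face stack; a minimal sketch assuming transformers, peft, and bitsandbytes, where the base model name and target modules are illustrative choices:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with its weights frozen in 4-bit NF4 storage.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # illustrative base model
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

# Attach small trainable LoRA adapters; only these get gradient updates.
model = get_peft_model(model, LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which linear layers get adapters
    task_type="CAUSAL_LM",
))
model.print_trainable_parameters()  # a tiny fraction of the full model
```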
-
Finetune MPT-30B using QLORA
This might be helpful: https://github.com/artidoro/qlora/issues/10
-
is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
Curious, since the QLoRA paper only reports the LoRA/QLoRA comparison against full fine-tuning for the small 7B models; for 13B/33B/65B it does not (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more about the efficacy or disadvantages of LoRA.
-
Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
This might not be the appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train with QLoRA. You just prepare your own dataset and run the following command:

```
python qlora.py --model_name_or_path <model> --dataset="path/to/your/dataset" --dataset_format="self-instruct"
```

This only works for a handful of dataset formats, but every format has to have input-output pairs, so the dataset JSON has to look like this:

```json
[
  { "input": "something", "output": "something" },
  { "input": "something", "output": "something" }
]
```
basaran
- OpenLLM
-
Langchain and self hosted LLaMA hosted API
What are the current best "no reinventing the wheel" approaches to have Langchain use an LLM through a locally hosted REST API, the likes of Oobabooga or hyperonym/basaran with streaming support for 4-bit GPTQ?
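One common pattern, sketched below under the assumption of a pre-0.1 langchain API and a placeholder local URL, is to point LangChain's stock OpenAI wrapper at the server's OpenAI-compatible endpoint:

```python
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Basaran (and similar servers) expose an OpenAI-compatible /v1 endpoint,
# so the stock OpenAI wrapper works once the base URL is overridden.
llm = OpenAI(
    openai_api_base="http://localhost:80/v1",  # placeholder local address
    openai_api_key="dummy",                    # the client insists on a value
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("What is QLoRA?")  # tokens stream to stdout as they arrive
```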
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM-tools on github [1]
A few thoughts:
* allow for custom endpoint URLs; this way people can use open-source LLMs with an OpenAI-compatible backend like basaran [2] or llama-api-server [3]
* look into better embedding methods for info-retrieval like InstructorEmbeddings or Document Summary Index
* Don't use a single embedding per content item, use multiple to increase retrieval quality
1 https://github.com/underlines/awesome-marketing-datascience/...
-
1-Jun-2023
open-source alternative to the OpenAI text completion API (https://github.com/hyperonym/basaran)
- Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
- Basaran is an open-source alternative to the OpenAI text completion API
-
Ask HN: What's the best self hosted/local alternative to GPT-4?
Guanaco-65B[0] using Basaran[1] for your OpenAI compatible API. You can use any ChatGPT front-end which lets you change the OpenAI endpoint URL.
[0] A 4-bit finetune of LLaMA-65B by Tim Dettmers
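Because Basaran mimics the completion API, the stock openai Python client can talk to it directly. A minimal sketch, assuming openai-python 0.x and placeholder address/model values:

```python
import openai

openai.api_base = "http://localhost:80/v1"  # placeholder Basaran address
openai.api_key = "dummy"                    # Basaran does not validate keys

# Stream a completion exactly as you would against api.openai.com.
for chunk in openai.Completion.create(
    model="timdettmers/guanaco-65b",  # illustrative model name
    prompt="The capital of France is",
    max_tokens=32,
    stream=True,
):
    print(chunk.choices[0].text, end="", flush=True)
```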
-
Are all the finetunes stupid?
For lm-eval, I think you'd either need to take GPTQ's inference script and shim it into a model: https://github.com/EleutherAI/lm-evaluation-harness/tree/master/lm_eval/models or you might be able to use a project like https://github.com/hyperonym/basaran and then you could use the gpt3 model...
-
Using the API in Node
There are also:
- Basaran repo: "Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models." "...Compatibility with OpenAI API and client libraries..."
- llama-cpp-python repo: "Simple Python bindings for @ggerganov's llama.cpp library..." "...OpenAI-like API..."
-
Researcher looking for help with how to prepare a finetuning dataset for models like Bloomz and Cerebras-GPT
I want to start with a totally freely available model, so again, that excludes things like LLaMA, where the weights are only available through a waitlist. The two models that most get my attention and (I think, and hope) fit my criteria of open availability are Cerebras-GPT (13B) and Bloomz (7B). The tools to process and fine-tune that seem most feasible to me, from my limited knowledge, are xturing and basaran.
What are some alternatives?
alpaca-lora - Instruct-tune LLaMA on consumer hardware
text-generation-inference - Large Language Model Text Generation Inference
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
openai-chatgpt-opentranslator - Python command that uses openai to perform text translations
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
AutoGPTQ - An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
ggml - Tensor library for machine learning
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
alpaca_lora_4bit
llm-foundry - LLM training code for MosaicML foundation models
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM