| | text-embeddings-inference | qlora |
|---|---|---|
| Mentions | 3 | 80 |
| Stars | 2,073 | 9,500 |
| Growth | 10.6% | - |
| Activity | 8.9 | 7.4 |
| Latest commit | 4 days ago | 8 months ago |
| Language | Rust | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
text-embeddings-inference
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
Worth noting that this also impacts the great https://github.com/huggingface/text-embeddings-inference, which allows anyone to run state-of-the-art embeddings with great performance.
- FLaNK Stack Weekly for 30 Oct 2023
- Fast inference for text models using Rust
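As a quick illustration of the embedding-serving mention above, here is a minimal sketch of calling a locally running text-embeddings-inference server over HTTP. The host, port, and the /embed route with an {"inputs": ...} payload are assumptions based on the project's documented API and may differ in your deployment or version; treat this as an example client, not the project's official one.

```python
import requests

# Assumed local endpoint for a running text-embeddings-inference container;
# host, port, and route may differ in your deployment.
TEI_URL = "http://localhost:8080/embed"

def embed(texts):
    """POST a batch of strings and return one embedding vector per string."""
    response = requests.post(TEI_URL, json={"inputs": texts}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    vectors = embed(["What is QLoRA?", "Fast inference for text models using Rust"])
    print(f"{len(vectors)} embeddings, dimension {len(vectors[0])}")
```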
qlora
- FLaNK Stack Weekly for 30 Oct 2023
- I released Marx 3B V3.
Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
- Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
https://github.com/artidoro/qlora
The tools and mechanisms for getting a model to do what you want are changing ever so quickly. Build and understand a notebook yourself, and reduce dependencies; you will need to switch them.
- Yet another QLoRA tutorial
My own project is still in raw generated form right now, and this makes me think about trying qlora's scripts: it gives me some confidence that I should be able to get it to work out, now that someone else has carved a path and charted the map. I was going to target llamatune, which was mentioned here the other day.
- Creating a new Finetuned model
Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the minimum end, openchat/openchat · Hugging Face; there are many more examples.)
- [R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
4-bit quantization training mainly refers to QLoRA. Simply put, QLoRA quantizes the weights of the LLM into 4-bit for storage, while dequantizing them into 16-bit during the training process to preserve training precision. This method significantly reduces GPU memory overhead during training (training speed should not vary much), and it is well suited to being combined with parameter-efficient methods. However, the original paper was designed for single-modal LLMs and the code has already been wrapped in HuggingFace's library, so we extracted the core code from HuggingFace's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in the LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which is roughly a dozen lines of code.
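The comment above describes the store-in-4-bit / compute-in-16-bit scheme and notes that HuggingFace has already wrapped it. As a rough sketch of what that wrapped path looks like (the model name is a placeholder and the exact flags can differ between library versions), loading a causal LM with NF4 quantization via transformers and bitsandbytes can look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Weights are stored in 4-bit NF4 and dequantized to bfloat16 on the fly
# during forward/backward, which is the trade-off described in the comment above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_name = "huggyllama/llama-7b"  # placeholder checkpoint, not LaVIN's actual base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # places the quantized layers across available devices
)
```

This is the library-wrapped form of the "replace all linear layers with 4-bit quantized layers" idea the comment describes.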
- [D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
There have been some new developments like QLoRA which help fine-tune LLMs without updating all the weights.
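To make the "without updating all the weights" point concrete, here is a minimal sketch of attaching LoRA adapters with the peft library; the model name, rank, alpha, and target module names are illustrative assumptions and vary by model family.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder model

# Illustrative hyperparameters; target_modules must match the base model's layer names.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```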
- Finetune MPT-30B using QLORA
This might be helpful: https://github.com/artidoro/qlora/issues/10
- is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
Curious, since the QLoRA paper only reports the LoRA/QLoRA comparison against full fine-tuning for small 7B models; for 13B/33B/65B it does not do so (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more on the efficacy or disadvantages of LoRA.
- Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
This might not be an appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train QLoRA. You just prepare your own dataset and run the following command: python qlora.py --model_name_or_path --dataset="path/to/your/dataset" --dataset_format="self-instruct" This is only available for several dataset formats, but every dataset format has to have input-output pairs, so the dataset JSON format has to look like this: [ { "input": "something", "output": "something" }, { "input": "something", "output": "something" } ]
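As a small, self-contained illustration of the input/output shape described in that answer (the file name and record contents are made up), writing a dataset in that form from Python could look like this:

```python
import json

# Toy records in the "input"/"output" shape the answer above describes.
dataset = [
    {
        "input": "Explain QLoRA in one sentence.",
        "output": "QLoRA fine-tunes small LoRA adapters on top of a 4-bit quantized base model.",
    },
    {
        "input": "What fields does each record need?",
        "output": "An 'input' field and an 'output' field.",
    },
]

# Write the list to a JSON file that --dataset can point at.
with open("my_dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)

# Then, per the answer above, something like:
#   python qlora.py --model_name_or_path <your base model> \
#       --dataset="my_dataset.json" --dataset_format="self-instruct"
```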
What are some alternatives?
llama-node - Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp and rwkv.cpp; works locally on your laptop CPU and supports llama/alpaca/gpt4all/vicuna/rwkv models.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
smartgpt - A program that provides LLMs with the ability to complete complex tasks using plugins.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
auto-rust - auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing procedural macros.
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
floneum - A toolkit for controllable, private AI on consumer hardware in rust
ggml - Tensor library for machine learning
openv0 - AI generated UI components
alpaca_lora_4bit
CSGHub - CSGHub is an open-source large-model asset platform, similar to an on-premise Hugging Face, that helps manage datasets, model files, code, and more. It is an open-source, trustworthy platform for governing the assets involved in the lifecycle of LLMs and LLM applications (datasets, model files, code, etc.), providing private-Hugging-Face-style functionality and managing LLM assets the way OpenStack Glance manages VM images, Harbor manages container images, and Sonatype Nexus manages artifacts. Feedback and stars ⭐️ are welcome.
llm-foundry - LLM training code for Databricks foundation models