h2o-llmstudio vs peft

| | h2o-llmstudio | peft |
|---|---|---|
| Mentions | 13 | 26 |
| Stars | 3,614 | 13,962 |
| Growth | 3.3% | 4.7% |
| Activity | 9.3 | 9.7 |
| Latest commit | about 18 hours ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
h2o-llmstudio
- Paid dev gig: develop a basic LLM PEFT finetuning utility
- building LLM model to answer question
Vector databases are probably a good place to start, though you've already tried LlamaIndex. You might want to try https://github.com/h2oai/h2o-llmstudio and https://github.com/h2oai/h2ogpt.
- [P] Uptraining a pretrained model using company data?
- Permissive LLaMA 7b chat/instruct model
Training framework: https://github.com/h2oai/h2o-llmstudio
- Is what I need possible currently?
Check out LLM Studio for fine tuning LLMs. Open source: https://github.com/h2oai/h2o-llmstudio
- FLaNK Stack Weekly for 30 April 2023
- FLaNK Stack Weekly for 24 April 2023
- GitHub - h2oai/h2o-llmstudio: H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs
- New Open Source Framework and No-Code GUI for Fine-Tuning LLMs: H2O LLM Studio
- Can an average person learn how to build a LLM model?
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There isn't a lot of great content out there making this clear, but basically all that matters for basic fine-tuning is how much VRAM you have. Since the 3090 and 4090 both have 24GB of VRAM, they're both pretty decent fine-tuning cards; you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft).
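The 24 GB claim can be sanity-checked with rough arithmetic. A back-of-envelope sketch (the byte counts and adapter size are illustrative assumptions for a QLoRA-style setup with a 4-bit base model, fp16 adapters, and Adam states — not measured figures):

```python
# Rough, illustrative VRAM estimate for QLoRA-style fine-tuning.
# Ignores activations and the KV cache, so treat it as a lower bound.

def estimate_vram_gb(n_params_b: float, weight_bits: int = 4,
                     lora_params_m: float = 40.0) -> float:
    """Very rough VRAM lower bound in GB for PEFT fine-tuning."""
    base = n_params_b * 1e9 * weight_bits / 8       # quantized base weights
    adapters = lora_params_m * 1e6 * 2              # LoRA weights in fp16
    optimizer = lora_params_m * 1e6 * 4 * 3         # ~3 fp32 Adam/state copies
    return (base + adapters + optimizer) / 1e9

print(round(estimate_vram_gb(13), 1))   # ~7.1 GB for a 13B model at 4-bit
```

By this estimate a 4-bit 13B base model plus small LoRA adapters sits well under 24 GB, leaving headroom for activations, which is consistent with the comment above.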
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
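Conceptually, prompt tuning just learns a handful of "virtual token" embeddings that are prepended to the input embeddings while the base model stays frozen. A minimal, model-agnostic sketch in plain PyTorch (this is not PEFT's or Whisper's actual API; the class name is made up for illustration):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepends learnable 'virtual token' embeddings to input embeddings."""
    def __init__(self, n_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Only the soft prompt is trained; the base model's parameters stay frozen.
embeds = torch.randn(2, 10, 64)           # stand-in for token embeddings
soft = SoftPrompt(n_virtual_tokens=8, embed_dim=64)
out = soft(embeds)
print(out.shape)                           # torch.Size([2, 18, 64])
```

The appeal is the parameter count: here only 8 × 64 = 512 values are trainable, regardless of the size of the frozen model behind them.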
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
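Conceptually, `merge_and_unload` folds the low-rank update into the base weight, W' = W + (alpha/r)·BA, so inference afterwards is a plain matmul with no adapter overhead. A sketch of the math in plain PyTorch (not PEFT's actual internals; the shapes and scaling follow the usual LoRA conventions):

```python
import torch

torch.manual_seed(0)
d_out, d_in, r, alpha = 16, 32, 4, 8
W = torch.randn(d_out, d_in)              # frozen base weight
A = torch.randn(r, d_in) * 0.01           # LoRA down-projection
B = torch.randn(d_out, r) * 0.01          # LoRA up-projection
scaling = alpha / r

x = torch.randn(5, d_in)

# Inference with the adapter kept separate:
y_adapter = x @ W.T + (x @ A.T @ B.T) * scaling

# "merge_and_unload": fold the adapter into the base weight once...
W_merged = W + (B @ A) * scaling
# ...then run a plain linear layer with no LoRA overhead.
y_merged = x @ W_merged.T

print(torch.allclose(y_adapter, y_merged, atol=1e-4))  # True
```

This is also why a merged model runs fine on CPU backends that know nothing about LoRA: after merging, it is just an ordinary weight matrix.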
- How to merge the two weights into a single weight?
To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, like the one shown here: https://github.com/huggingface/peft/issues/286, and call the merge code from it.
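The merge-every-N-steps idea can be sketched without the Trainer callback machinery: train a LoRA delta, and every N steps fold it into the base weight and re-zero the adapter, which leaves the network's function unchanged at the merge point. A toy pure-PyTorch loop (the names and hyperparameters are illustrative, not the PEFT API):

```python
import torch

torch.manual_seed(0)
d, r, steps, merge_every = 8, 2, 60, 20

W = torch.randn(d, d)                             # "frozen" base weight
A = (torch.randn(r, d) * 0.1).requires_grad_()    # LoRA down-projection (trained)
B = torch.zeros(d, r, requires_grad=True)         # LoRA up-projection, zero init

x = torch.randn(16, d)
target = torch.randn(16, d)
opt = torch.optim.Adam([A, B], lr=0.05)

init_loss = ((x @ (W + B @ A).T - target) ** 2).mean().item()

for step in range(1, steps + 1):
    loss = ((x @ (W + B @ A).T - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % merge_every == 0:
        with torch.no_grad():
            W += B @ A       # fold the current delta into the base weight...
            B.zero_()        # ...and restart the adapter from zero; the
                             # network's output is unchanged at this point

final_loss = loss.item()
print(init_loss, "->", final_loss)
```

Because W + B@A is held constant across the merge, the model's function doesn't jump; only the parameterization resets, so the next LoRA delta starts fresh from zero.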
What are some alternatives?
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
killport - A command-line tool to easily kill processes running on a specified port.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
HealthGPT - Query your Apple Health data with natural language 💬 🩺
alpaca-lora - Instruct-tune LLaMA on consumer hardware
bark - 🔊 Text-Prompted Generative Audio Model
dalai - The simplest way to run LLaMA on your local machine
pandas-ai - Chat with your database (SQL, CSV, pandas, polars, mongodb, noSQL, etc). PandasAI makes data analysis conversational using LLMs (GPT 3.5 / 4, Anthropic, VertexAI) and RAG.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
ue5-llama-lora - A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools.
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.