-
Theirs requires you to rewrite the whole model and replace every layer you want to apply LoRA to with its LoRA counterpart, or resort to monkey-patching. Mine uses PyTorch parametrizations to inject the LoRA logic into existing models. If your model has nn.Linear, you can call add_lora(model) to add LoRA to all the linear layers. And it's not limited to Linear: you can see how I extended it to Embedding and Conv2d in a couple lines of code. https://github.com/cccntu/minLoRA/blob/main/minlora/model.py
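To make the parametrization trick concrete, here's a rough sketch of the idea (simplified for illustration, not the exact minLoRA code; the class name, rank, and alpha defaults here are placeholders, and the real repo also handles Embedding, Conv2d, and configs):

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    """Low-rank update W + scale * (B @ A), applied lazily on weight access."""
    def __init__(self, fan_out, fan_in, rank=4, alpha=1.0):
        super().__init__()
        # A is random-initialized, B starts at zero, so the model's output
        # is unchanged at the moment LoRA is attached.
        self.lora_A = nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(fan_out, rank))
        self.scale = alpha / rank

    def forward(self, weight):
        # Called every time module.weight is read; returns the adapted weight.
        return weight + self.scale * (self.lora_B @ self.lora_A)

def add_lora(model, rank=4):
    """Attach a LoRA parametrization to every nn.Linear in the model."""
    # Materialize the list first: registering a parametrization mutates the
    # module tree, which is unsafe while iterating model.modules().
    linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
    for module in linears:
        fan_out, fan_in = module.weight.shape  # nn.Linear is (out, in)
        parametrize.register_parametrization(
            module, "weight", LoRAParametrization(fan_out, fan_in, rank)
        )

# Usage: the model keeps its original architecture; only weight access changes.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))
add_lora(model)
```

Because register_parametrization wraps weight access in place, the model's forward code and architecture never change, which is the whole appeal compared to swapping in subclassed LoRA layers.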
-
Sorry, but why do we need another package? Can't you build on top of https://github.com/huggingface/peft ?