text-generation-webui-testing
A fork of textgen that still supports V1 GPTQ, 4-bit LoRA, and GPTQ models other than llama.
If it also had QLoRA that would be ideal, but AFAIK that isn't implemented in bitsandbytes yet?
I've never tried that particular one. Everything else I threw at https://github.com/Ph0rk0z/text-generation-webui-testing/ trained through successfully.
You'd probably need to add universal support to the native functions, because it currently loads llama only. If you edit the load_llama functions in autograd_4bit.py to use generic loading like this: https://github.com/Ph0rk0z/GPTQ-Merged/blob/dual-model/src/alpaca_lora_4bit/autograd_4bit.py, it has a good chance of working. You might also need to pass trust_remote_code.
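For what the "generic loading" change would look like: a minimal sketch of swapping a hard-coded llama loader for the transformers `Auto*` classes, which dispatch on the model's own config. The function name `load_model_generic` is illustrative, not the repo's actual API, and this is not the fork's real patch.

```python
# Hypothetical sketch: replace a llama-only loader (LlamaForCausalLM) with
# generic Auto* classes so non-llama architectures can load too.
# load_model_generic is an illustrative name, not the repo's actual function.

def load_model_generic(model_path: str, trust_remote_code: bool = True):
    """Load any causal-LM architecture instead of hard-coding a Llama class."""
    # Lazy import so the sketch can be read/run without transformers installed.
    from transformers import AutoConfig, AutoModelForCausalLM

    config = AutoConfig.from_pretrained(
        model_path, trust_remote_code=trust_remote_code
    )
    # AutoModelForCausalLM picks the concrete class (Llama, GPT-NeoX, Falcon, ...)
    # from config.architectures; a Llama-specific loader would fail here.
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        config=config,
        trust_remote_code=trust_remote_code,  # needed for custom modeling code
        device_map="auto",
    )
    return model
```

The same `trust_remote_code` flag would need to be threaded through wherever the tokenizer is loaded as well.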
Finetuning on multiple GPUs works pretty much out of the box for every finetune project I've tried. Here's the best finetune codebase I've found that supports QLoRA: https://github.com/OpenAccess-AI-Collective/axolotl
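To give a feel for how axolotl is driven, here is a hedged sketch of a QLoRA training config; the exact keys and defaults vary by axolotl version, and the model/dataset names below are placeholders, not a recommendation.

```yaml
# Hypothetical axolotl QLoRA config sketch -- check the repo's examples/
# directory for real, version-accurate configs.
base_model: huggyllama/llama-7b     # placeholder model
load_in_4bit: true                  # 4-bit base weights (QLoRA)
adapter: qlora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
datasets:
  - path: tatsu-lab/alpaca          # placeholder dataset
    type: alpaca
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
```

Multi-GPU runs are launched through accelerate (something like `accelerate launch -m axolotl.cli.train config.yml`, though the entry point has changed across versions), which is what makes the "out of the box" multi-GPU behavior work.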