Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
Why do you think https://github.com/bupticybee/FastLoRAChat is a good alternative to gpt-j-fine-tuning-example?
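For context, a LoRA-on-top-of-8-bit fine-tuning setup of the kind the title describes usually looks like the sketch below. This assumes the Hugging Face transformers + peft + bitsandbytes stack and hypothetical hyperparameters (rank 8, adapters on the attention q/v projections); gpt-j-fine-tuning-example and FastLoRAChat may each wire the 8-bit compression and adapters up differently.

```python
# Minimal sketch: LoRA fine-tuning on top of 8-bit GPT-J weights.
# Assumes transformers + peft + bitsandbytes; not a drop-in copy of either repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model with its linear layers quantized to 8 bits.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    torch_dtype=torch.float16,
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections; only these small
# adapter matrices are trained, while the 8-bit base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projection names
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The resulting model can then be passed to a standard causal-LM training loop (e.g. transformers' Trainer) on the instruction data of your choice; the comparison between the two repos mostly comes down to how they prepare that data and run this loop.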