Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression