Finetune llama2-70b and codellama on MacBook Air without quantization
Why do you think that https://github.com/haotian-liu/LLaVA is a good alternative to slowllama?