Experimental fork of Facebook's LLaMA model that runs with GPU acceleration on Apple Silicon (M1/M2).
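As a rough illustration of how the GPU acceleration works, the sketch below selects PyTorch's Metal Performance Shaders (MPS) backend when it is available. This is a minimal, hedged example (it assumes PyTorch 1.12 or newer, not this fork's exact code) and falls back to CPU so it also runs on non-Apple hardware.

```python
import torch

# Pick the MPS device on Apple Silicon when available; otherwise use CPU.
# (Assumption: PyTorch >= 1.12, where torch.backends.mps exists.)
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# A tensor op placed on the chosen device; on an M1/M2 with MPS available,
# this matrix multiply runs on the GPU.
x = torch.randn(2, 4, device=device)
w = torch.randn(4, 3, device=device)
y = x @ w

print(tuple(y.shape))
```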
Why do you think https://github.com/oobabooga/one-click-installers is a good alternative to llama-mps?