Experimental fork of Facebook's LLaMA model that runs with GPU acceleration on Apple Silicon (M1/M2)
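As a rough sketch of what "GPU acceleration on Apple Silicon" means here: assuming the fork uses PyTorch (as the upstream LLaMA code does), inference can be moved onto the M1/M2 GPU via PyTorch's Metal Performance Shaders (MPS) backend. The snippet below is illustrative only, not code from this repository.

```python
# Hedged sketch: select the MPS (Apple Silicon GPU) device when PyTorch
# was built with MPS support, otherwise fall back to the CPU.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")  # tensors/ops run on the M1/M2 GPU
else:
    device = torch.device("cpu")  # portable fallback

# Any model or tensor moved to `device` then executes on that backend.
x = torch.ones(2, 2, device=device)
y = x @ x  # a matmul dispatched to the selected device
print(y.device.type)
```

The same `device` object would be passed to `model.to(device)` to run the full model on the GPU; whether a given operator is supported on MPS depends on the installed PyTorch version.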
Why do you think that https://github.com/shawwn/llama-dl is a good alternative to llama-mps?