Experimental fork of Facebook's LLaMA model that runs with GPU acceleration on Apple Silicon (M1/M2)
Why do you think that https://github.com/ggerganov/llama.cpp is a good alternative to llama-mps?