llama-mps vs amx

| | llama-mps | amx |
|---|---|---|
| Mentions | 4 | 19 |
| Stars | 83 | 878 |
| Growth | - | - |
| Activity | 3.8 | 4.1 |
| Last Commit | 9 months ago | 3 months ago |
| Language | Python | C |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama-mps
- llama.cpp now officially supports GPU acceleration
There are currently at least 3 ways to run LLaMA on an M1 with GPU acceleration:
- mlc-llm (pre-built, only 1 model has been ported)
- tinygrad (very memory efficient, not that easy to integrate into other projects)
- llama-mps (original llama codebase + LLaMA-Adapter support)
- LLaMA-7B in Pure C++ with full Apple Silicon support
There is also a GPU-accelerated fork of the original repo:
https://github.com/remixer-dec/llama-mps
- Llama-CPU: Fork of Facebook's LLaMA model to run on CPU
- [D] Tutorial: Run LLaMA on 8gb vram on windows (thanks to bitsandbytes 8bit quantization)
I tried to port the llama-cpu version to a GPU-accelerated MPS version for Macs. It runs, but the outputs are not as good as expected, and it often gives "-1" tokens. Any help and contributions on fixing it are welcome!
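For context, here is a minimal sketch of what running matmuls on the Metal (MPS) backend in PyTorch looks like on Apple Silicon; this is illustrative only and not code from the llama-mps repo:

```python
# Minimal sketch (not llama-mps code): selecting the Metal (MPS) backend in
# PyTorch on Apple Silicon and running a matmul on it.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.randn(1, 4096, device=device)      # dummy activations
w = torch.randn(4096, 4096, device=device)   # dummy weight matrix
y = x @ w                                    # runs on the GPU when device is "mps"
print(y.device)
```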
amx
- Apple Introduces M4 Chip
Apple has the NPU (also called the Apple Neural Engine), which is dedicated hardware for running inference. It can't be used for LLMs at the moment, though; maybe the M4 will be different. They also have a vector processor attached to the performance cluster of the CPU; they call the instruction set for it AMX. I believe that one can be leveraged for faster LLM inference.
https://github.com/corsix/amx
- Optimize sgemm on RISC-V platform
I am talking about the matrix/vector coprocessor (AMX). You can find some reverse-engineered documentation here: https://github.com/corsix/amx
On M3 a single matrix block can achieve ~1 TFLOPS on DGEMM; I assume it will be closer to 4 TFLOPS for SGEMM. The Max variants have two such blocks. I didn't do precise benchmarking myself, but switching Python/R matrix libraries to use Apple's BLAS resulted in a 5-6x perf improvement on matrix-heavy code for me.
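As a rough way to reproduce that observation, here is a small sketch (not from the thread) that checks which BLAS a NumPy build is linked against and times a single-precision matmul; on macOS, a NumPy build linked against Apple's Accelerate framework dispatches this through the AMX blocks:

```python
# Sketch: verify the BLAS backend and estimate SGEMM throughput.
# Look for "accelerate" in the printed config on an Accelerate-linked build.
import time
import numpy as np

np.show_config()

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
c = a @ b                                   # SGEMM dispatched to the configured BLAS
dt = time.perf_counter() - t0
print(f"{2 * n**3 / dt / 1e9:.1f} GFLOPS")  # rough throughput estimate
```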
- Intel AMX
It's really cool. I hope it becomes more common for training/inference/numerics capable accelerators to be included in consumer hardware.
Apple's AMX is really under-documented; while the instructions were reverse engineered, virtually no benchmarks are available comparing current chip generations, models, and variants.
https://github.com/corsix/amx
- Why do x86 processors take up so much energy when compared to ARM?
- Bfloat16 support coming to Apple's Metal and PyTorch [video]
Visible in the unofficial documentation for the AMX instructions too: the bf16 functionality is M2-only - https://github.com/corsix/amx/blob/main/matfp.md
- LLaMA-7B in Pure C++ with full Apple Silicon support
Confusingly, there are two mechanisms for doing matrix operations on the new Apple hardware: AMX (https://github.com/corsix/amx) and the ANE (Apple Neural Engine), which is enabled by CoreML. This code does not run on the Neural Engine, but the author has a branch of his whisper.cpp project which uses it here: https://github.com/ggerganov/whisper.cpp/pull/566 - so it may not be long before we see it applied here as well. All of this is to say that it could get significantly faster if some of this work were handed to the ANE via CoreML.
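For a sense of what "enabled by CoreML" means in practice, here is a hedged sketch (an assumed workflow, not the whisper.cpp PR) of exporting a traced PyTorch module with coremltools so the runtime may schedule it on the Neural Engine; `TinyEncoder` is a hypothetical stand-in model:

```python
# Sketch: convert a traced PyTorch module to Core ML and let the runtime pick
# the compute unit (CPU / GPU / ANE). TinyEncoder is a hypothetical example.
import torch
import coremltools as ct

class TinyEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(80, 384)

    def forward(self, x):
        return torch.relu(self.proj(x))

example = torch.randn(1, 3000, 80)
traced = torch.jit.trace(TinyEncoder().eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="mel", shape=example.shape)],
    compute_units=ct.ComputeUnit.ALL,   # allows scheduling on the Neural Engine
    convert_to="mlprogram",
)
mlmodel.save("tiny_encoder.mlpackage")
```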
- Linux 6.2: The first mainstream Linux kernel for Apple M1 chips arrives
Really? Seems pretty well documented here: https://github.com/corsix/amx
- AMX: The Secret Apple M1 Coprocessor
Article is almost two years old and has a huge correction at the bottom. It's just a proprietary ISA extension; there's even a repo documenting what's been reverse engineered.
- corsix/amx: Apple AMX Instruction Set
- Show HN: Port of OpenAI's Whisper model in C/C++
You are correct, in that those are the four
My understanding is that the AMX is more tightly coupled to the CPU, ultimately being accessible via an instruction set (https://github.com/corsix/amx), and it is useful if you need to do matrix multiplications interleaved with other CPU tasks. A common example would be a VIO loop or something where you want that data in the CPU caches.
The GPU and Neural Engine are not that – they take some time to set up and initialize. They also can parallelize tasks to a much higher degree. The GPU is more generalizable, because you can write compute shaders to do anything in parallel, but it uses a lot of resources. I'll have to check out the PR to see how exactly the MPS shaders match up with the task at hand, because you could also consider writing Metal compute shaders by hand.
I know the least about the ANE, but it has specific hardware for running ML models, and you have to process the weights ahead of time to make sure they are in the right format. It can run ML models very efficiently and is the most battery friendly.
What are some alternatives?
llama - Inference code for Llama models
emacs-pure
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
whisper.cpp - Port of OpenAI's Whisper model in C/C++
awesome-ml - Curated list of useful LLM / Analytics / Datascience resources
sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.
LLaMA_MPS - Run LLaMA inference on Apple Silicon GPUs.
amx-rs - Rust wrapper for Apple Matrix Coprocessor (AMX) instructions
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️
mighty-snitch - noticing and preventing network requests should be easy