amx VS neural-engine

Compare amx vs neural-engine and see how they differ.

amx

Apple AMX Instruction Set (by corsix)

neural-engine

Everything we actually know about the Apple Neural Engine (ANE) (by hollance)
                 amx            neural-engine
Mentions         18             20
Stars            859            1,866
Growth           -              -
Activity         4.1            5.1
Latest commit    2 months ago   about 1 month ago
Language         C              -
License          MIT License    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

amx

Posts with mentions or reviews of amx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-28.
  • Optimize sgemm on RISC-V platform
    6 projects | news.ycombinator.com | 28 Feb 2024
    I am talking about the matrix/vector coprocessor (AMX). You can find some reverse-engineered documentation here: https://github.com/corsix/amx

    On M3 a single matrix block can achieve ~1 TFLOPS on DGEMM; I assume it will be closer to 4 TFLOPS for SGEMM. The Max variants have two such blocks. I didn't do precise benchmarking myself, but switching Python/R matrix libraries to use Apple's BLAS resulted in a 5-6x perf improvement on matrix-heavy code for me.
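
As a rough illustration of the "switch to Apple's BLAS" route mentioned above, here is a minimal Swift sketch that runs DGEMM through the Accelerate framework, which is widely reported to be backed by the AMX blocks on Apple Silicon. The matrix size is arbitrary, the timing is crude, and the integer widths of the CBLAS interface have shifted across SDK versions, so treat this as a sketch rather than a benchmark.

    import Accelerate
    import Foundation

    // Arbitrary size; large enough that the GEMM cost dominates.
    let n = 2048
    let a = [Double](repeating: 1.0, count: n * n)
    let b = [Double](repeating: 2.0, count: n * n)
    var c = [Double](repeating: 0.0, count: n * n)

    let t0 = CFAbsoluteTimeGetCurrent()
    // C = 1.0 * A * B + 0.0 * C, row-major DGEMM via Apple's BLAS.
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                Int32(n), Int32(n), Int32(n),
                1.0, a, Int32(n),
                b, Int32(n),
                0.0, &c, Int32(n))
    let seconds = CFAbsoluteTimeGetCurrent() - t0

    // A square GEMM performs 2*n^3 floating-point operations.
    let gflops = 2.0 * Double(n) * Double(n) * Double(n) / seconds / 1e9
    print("DGEMM \(n)x\(n): \(seconds) s, ~\(Int(gflops)) GFLOPS")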

  • Intel AMX
    4 projects | news.ycombinator.com | 19 Jan 2024
    It's really cool. I hope it becomes more common for training/inference/numerics-capable accelerators to be included in consumer hardware.

    Apple's AMX is really under-documented: while the instructions were reverse engineered, virtually no benchmarks are available comparing current chip generations, models, and variants.

    https://github.com/corsix/amx

  • Why do x86 processors take up so much energy when compared to ARM?
    1 project | /r/hardware | 8 Dec 2023
  • Bfloat16 support coming to Apple's Metal and PyTorch [video]
    1 project | news.ycombinator.com | 3 Jul 2023
    Visible in the unofficial documentation for AMX instructions too - M2 only bf16 functionality - https://github.com/corsix/amx/blob/main/matfp.md
  • LLaMA-7B in Pure C++ with full Apple Silicon support
    19 projects | news.ycombinator.com | 10 Mar 2023
    Confusingly, there are two mechanisms for doing matrix operations on the new Apple hardware: AMX (https://github.com/corsix/amx) and the ANE (Apple Neural Engine), which is enabled by CoreML. This code does not run on the Neural Engine, but the author has a branch of his whisper.cpp project which uses it here: https://github.com/ggerganov/whisper.cpp/pull/566 - so it may not be long before we see it applied here as well. All of this is to say that it could get significantly faster if some of this work were handed to the ANE with CoreML.
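
The Core ML path mentioned above is the only public route to the ANE. Below is a minimal Swift sketch of that path; the model path, input name, and shape are placeholder assumptions, and requesting the ANE does not guarantee Core ML will actually schedule every layer on it.

    import CoreML

    // Core ML is the only public route to the ANE: you request compute units and
    // Core ML decides, per layer, whether the Neural Engine actually runs them.
    func runOnANEIfPossible(modelURL: URL) throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all            // .cpuAndNeuralEngine excludes the GPU

        // modelURL points at a compiled .mlmodelc bundle (hypothetical path).
        let model = try MLModel(contentsOf: modelURL, configuration: config)

        // Hypothetical input name and shape; both depend on the model.
        let features = try MLDictionaryFeatureProvider(dictionary: [
            "input": try MLMultiArray(shape: [1, 3, 224, 224], dataType: .float32)
        ])
        let output = try model.prediction(from: features)
        print(output.featureNames)
    }
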
  • Linux 6.2: The first mainstream Linux kernel for Apple M1 chips arrives
    7 projects | news.ycombinator.com | 20 Feb 2023
    Really? It seems pretty well documented here: https://github.com/corsix/amx
  • AMX: The Secret Apple M1 Coprocessor
    1 project | /r/apple | 14 Dec 2022
    The article is almost two years old and has a huge correction at the bottom. It's just a proprietary ISA extension; there's even a repo documenting what's been reverse engineered.
  • corsix/amx: Apple AMX Instruction Set
    1 project | /r/programming | 9 Dec 2022
  • Show HN: Port of OpenAI's Whisper model in C/C++
    9 projects | news.ycombinator.com | 6 Dec 2022
    You are correct, in that those are the four

    My understanding is that the AMX is more tightly coupled to the CPU, ultimately being accessible via an instruction set (https://github.com/corsix/amx), and it is useful if you need to do matrix multiplications interleaved with other CPU tasks. A common example would be a VIO loop or something where you want that data in the CPU caches.

    The GPU and Neural Engine are not that – they take some time to set up and initialize. They also can parallelize tasks to a much higher degree. The GPU is more generalizable, because you can write compute shaders to do anything in parallel, but it uses a lot of resources. I'll have to check out the PR to see how exactly the MPS shaders match up with the task at hand, because you could also consider writing Metal compute shaders by hand.

    I know the least about the ANE, but it has specific hardware for running ML models, and you have to process the weights ahead of time to make sure they are in the right format. It can run ML models very efficiently and is the most battery-friendly option.
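
For contrast with the CPU-coupled AMX path, here is a minimal Swift sketch of a GPU-side matrix multiply through MetalPerformanceShaders, the "MPS shaders" route mentioned above. The sizes are arbitrary and error handling is elided; the point is mainly the visible setup cost (device, queue, buffers, command buffer) compared with a single BLAS call.

    import Metal
    import MetalPerformanceShaders

    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!

    let n = 1024
    let rowBytes = n * MemoryLayout<Float>.stride
    let descriptor = MPSMatrixDescriptor(rows: n, columns: n,
                                         rowBytes: rowBytes, dataType: .float32)

    // Each matrix is backed by a plain Metal buffer shared with the CPU.
    func makeMatrix() -> MPSMatrix {
        let buffer = device.makeBuffer(length: n * rowBytes, options: .storageModeShared)!
        return MPSMatrix(buffer: buffer, descriptor: descriptor)
    }
    let a = makeMatrix(), b = makeMatrix(), c = makeMatrix()

    // C = A * B on the GPU.
    let matmul = MPSMatrixMultiplication(device: device,
                                         transposeLeft: false, transposeRight: false,
                                         resultRows: n, resultColumns: n, interiorColumns: n,
                                         alpha: 1.0, beta: 0.0)

    let commands = queue.makeCommandBuffer()!
    matmul.encode(commandBuffer: commands, leftMatrix: a, rightMatrix: b, resultMatrix: c)
    commands.commit()
    commands.waitUntilCompleted()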

  • Ask HN: Are there any undocumented ISA extensions used in Linux systems?
    1 project | news.ycombinator.com | 19 Oct 2022
    If someone were to build a Linux system with proprietary ISA extensions, how would they do it given Linux is open source? Are there any examples of this being done? Would it be possible at all?

    I got inspiration from this (https://github.com/corsix/amx) and I wondered if someone has done it before on a Linux-based system. I understand a userspace library could be created to access those instructions from userspace, but how would they then be implemented in the kernel? Through a proprietary kernel module built using a custom compiler? Or is that not needed at all, so the library could just run on the processor taking advantage of the proprietary extensions?

neural-engine

Posts with mentions or reviews of neural-engine. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-28.
  • Optimize sgemm on RISC-V platform
    6 projects | news.ycombinator.com | 28 Feb 2024
    Yep. They have a Neural Engine, separate from the CPU and GPU, that does really fast matmuls: https://github.com/hollance/neural-engine. It's basically completely undocumented.
  • Apple is adding more and more neural engine cores to their products, is there any way to use them for local LLMs?
    2 projects | /r/LocalLLaMA | 7 Jun 2023
    Looks like the ANE ("Apple Neural Engine") cores are powerful but not as flexible/programmable as the GPU cores. There is no sign that LLM inference is possible with them or ever will be unless Apple either opens up the closed ANE software framework for extensibility or extends it to support modern LLMs themselves. I would not hold my breath.
  • Anthropic’s $5B, 4-year plan to take on OpenAI
    6 projects | news.ycombinator.com | 11 Apr 2023
    If Apple were to wake up to what's happening with llama.cpp etc., then I don't see such a big role for paying for remote access to big models via an API.

    Currently a MacBook has a Neural Engine that sits idle 99% of the time and is only suitable for running limited models (poorly documented, with opaque rules about which ops can be accelerated, a black-box compiler [1], and an apparent 3 GB model size limit [2]).

    OTOH you can buy a MacBook with 64 GB of 'unified' memory and a Neural Engine today.

    If you squint a bit and look into the near future, it's not so hard to imagine a future Mx chip with a more capable Neural Engine and yet more RAM, able to run the largest GPT-3-class models locally. (Ideally with better developer tools so other compilers can target the NE.)

    And then imagine it does that while leaving the CPU+GPU mostly free to run apps/games ... the whole experience of using a computer could change radically in that case.

    I find it hard not to think this is coming within 5 years (although equally, I can imagine this is not on Apple's roadmap at all currently)

    [1] https://github.com/hollance/neural-engine

  • Everything we actually know about the Apple Neural Engine (ANE)
    1 project | /r/apple | 26 Mar 2023
    1 project | /r/programming | 25 Mar 2023
  • What we know about the Apple Neural Engine
    1 project | /r/patient_hackernews | 25 Mar 2023
    1 project | /r/hackernews | 25 Mar 2023
  • Everything we know about the Apple Neural Engine (ANE)
    1 project | /r/hypeurls | 25 Mar 2023
    9 projects | news.ycombinator.com | 25 Mar 2023
    My question too. This semi-answer on the page seems to contradict itself (source: https://github.com/hollance/neural-engine/blob/master/docs/p... ):

    "> Can I program the ANE directly?

    Unfortunately not. You can only use the Neural Engine through Core ML at the moment.

    There currently is no public framework for programming the ANE. There are several private, undocumented frameworks but obviously we cannot use them as Apple rejects apps that use private frameworks.

    (Perhaps in the future Apple will provide a public version of AppleNeuralEngine.framework.)"

    The last part links to this bunch of headers:

    https://github.com/nst/iOS-Runtime-Headers/tree/master/Priva...

    So might it be more accurate to say you can program it directly, but you won't end up with something that can be distributed on the App Store?
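
Since the public API gives no direct confirmation that a prediction actually ran on the ANE, a common workaround is a crude latency comparison: run the same model with the ANE allowed and with CPU only, and see whether the numbers diverge. The Swift sketch below assumes you already have a compiled model URL and an input feature provider; Xcode's Core ML performance reports and tools like ANECompat (listed under the alternatives below) are more direct options.

    import CoreML
    import Foundation

    // Crude heuristic: time the same prediction with and without the ANE allowed.
    // A large gap suggests Core ML is really scheduling the model on the ANE.
    func medianLatency(modelURL: URL, units: MLComputeUnits,
                       input: MLFeatureProvider) throws -> Double {
        let config = MLModelConfiguration()
        config.computeUnits = units
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        _ = try model.prediction(from: input)          // warm-up / first-run compilation
        var samples: [Double] = []
        for _ in 0..<20 {
            let start = CFAbsoluteTimeGetCurrent()
            _ = try model.prediction(from: input)
            samples.append(CFAbsoluteTimeGetCurrent() - start)
        }
        return samples.sorted()[samples.count / 2]
    }

    // Usage (model URL and input provider are assumptions):
    // let cpu = try medianLatency(modelURL: url, units: .cpuOnly, input: input)
    // let ane = try medianLatency(modelURL: url, units: .cpuAndNeuralEngine, input: input)
    // print("CPU-only: \(cpu) s, ANE allowed: \(ane) s")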

  • Apple VP Bob Borchers says Apple Silicon changed tech industry by pushing for energy efficiency
    1 project | /r/apple | 2 Mar 2023
    Read between their buzzwords. Apple's Neural Engine does nothing for training. It's purely for inference, and it still requires developers to go through their API. If a model uses a layer type Apple doesn't support, it's back to the CPU/GPU.

What are some alternatives?

When comparing amx and neural-engine you can also consider the following projects:

emacs-pure

Dual-Edge-TPU-Adapter - Dual Edge TPU adapter for using it on a system with a single PCIe port in an M.2 A/B/E/M slot

whisper.cpp - Port of OpenAI's Whisper model in C/C++

pyllms - Minimal Python library to connect to LLMs (OpenAI, Anthropic, AI21, Cohere, Aleph Alpha, HuggingfaceHub, Google PaLM2), with a built-in model performance benchmark.

sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.

ANECompat - A tool which checks compatibility of CoreML model with Apple Neural Engine

pytorch-apple-silicon-benchmarks - Performance of PyTorch on Apple Silicon

llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2

tensorexperiments - Boilerplate for GPU-accelerated TensorFlow and PyTorch code on an M1 MacBook

amx-rs - Rust wrapper for Apple Matrix Coprocessor (AMX) instructions

more-ane-transformers - Run transformers (incl. LLMs) on the Apple Neural Engine.